title | content | commands | url
---|---|---|---|
Chapter 1. Keystone authentication and the Ceph Object Gateway | Chapter 1. Keystone authentication and the Ceph Object Gateway Organizations using OpenStack Keystone to authenticate users can integrate Keystone with the Ceph Object Gateway. This integration enables the gateway to accept a Keystone token, authenticate the user, and create a corresponding Ceph Object Gateway user. When Keystone validates a token, the gateway considers the user authenticated. Benefits: you can manage users with Keystone, and users are created automatically in the Ceph Object Gateway. The Ceph Object Gateway queries Keystone periodically for a list of revoked tokens. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/using_keystone_with_the_ceph_object_gateway_guide/keystone-authentication-and-the-ceph-object-gateway_rgw-keystone |
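As a hedged illustration of the integration described above, the gateway is typically pointed at Keystone through the rgw_keystone_* options in ceph.conf. This is a minimal sketch only; the gateway instance name, Keystone endpoint, credentials, and accepted roles below are placeholder assumptions, not values from this guide.

```bash
# Hypothetical ceph.conf fragment for a gateway instance; every value here
# (section name, URL, user, password, roles) is a placeholder assumption.
cat >> /etc/ceph/ceph.conf <<'EOF'
[client.rgw.gateway-node1]
rgw_keystone_url = http://keystone.example.com:5000
rgw_keystone_api_version = 3
rgw_keystone_admin_user = rgw-admin
rgw_keystone_admin_password = secret
rgw_keystone_admin_domain = Default
rgw_keystone_admin_project = service
rgw_keystone_accepted_roles = admin,member
rgw_keystone_token_cache_size = 500
EOF
# Restart the gateway so the new options take effect
# (the exact unit name varies by release and deployment tool).
systemctl restart ceph-radosgw.target
```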
Appendix B. Sample custom interface template: multiple bonded interfaces | Appendix B. Sample custom interface template: multiple bonded interfaces The following template is a customized version of /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml . It features multiple bonded interfaces to isolate back-end and front-end storage network traffic, along with redundancy for both connections. For more information, see Section 4.3, "Configuring multiple bonded interfaces for Ceph nodes" . It also uses custom bonding options, 'mode=4 lacp_rate=1' ; for more information, see Section 4.3.1, "Configuring bonding module directives" . /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml (custom) | [
"heat_template_version: 2015-04-30 description: > Software Config to drive os-net-config with 2 bonded nics on a bridge with VLANs attached for the ceph storage role. parameters: ControlPlaneIp: default: '' description: IP address/subnet on the ctlplane network type: string ExternalIpSubnet: default: '' description: IP address/subnet on the external network type: string InternalApiIpSubnet: default: '' description: IP address/subnet on the internal API network type: string StorageIpSubnet: default: '' description: IP address/subnet on the storage network type: string StorageMgmtIpSubnet: default: '' description: IP address/subnet on the storage mgmt network type: string TenantIpSubnet: default: '' description: IP address/subnet on the project (tenant) network type: string ManagementIpSubnet: # Only populated when including environments/network-management.yaml default: '' description: IP address/subnet on the management network type: string BondInterfaceOvsOptions: default: 'mode=4 lacp_rate=1' description: The bonding_options string for the bond interface. Set things like lacp=active and/or bond_mode=balance-slb using this option. type: string constraints: - allowed_pattern: \"^((?!balance.tcp).)*USD\" description: | The balance-tcp bond mode is known to cause packet loss and must not be used in BondInterfaceOvsOptions. ExternalNetworkVlanID: default: 10 description: Vlan ID for the external network traffic. type: number InternalApiNetworkVlanID: default: 20 description: Vlan ID for the internal_api network traffic. type: number StorageNetworkVlanID: default: 30 description: Vlan ID for the storage network traffic. type: number StorageMgmtNetworkVlanID: default: 40 description: Vlan ID for the storage mgmt network traffic. type: number TenantNetworkVlanID: default: 50 description: Vlan ID for the project (tenant) network traffic. type: number ManagementNetworkVlanID: default: 60 description: Vlan ID for the management network traffic. type: number ControlPlaneSubnetCidr: # Override this with parameter_defaults default: '24' description: The subnet CIDR of the control plane network. type: string ControlPlaneDefaultRoute: # Override this with parameter_defaults description: The default route of the control plane network. type: string ExternalInterfaceDefaultRoute: # Not used by default in this template default: '10.0.0.1' description: The default route of the external network. type: string ManagementInterfaceDefaultRoute: # Commented out by default in this template default: unset description: The default route of the management network. type: string DnsServers: # Override this with parameter_defaults default: [] description: A list of DNS servers (2 max for some implementations) that are added to resolv.conf. type: comma_delimited_list EC2MetadataIp: # Override this with parameter_defaults description: The IP address of the EC2 metadata server. 
type: string resources: OsNetConfigImpl: type: OS::Heat::StructuredConfig properties: group: os-apply-config config: os_net_config: network_config: - type: interface name: nic1 use_dhcp: false dns_servers: {get_param: DnsServers} addresses: - ip_netmask: list_join: - '/' - - {get_param: ControlPlaneIp} - {get_param: ControlPlaneSubnetCidr} routes: - ip_netmask: 169.254.169.254/32 next_hop: {get_param: EC2MetadataIp} - default: true next_hop: {get_param: ControlPlaneDefaultRoute} - type: ovs_bridge name: br-bond members: - type: linux_bond name: bond1 bonding_options: {get_param: BondInterfaceOvsOptions} members: - type: interface name: nic2 primary: true - type: interface name: nic3 - type: vlan device: bond1 vlan_id: {get_param: StorageNetworkVlanID} addresses: - ip_netmask: {get_param: StorageIpSubnet} - type: ovs_bridge name: br-bond2 members: - type: linux_bond name: bond2 bonding_options: {get_param: BondInterfaceOvsOptions} members: - type: interface name: nic4 primary: true - type: interface name: nic5 - type: vlan device: bond1 vlan_id: {get_param: StorageMgmtNetworkVlanID} addresses: - ip_netmask: {get_param: StorageMgmtIpSubnet} outputs: OS::stack_id: description: The OsNetConfigImpl resource. value: {get_resource: OsNetConfigImpl}"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_an_overcloud_with_containerized_red_hat_ceph/template-multibonded-nics |
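After the overcloud is deployed with this template, the state of each Linux bond can be spot-checked from the Ceph node. A minimal sketch, assuming the bonds are named bond1 and bond2 as in the template above:

```bash
# Inspect LACP (802.3ad, mode=4) negotiation status and member NICs per bond.
cat /proc/net/bonding/bond1
cat /proc/net/bonding/bond2
# Confirm the VLAN interfaces carrying the storage and storage-mgmt networks.
ip -d link show type vlan
```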
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/making-open-source-more-inclusive |
3.3. LVM Logical Volumes | 3.3. LVM Logical Volumes In LVM, a volume group is divided up into logical volumes. There are three types of LVM logical volumes: linear volumes, striped volumes, and mirrored volumes. These are described in the following sections. 3.3.1. Linear Volumes A linear volume aggregates space from one or more physical volumes into one logical volume. For example, if you have two 60GB disks, you can create a 120GB logical volume. The physical storage is concatenated. Creating a linear volume assigns a range of physical extents to an area of a logical volume in order. For example, as shown in Figure 3.2, "Extent Mapping" , logical extents 1 to 99 could map to one physical volume and logical extents 100 to 198 could map to a second physical volume. From the point of view of the application, there is one device that is 198 extents in size. Figure 3.2. Extent Mapping The physical volumes that make up a logical volume do not have to be the same size. Figure 3.3, "Linear Volume with Unequal Physical Volumes" shows volume group VG1 with a physical extent size of 4MB. This volume group includes two physical volumes named PV1 and PV2 . The physical volumes are divided into 4MB units, since that is the extent size. In this example, PV1 is 200 extents in size (800MB) and PV2 is 100 extents in size (400MB). You can create a linear volume of any size between 1 and 300 extents (4MB to 1200MB). In this example, the linear volume named LV1 is 300 extents in size. Figure 3.3. Linear Volume with Unequal Physical Volumes You can configure more than one linear logical volume of whatever size you require from the pool of physical extents. Figure 3.4, "Multiple Logical Volumes" shows the same volume group as in Figure 3.3, "Linear Volume with Unequal Physical Volumes" , but in this case two logical volumes have been carved out of the volume group: LV1 , which is 250 extents in size (1000MB), and LV2 , which is 50 extents in size (200MB). Figure 3.4. Multiple Logical Volumes | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lv_overview |
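As a minimal sketch of the Figure 3.4 example above (the device names /dev/sdb and /dev/sdc are assumptions, not part of the original figure), the volume group and the two linear logical volumes could be created as follows:

```bash
# Initialize two physical volumes of unequal size (hypothetical devices).
pvcreate /dev/sdb /dev/sdc
# Create VG1 with a 4MB physical extent size, matching the example.
vgcreate -s 4M VG1 /dev/sdb /dev/sdc
# Carve two linear logical volumes out of the pool of physical extents:
# LV1 with 250 extents (1000MB) and LV2 with 50 extents (200MB).
lvcreate -l 250 -n LV1 VG1
lvcreate -l 50 -n LV2 VG1
# Show the resulting extent allocation.
lvs -o lv_name,vg_name,seg_count,lv_size VG1
```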
Chapter 9. Adding RHEL compute machines to an OpenShift Container Platform cluster | Chapter 9. Adding RHEL compute machines to an OpenShift Container Platform cluster In OpenShift Container Platform, you can add Red Hat Enterprise Linux (RHEL) compute machines to a user-provisioned infrastructure cluster or an installer-provisioned infrastructure cluster on the x86_64 architecture. You can use RHEL as the operating system only on compute machines. 9.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.14, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines in your cluster if you use a user-provisioned or installer-provisioned infrastructure installation on the x86_64 architecture. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane machines in your cluster. If you choose to use RHEL compute machines in your cluster, you are responsible for all operating system life cycle management and maintenance. You must perform system updates, apply patches, and complete all other required tasks. For installer-provisioned infrastructure clusters, you must manually add RHEL compute machines because automatic scaling in installer-provisioned infrastructure clusters adds Red Hat Enterprise Linux CoreOS (RHCOS) compute machines by default. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. 9.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base OS: RHEL 8.6 and later with "Minimal" installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is not supported. If you have RHEL 7 compute machines that were previously supported in a past OpenShift Container Platform version, you cannot upgrade them to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. See the "Deleting nodes" section for more information. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Installing a RHEL 8 system with FIPS mode enabled in the RHEL 8 documentation.
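A hedged sketch for spot-checking several of these requirements on a candidate RHEL host; the commands simply restate the minimums listed above and are not part of the original procedure:

```bash
# vCPUs (minimum 1) and RAM (minimum 8 GB).
nproc
free -g
# Disk space for the file systems containing /var, /usr/local/bin,
# and the temporary directory (15 GB, 1 GB, and 1 GB respectively).
df -h /var /usr/local/bin /tmp
# NetworkManager 1.0 or later.
NetworkManager --version
# Swap must remain disabled on cluster machines; no output means no swap.
swapon --show
```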
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/ . Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library. Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=TRUE attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster's API service endpoints. Additional resources Deleting nodes 9.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 9.3. Preparing an image for your cloud Amazon Machine Images (AMI) are required because various image formats cannot be used directly by AWS. You can use the AMIs that Red Hat has provided, or you can manually import your own images. The AMI must exist before the EC2 instance can be provisioned. You need a valid AMI ID so that the correct RHEL version for the compute machines is selected. 9.3.1. Listing latest available RHEL images on AWS AMI IDs correspond to native boot images for AWS. Because an AMI must exist before the EC2 instance is provisioned, you need to know the AMI ID before configuration. The AWS Command Line Interface (CLI) is used to list the available Red Hat Enterprise Linux (RHEL) image IDs. Prerequisites You have installed the AWS CLI. Procedure Use this command to list RHEL 8.4 Amazon Machine Images (AMI): $ aws ec2 describe-images --owners 309956199498 \ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \ 2 --filters "Name=name,Values=RHEL-8.4*" \ 3 --region us-east-1 \ 4 --output table 5 1 The --owners command option shows Red Hat images based on the account ID 309956199498 . Important This account ID is required to display AMI IDs for images that are provided by Red Hat.
2 The --query command option sets how the images are sorted with the parameters 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' . In this case, the images are sorted by the creation date, and the table is structured to show the creation date, the name of the image, and the AMI IDs. 3 The --filters command option sets which version of RHEL is shown. In this example, because the filter is set to "Name=name,Values=RHEL-8.4*" , RHEL 8.4 AMIs are shown. 4 The --region command option sets the region where the AMI is stored. 5 The --output command option sets how the results are displayed. Note When creating a RHEL compute machine for AWS, ensure that the AMI is RHEL 8.4 or 8.5. Example output ------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+ Additional resources You can also manually import RHEL images to AWS . 9.4. Preparing the machine to run the playbook Before you can add compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.14 cluster, you must prepare a RHEL 8 machine to run an Ansible playbook that adds the new node to the cluster. This machine is not part of the cluster but must be able to access it. Prerequisites Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Log in as a user with cluster-admin permission. Procedure Ensure that the kubeconfig file for the cluster and the installation program that you used to install the cluster are on the RHEL 8 machine. One way to accomplish this is to use the same machine that you used to install the cluster. Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. Important If you use SSH key-based authentication, you must manage the key with an SSH agent.
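For the SSH key-based case called out in the note above, a minimal sketch of managing the key with an agent (the key path is an assumption):

```bash
# Start an agent for this shell session and load the key used for the RHEL hosts.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
# Verify that the key is loaded before running the playbook.
ssh-add -l
```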
If you have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it: Register the machine with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.14: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.14-for-rhel-8-x86_64-rpms" Install the required packages, including openshift-ansible : # yum install openshift-ansible openshift-clients jq The openshift-ansible package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. The openshift-clients package provides the oc CLI, and the jq package improves the display of JSON output on your command line. 9.5. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories. On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id , if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories. Enable only the repositories required by OpenShift Container Platform 4.14: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.14-for-rhel-8-x86_64-rpms" \ --enable="fast-datapath-for-rhel-8-x86_64-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 9.6. Attaching the role permissions to a RHEL instance in AWS Using the Amazon IAM console in your browser, you can select the needed roles and assign them to a worker node. Procedure From the AWS IAM console, create your desired IAM role . Attach the IAM role to the desired worker node. Additional resources See Required AWS permissions for IAM roles . 9.7. Tagging a RHEL worker node as owned or shared A cluster uses the value of the kubernetes.io/cluster/<clusterid>,Value=(owned|shared) tag to determine the lifetime of the resources related to the AWS cluster.
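The semantics of the owned and shared values are described next. As a hedged sketch, such a tag could be applied to a worker instance with the AWS CLI; the instance ID and cluster infrastructure ID below are placeholders:

```bash
# Tag a RHEL worker instance as owned by the cluster (hypothetical IDs).
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/mycluster-xxxxx,Value=owned
```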
The owned tag value should be added if the resource should be destroyed as part of destroying the cluster. The shared tag value should be added if the resource continues to exist after the cluster has been destroyed. This tagging denotes that the cluster uses this resource, but there is a separate owner for the resource. Procedure With RHEL compute machines, the RHEL worker instance must be tagged with kubernetes.io/cluster/<clusterid>=owned or kubernetes.io/cluster/<clusterid>=shared . Note Do not tag all existing security groups with the kubernetes.io/cluster/<name>,Value=<clusterid> tag, or the Elastic Load Balancing (ELB) will not be able to create a load balancer. 9.8. Adding a RHEL compute machine to your cluster You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.14 cluster. Prerequisites You installed the required packages and performed the necessary configuration on the machine that you run the playbook on. You prepared the RHEL hosts for installation. Procedure Perform the following steps on the machine that you prepared to run the playbook: Create an Ansible inventory file that is named /<path>/inventory/hosts that defines your compute machine hosts and required variables: 1 Specify the user name that runs the Ansible tasks on the remote compute machines. 2 If you do not specify root for the ansible_user , you must set ansible_become to True and assign the user sudo permissions. 3 Specify the path and file name of the kubeconfig file for your cluster. 4 List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the hostname that the cluster uses to access the machine, so set the correct public or private name to access the machine. Navigate to the Ansible playbook directory: $ cd /usr/share/ansible/openshift-ansible Run the playbook: $ ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 9.9. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: $ oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
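One way to keep an eye on incoming requests while the machines join, as a hedged sketch using only standard oc options:

```bash
# Stream CSR changes as the new nodes bootstrap; press Ctrl+C to stop.
oc get csr --watch
# Or list only the requests that are still pending approval.
oc get csr | grep -w Pending
```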
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: $ oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 9.10.
Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. Parameter Description Values ansible_user The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. A user name on the system. The default value is root . ansible_become If the value of ansible_user is not root, you must set ansible_become to True , and the user that you specify as the ansible_user must be configured for passwordless sudo access. True . If the value is not True , do not specify or define this parameter. openshift_kubeconfig_path Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. The path and name of the configuration file. 9.10.1. Optional: Removing RHCOS compute machines from a cluster After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to free up resources. Prerequisites You have added RHEL compute machines to your cluster. Procedure View the list of machines and record the node names of the RHCOS compute machines: $ oc get nodes -o wide For each RHCOS compute machine, delete the node: Mark the node as unschedulable by running the oc adm cordon command: $ oc adm cordon <node_name> 1 1 Specify the node name of one of the RHCOS compute machines. Drain all the pods from the node: $ oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1 1 Specify the node name of the RHCOS compute machine that you isolated. Delete the node: $ oc delete nodes <node_name> 1 1 Specify the node name of the RHCOS compute machine that you drained. Review the list of compute machines to ensure that only the RHEL nodes remain: $ oc get nodes -o wide Remove the RHCOS machines from the load balancer for your cluster's compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines. | [
"aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.4*\" \\ 3 --region us-east-1 \\ 4 --output table 5",
"------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.14-for-rhel-8-x86_64-rpms\"",
"yum install openshift-ansible openshift-clients jq",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.14-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"oc get nodes -o wide",
"oc adm cordon <node_name> 1",
"oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1",
"oc delete nodes <node_name> 1",
"oc get nodes -o wide"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/machine_management/adding-rhel-compute |
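To close the loop on the scale-up procedure above, a hedged verification sketch for after the playbook run and CSR approvals; the node name in the second command is a placeholder:

```bash
# Confirm the new RHEL hosts registered as worker nodes and are schedulable.
oc get nodes -l node-role.kubernetes.io/worker -o wide
# Inspect one of the new nodes for taints, capacity, and kubelet version
# (the node name is hypothetical).
oc describe node mycluster-rhel8-0.example.com
```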
Chapter 8. Deployment [apps/v1] | Chapter 8. Deployment [apps/v1] Description Deployment enables declarative updates for Pods and ReplicaSets. Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object DeploymentSpec is the specification of the desired behavior of the Deployment. status object DeploymentStatus is the most recently observed status of the Deployment. 8.1.1. .spec Description DeploymentSpec is the specification of the desired behavior of the Deployment. Type object Required selector template Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) paused boolean Indicates that the deployment is paused. progressDeadlineSeconds integer The maximum time in seconds for a deployment to make progress before it is considered to be failed. The deployment controller will continue to process failed deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status. Note that progress will not be estimated during the time a deployment is paused. Defaults to 600s. replicas integer Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1. revisionHistoryLimit integer The number of old ReplicaSets to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. selector LabelSelector Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. strategy object DeploymentStrategy describes how to replace existing pods with new ones. template PodTemplateSpec Template describes the pods that will be created. 8.1.2. .spec.strategy Description DeploymentStrategy describes how to replace existing pods with new ones. Type object Property Type Description rollingUpdate object Spec to control the desired behavior of rolling update. type string Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate. Possible enum values: - "Recreate" Kill all existing pods before creating new ones. - "RollingUpdate" Replace the old ReplicaSets by new one using rolling update i.e gradually scale down the old ReplicaSets and scale up the new one. 8.1.3. .spec.strategy.rollingUpdate Description Spec to control the desired behavior of rolling update. Type object Property Type Description maxSurge IntOrString The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). 
This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. Example: when this is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new ReplicaSet can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods. maxUnavailable IntOrString The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This can not be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set to 30%, the old ReplicaSet can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods. 8.1.4. .status Description DeploymentStatus is the most recently observed status of the Deployment. Type object Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this deployment. collisionCount integer Count of hash collisions for the Deployment. The Deployment controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ReplicaSet. conditions array Represents the latest available observations of a deployment's current state. conditions[] object DeploymentCondition describes the state of a deployment at a certain point. observedGeneration integer The generation observed by the deployment controller. readyReplicas integer readyReplicas is the number of pods targeted by this Deployment with a Ready Condition. replicas integer Total number of non-terminated pods targeted by this deployment (their labels match the selector). unavailableReplicas integer Total number of unavailable pods targeted by this deployment. This is the total number of pods that are still required for the deployment to have 100% available capacity. They may either be pods that are running but not yet available or pods that still have not been created. updatedReplicas integer Total number of non-terminated pods targeted by this deployment that have the desired template spec. 8.1.5. .status.conditions Description Represents the latest available observations of a deployment's current state. Type array 8.1.6. .status.conditions[] Description DeploymentCondition describes the state of a deployment at a certain point. Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. lastUpdateTime Time The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of deployment condition. 8.2. API endpoints The following API endpoints are available: /apis/apps/v1/deployments GET : list or watch objects of kind Deployment /apis/apps/v1/watch/deployments GET : watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/apps/v1/namespaces/{namespace}/deployments DELETE : delete collection of Deployment GET : list or watch objects of kind Deployment POST : create a Deployment /apis/apps/v1/watch/namespaces/{namespace}/deployments GET : watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/deployments/{name} DELETE : delete a Deployment GET : read the specified Deployment PATCH : partially update the specified Deployment PUT : replace the specified Deployment /apis/apps/v1/watch/namespaces/{namespace}/deployments/{name} GET : watch changes to an object of kind Deployment. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status GET : read status of the specified Deployment PATCH : partially update status of the specified Deployment PUT : replace status of the specified Deployment 8.2.1. /apis/apps/v1/deployments Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Deployment Table 8.2. HTTP responses HTTP code Response body 200 - OK DeploymentList schema 401 - Unauthorized Empty 8.2.2. /apis/apps/v1/watch/deployments Table 8.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true.
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. Table 8.4. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.3. /apis/apps/v1/namespaces/{namespace}/deployments Table 8.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Deployment Table 8.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server.
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 8.8. Body parameters Parameter Type Description body DeleteOptions schema Table 8.9. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Deployment Table 8.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything.
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.11. HTTP responses HTTP code Response body 200 - OK DeploymentList schema 401 - Unauthorized Empty HTTP method POST Description create a Deployment Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body Deployment schema Table 8.14. HTTP responses HTTP code Response body 200 - OK Deployment schema 201 - Created Deployment schema 202 - Accepted Deployment schema 401 - Unauthorized Empty 8.2.4. /apis/apps/v1/watch/namespaces/{namespace}/deployments Table 8.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, and the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available.
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. Table 8.17. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.5. /apis/apps/v1/namespaces/{namespace}/deployments/{name} Table 8.18. Global path parameters Parameter Type Description name string name of the Deployment namespace string object name and auth scope, such as for teams and projects Table 8.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Deployment Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both.
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.21. Body parameters Parameter Type Description body DeleteOptions schema Table 8.22. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Deployment Table 8.23. HTTP responses HTTP code Response body 200 - OK Deployment schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Deployment Table 8.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.25. Body parameters Parameter Type Description body Patch schema Table 8.26. HTTP responses HTTP code Response body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Deployment Table 8.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.28. Body parameters Parameter Type Description body Deployment schema Table 8.29. HTTP responses HTTP code Response body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty 8.2.6. /apis/apps/v1/watch/namespaces/{namespace}/deployments/{name} Table 8.30. Global path parameters Parameter Type Description name string name of the Deployment namespace string object name and auth scope, such as for teams and projects Table 8.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, and the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true.
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Deployment. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 8.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.7. /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status Table 8.33. Global path parameters Parameter Type Description name string name of the Deployment namespace string object name and auth scope, such as for teams and projects Table 8.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Deployment Table 8.35. HTTP responses HTTP code Response body 200 - OK Deployment schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Deployment Table 8.36.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.37. Body parameters Parameter Type Description body Patch schema Table 8.38. HTTP responses HTTP code Response body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Deployment Table 8.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.40. Body parameters Parameter Type Description body Deployment schema Table 8.41. HTTP responses HTTP code Response body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/workloads_apis/deployment-apps-v1
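To make the limit and continue semantics described above concrete, the following is a minimal sketch of a chunked list of Deployments against this API. It assumes an authenticated kubectl session, a namespace named default, and the jq utility; the page file name is illustrative.

$ kubectl get --raw "/apis/apps/v1/namespaces/default/deployments?limit=2" > page1.json
$ TOKEN=$(jq -r '.metadata.continue' page1.json)   # non-empty when more results exist
$ kubectl get --raw "/apis/apps/v1/namespaces/default/deployments?limit=2&continue=${TOKEN}"

If the token has expired, the server answers with 410 ResourceExpired; per the description above, either restart the list without continue for a consistent result, or resend with the token returned alongside the 410 to continue from the latest snapshot.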
Chapter 8. Web terminal | Chapter 8. Web terminal 8.1. Installing the web terminal You can install the web terminal by using the Web Terminal Operator listed in the OpenShift Container Platform OperatorHub. When you install the Web Terminal Operator, the custom resource definitions (CRDs) that are required for the command line configuration, such as the DevWorkspace CRD, are automatically installed. The web console creates the required resources when you open the web terminal. Prerequisites You are logged into the OpenShift Container Platform web console. You have cluster administrator permissions. Procedure In the Administrator perspective of the web console, navigate to Operators > OperatorHub . Use the Filter by keyword box to search for the Web Terminal Operator in the catalog, and then click the Web Terminal tile. Read the brief description about the Operator on the Web Terminal page, and then click Install . On the Install Operator page, retain the default values for all fields. The fast option in the Update Channel menu enables installation of the latest release of the Web Terminal Operator. The All namespaces on the cluster option in the Installation Mode menu enables the Operator to watch and be available to all namespaces in the cluster. The openshift-operators option in the Installed Namespace menu installs the Operator in the default openshift-operators namespace. The Automatic option in the Approval Strategy menu ensures that future upgrades to the Operator are handled automatically by the Operator Lifecycle Manager. Click Install . In the Installed Operators page, click View Operator to verify that the Operator is listed on the Installed Operators page. Note The Web Terminal Operator installs the DevWorkspace Operator as a dependency. After the Operator is installed, refresh your page to see the command line terminal icon in the masthead of the console. 8.2. Configuring the web terminal You can configure timeout and image settings for the web terminal for your current session. 8.2.1. Configuring the web terminal timeout for a session You can change the default timeout period for the web terminal for your current session. Prerequisites You have access to an OpenShift Container Platform cluster that has the Web Terminal Operator installed. You are logged into the web console. Procedure Click the web terminal icon. Optional: Set the web terminal timeout for the current session: Click Timeout. In the field that appears, enter the timeout value. From the drop-down list, select a timeout interval of Seconds , Minutes , Hours , or Milliseconds . Optional: Select a custom image for the web terminal to use. Click Image. In the field that appears, enter the URL of the image that you want to use. Click Start to begin a terminal instance using the specified timeout setting. 8.2.2. Configuring the web terminal image for a session You can change the default image for the web terminal for your current session. Prerequisites You have access to an OpenShift Container Platform cluster that has the Web Terminal Operator installed. You are logged into the web console. Procedure Click the web terminal icon. Click Image to display advanced configuration options for the web terminal image. Enter the URL of the image that you want to use. Click Start to begin a terminal instance using the specified image setting. 8.3. Using the web terminal You can launch an embedded command line terminal instance in the web console.
This terminal instance is preinstalled with common CLI tools for interacting with the cluster, such as oc , kubectl , odo , kn , tkn , helm , and subctl . It also has the context of the project you are working on and automatically logs you in using your credentials. 8.3.1. Accessing the web terminal After the Web Terminal Operator is installed, you can access the web terminal. After the web terminal is initialized, you can use the preinstalled CLI tools like oc , kubectl , odo , kn , tkn , helm , and subctl in the web terminal. You can re-run commands by selecting them from the list of commands you have run in the terminal. These commands persist across multiple terminal sessions. The web terminal remains open until you close it or until you close the browser window or tab. Prerequisites You have access to an OpenShift Container Platform cluster and are logged into the web console. The Web Terminal Operator is installed on your cluster. Procedure To launch the web terminal, click the command line terminal icon in the masthead of the console. A web terminal instance is displayed in the Command line terminal pane. This instance is automatically logged in with your credentials. If a project has not been selected in the current session, select the project where the DevWorkspace CR must be created from the Project drop-down list. By default, the current project is selected. Note One DevWorkspace CR defines the web terminal of one user. This CR contains details about the user's web terminal status and container image components. The DevWorkspace CR is created only if it does not already exist. The openshift-terminal project is the default project used for cluster administrators. They do not have the option to choose another project. The Web Terminal Operator installs the DevWorkspace Operator as a dependency. Optional: Set the web terminal timeout for the current session: Click Timeout. In the field that appears, enter the timeout value. From the drop-down list, select a timeout interval of Seconds , Minutes , Hours , or Milliseconds . Optional: Select a custom image for the web terminal to use. Click Image. In the field that appears, enter the URL of the image that you want to use. Click Start to initialize the web terminal using the selected project. Click + to open multiple tabs within the web terminal in the console. 8.4. Troubleshooting the web terminal 8.4.1. Web terminal and network policies The web terminal might fail to start if the cluster has network policies configured. To start a web terminal instance, the Web Terminal Operator must communicate with the web terminal's pod to verify it is running, and the OpenShift Container Platform web console needs to send information to automatically log in to the cluster within the terminal. If either step fails, the web terminal fails to start and the terminal panel is in a loading state until a context deadline exceeded error occurs. To avoid this issue, ensure that the network policies for namespaces that are used for terminals allow ingress from the openshift-console and openshift-operators namespaces. The following samples show NetworkPolicy objects for allowing ingress from the openshift-console and openshift-operators namespaces.
Allowing ingress from the openshift-console namespace apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-console spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console podSelector: {} policyTypes: - Ingress Allowing ingress from the openshift-operators namespace apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-operators spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-operators podSelector: {} policyTypes: - Ingress 8.5. Uninstalling the web terminal Uninstalling the Web Terminal Operator does not remove any of the custom resource definitions (CRDs) or managed resources that are created when the Operator is installed. For security purposes, you must manually uninstall these components. By removing these components, you save cluster resources because terminals do not idle when the Operator is uninstalled. Uninstalling the web terminal is a two-step process: Uninstall the Web Terminal Operator and related custom resources (CRs) that were added when you installed the Operator. Uninstall the DevWorkspace Operator and its related custom resources that were added as a dependency of the Web Terminal Operator. 8.5.1. Removing the Web Terminal Operator You can uninstall the web terminal by removing the Web Terminal Operator and custom resources used by the Operator. Prerequisites You have access to an OpenShift Container Platform cluster with cluster administrator permissions. You have installed the oc CLI. Procedure In the Administrator perspective of the web console, navigate to Operators > Installed Operators . Scroll the filter list or type a keyword into the Filter by name box to find the Web Terminal Operator. Click the Options menu for the Web Terminal Operator, and then select Uninstall Operator . In the Uninstall Operator confirmation dialog box, click Uninstall to remove the Operator, Operator deployments, and pods from the cluster. The Operator stops running and no longer receives updates. 8.5.2. Removing the DevWorkspace Operator To completely uninstall the web terminal, you must also remove the DevWorkspace Operator and custom resources used by the Operator. Important The DevWorkspace Operator is a standalone Operator and may be required as a dependency for other Operators installed in the cluster. Follow the steps below only if you are sure that the DevWorkspace Operator is no longer needed. Prerequisites You have access to an OpenShift Container Platform cluster with cluster administrator permissions. You have installed the oc CLI. Procedure Remove the DevWorkspace custom resources used by the Operator, along with any related Kubernetes objects: USD oc delete devworkspaces.workspace.devfile.io --all-namespaces --all --wait USD oc delete devworkspaceroutings.controller.devfile.io --all-namespaces --all --wait Warning If this step is not complete, finalizers make it difficult to fully uninstall the Operator. Remove the CRDs used by the Operator: Warning The DevWorkspace Operator provides custom resource definitions (CRDs) that use conversion webhooks. Failing to remove these CRDs can cause issues in the cluster.
USD oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceroutings.controller.devfile.io USD oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaces.workspace.devfile.io USD oc delete customresourcedefinitions.apiextensions.k8s.io devworkspacetemplates.workspace.devfile.io USD oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceoperatorconfigs.controller.devfile.io Verify that all involved custom resource definitions are removed. The following command should not display any output: USD oc get customresourcedefinitions.apiextensions.k8s.io | grep "devfile.io" Remove the devworkspace-webhook-server deployment, mutating, and validating webhooks: USD oc delete deployment/devworkspace-webhook-server -n openshift-operators USD oc delete mutatingwebhookconfigurations controller.devfile.io USD oc delete validatingwebhookconfigurations controller.devfile.io Note If you remove the devworkspace-webhook-server deployment without removing the mutating and validating webhooks, you cannot use oc exec commands to run commands in a container in the cluster. After you remove the webhooks you can use the oc exec commands again. Remove any remaining services, secrets, and config maps. Depending on the installation, some resources included in the following commands may not exist in the cluster. USD oc delete all --selector app.kubernetes.io/part-of=devworkspace-operator,app.kubernetes.io/name=devworkspace-webhook-server -n openshift-operators USD oc delete serviceaccounts devworkspace-webhook-server -n openshift-operators USD oc delete clusterrole devworkspace-webhook-server USD oc delete clusterrolebinding devworkspace-webhook-server Uninstall the DevWorkspace Operator: In the Administrator perspective of the web console, navigate to Operators > Installed Operators . Scroll the filter list or type a keyword into the Filter by name box to find the DevWorkspace Operator. Click the Options menu for the Operator, and then select Uninstall Operator . In the Uninstall Operator confirmation dialog box, click Uninstall to remove the Operator, Operator deployments, and pods from the cluster. The Operator stops running and no longer receives updates.
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-console spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-operators spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-operators podSelector: {} policyTypes: - Ingress",
"oc delete devworkspaces.workspace.devfile.io --all-namespaces --all --wait",
"oc delete devworkspaceroutings.controller.devfile.io --all-namespaces --all --wait",
"oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceroutings.controller.devfile.io",
"oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaces.workspace.devfile.io",
"oc delete customresourcedefinitions.apiextensions.k8s.io devworkspacetemplates.workspace.devfile.io",
"oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceoperatorconfigs.controller.devfile.io",
"oc get customresourcedefinitions.apiextensions.k8s.io | grep \"devfile.io\"",
"oc delete deployment/devworkspace-webhook-server -n openshift-operators",
"oc delete mutatingwebhookconfigurations controller.devfile.io",
"oc delete validatingwebhookconfigurations controller.devfile.io",
"oc delete all --selector app.kubernetes.io/part-of=devworkspace-operator,app.kubernetes.io/name=devworkspace-webhook-server -n openshift-operators",
"oc delete serviceaccounts devworkspace-webhook-server -n openshift-operators",
"oc delete clusterrole devworkspace-webhook-server",
"oc delete clusterrolebinding devworkspace-webhook-server"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/web_console/web-terminal |
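As a hedged illustration of the network policy troubleshooting guidance above, the following sketch saves each sample NetworkPolicy to a file, applies it to the namespace that hosts the terminal pods, and verifies the result; the file names and the <terminal_namespace> placeholder are assumptions to adapt to your cluster.

$ oc apply -f allow-from-openshift-console.yaml -n <terminal_namespace>
$ oc apply -f allow-from-openshift-operators.yaml -n <terminal_namespace>
$ oc get networkpolicy -n <terminal_namespace>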
Installing Red Hat Developer Hub on OpenShift Dedicated on Google Cloud Platform | Installing Red Hat Developer Hub on OpenShift Dedicated on Google Cloud Platform Red Hat Developer Hub 1.3 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_red_hat_developer_hub_on_openshift_dedicated_on_google_cloud_platform/index |
Chapter 4. Clair on OpenShift Container Platform | Chapter 4. Clair on OpenShift Container Platform To set up Clair v4 (Clair) on a Red Hat Quay deployment on OpenShift Container Platform, it is recommended to use the Red Hat Quay Operator. By default, the Red Hat Quay Operator installs or upgrades a Clair deployment along with your Red Hat Quay deployment and configure Clair automatically. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-quay-operator-overview |
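For orientation, the following is a minimal sketch of how a managed Clair component is typically expressed in a QuayRegistry custom resource; the registry name and namespace are placeholders, and the component list should be verified against the Red Hat Quay Operator documentation for your version.

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: clair
      managed: true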
Chapter 11. Managing Errata | Chapter 11. Managing Errata As a part of Red Hat's quality control and release process, we provide customers with updates for each release of official Red Hat RPMs. Red Hat compiles groups of related packages into an erratum along with an advisory that provides a description of the update. There are three types of advisories (in order of importance): Security Advisory Describes fixed security issues found in the package. The security impact of the issue can be Low, Moderate, Important, or Critical. Bug Fix Advisory Describes bug fixes for the package. Product Enhancement Advisory Describes enhancements and new features added to the package. Red Hat Satellite imports this errata information when synchronizing repositories with Red Hat's Content Delivery Network (CDN). Red Hat Satellite also provides tools to inspect and filter errata, allowing for precise update management. This way, you can select relevant updates and propagate them through Content Views to selected content hosts. Errata are labeled according to the most important advisory type they contain. Therefore, errata labeled as Product Enhancement Advisory can contain only enhancement updates, while Bug Fix Advisory errata can contain both bug fixes and enhancements, and Security Advisory can contain all three types. In Red Hat Satellite, there are two keywords that describe an erratum's relationship to the available content hosts: Applicable An erratum that applies to one or more content hosts, which means it updates packages present on the content host. Although these errata apply to content hosts, until their state changes to Installable , the errata are not ready to be installed. Installable errata are automatically applicable. Installable An erratum that applies to one or more content hosts and is available to install on the content host. Installable errata are available to a content host from life cycle environment and the associated Content View, but are not yet installed. This chapter shows how to manage errata and apply them to either a single host or multiple hosts. 11.1. Inspecting Available Errata The following procedure describes how to view and filter the available errata and how to display metadata of the selected advisory. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Errata to view the list of available errata. Use the filtering tools at the top of the page to limit the number of displayed errata: Select the repository to be inspected from the list. All Repositories is selected by default. The Applicable checkbox is selected by default to view only applicable errata in the selected repository. Select the Installable checkbox to view only errata marked as installable. To search the table of errata, type the query in the Search field in the form of: See Section 11.2, "Parameters Available for Errata Search" for the list of parameters available for search. Find the list of applicable operators in Supported Operators for Granular Search in Administering Red Hat Satellite . Automatic suggestion works as you type. You can also combine queries with the use of and and or operators. For example, to display only security advisories related to the kernel package, type: Press Enter to start the search. Click the Errata ID of the erratum you want to inspect: The Details tab contains the description of the updated package as well as documentation of important fixes and enhancements provided by the update. 
On the Content Hosts tab, you can apply the erratum to selected content hosts as described in Section 11.9, "Applying Errata to Multiple Hosts" . The Repositories tab lists repositories that already contain the erratum. You can filter repositories by the environment and Content View, and search for them by the repository name. You can also use the new Host page to inspect available errata and select errata to install. In the Satellite web UI, navigate to Hosts > All Hosts and select the host you require. If there are errata associated with the host, an Installable Errata card on the new Host page displays an interactive pie chart showing a breakdown of the security advisories, bugfixes, and enhancements. On the new Host page, select the Content tab. On the Content page, select the Errata tab. The page displays installable errata for the chosen host. Click the checkbox for any errata you wish to install. Using the vertical ellipsis icon next to the errata you want to add to the host, select Apply via Katello agent if you have no connectivity to the target host using SSH. Select Apply via Remote Execution to use Remote Execution, or Apply via customized remote execution if you want to customize the remote execution. Click Submit . CLI procedure To view errata that are available for all organizations, enter the following command: To view details of a specific erratum, enter the following command: You can search errata by entering the query with the --search option. For example, to view applicable errata for the selected product that contain the specified bugs, ordered so that the security errata are displayed on top, enter the following command: 11.2. Parameters Available for Errata Search Parameter Description Example bug Search by the Bugzilla number. bug = 1172165 cve Search by the CVE number. cve = CVE-2015-0235 id Search by the errata ID. The auto-suggest system displays a list of available IDs as you type. id = RHBA-2014:2004 issued Search by the issue date. You can specify the exact date, like "Feb16,2015", or use keywords, for example "Yesterday", or "1 hour ago". The time range can be specified with the use of the "<" and ">" operators. issued < "Jan 12,2015" package Search by the full package build name. The auto-suggest system displays a list of available packages as you type. package = glib2-2.22.5-6.el6.i686 package_name Search by the package name. The auto-suggest system displays a list of available packages as you type. package_name = glib2 severity Search by the severity of the issue fixed by the security update. Specify Critical , Important , or Moderate . severity = Critical title Search by the advisory title. title ~ openssl type Search by the advisory type. Specify security , bugfix , or enhancement . type = bugfix updated Search by the date of the last update. You can use the same formats as with the issued parameter. updated = "6 days ago" 11.3. Applying Installable Errata Use the following procedure to view a list of installable errata and select errata to install. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the host you require. If there are errata associated with the host, they are displayed in an Installable Errata card on the new Host page. On the Content tab, Errata displays installable errata for the chosen host. Click the checkbox for any errata you wish to install. Using the vertical ellipsis icon next to the errata you want to add to the host, select Apply via Remote Execution to use Remote Execution.
Select Apply via customized remote execution if you want to customize the remote execution, or select Apply via Katello agent if you have no connectivity to the target host using SSH. Click Submit . 11.4. Subscribing to Errata Notifications You can configure email notifications for Satellite users. Users receive a summary of applicable and installable errata, notifications on Content View promotion or after synchronizing a repository. For more information, see Configuring Email Notifications in the Administering Red Hat Satellite guide. 11.5. Limitations to Repository Dependency Resolution With Satellite, using incremental updates to your Content Views solves some repository dependency problems. However, dependency resolution at a repository level still remains problematic on occasion. When a repository update becomes available with a new dependency, Satellite retrieves the newest version of the package to solve the dependency, even if there are older versions available in the existing repository package. This can create further dependency resolution problems when installing packages. Example scenario A repository on your client has the package example_repository-1.0 with the dependency example_repository-libs-1.0 . The repository also has another package example_tools-1.0 . A security erratum becomes available with the package example_tools-1.1 . The example_tools-1.1 package requires the example_repository-libs-1.1 package as a dependency. After an incremental Content View update, the example_tools-1.1 , example_tools-1.0 , and example_repository-libs-1.1 are now in the repository. The repository also has the packages example_repository-1.0 and example_repository-libs-1.0 . Note that the incremental update to the Content View did not add the package example_repository-1.1 . Because you can install all these packages using yum, no potential problem is detected. However, when the client installs the example_tools-1.1 package, a dependency resolution problem occurs because both example_repository-libs-1.0 and example_repository-libs-1.1 cannot be installed. There is currently no workaround for this problem. The larger the time frame, and the more major Y releases, between the base set of RPMs and the errata being applied, the higher the chance of a dependency resolution problem. 11.6. Creating a Content View Filter for Errata You can use content filters to limit errata. Such filters include: ID - Select specific erratum to allow into your resulting repositories. Date Range - Define a date range and include a set of errata released during that date range. Type - Select the type of errata to include such as bug fixes, enhancements, and security updates. Create a content filter to exclude errata after a certain date. This ensures your production systems in the application life cycle are kept up to date to a certain point. Then you can modify the filter's start date to introduce new errata into your testing environment to test the compatibility of new packages with your application life cycle. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites A Content View with the repositories that contain required errata is created. For more information, see Section 8.1, "Creating a Content View" . Procedure In the Satellite web UI, navigate to Content > Content Views and select a Content View that you want to use for applying errata. Select Yum Content > Filters and click New Filter . In the Name field, enter Errata Filter .
From the Content Type list, select Erratum - Date and Type . From the Inclusion Type list, select Exclude . In the Description field, enter Exclude errata items from YYYY-MM-DD . Click Save . For Errata Type , select the checkboxes of errata types you want to exclude. For example, select the Enhancement and Bugfix checkboxes and clear the Security checkbox to exclude enhancement and bugfix errata after a certain date, but include all the security errata. For Date Type , select one of two checkboxes: Issued On for the issued date of the erratum. Updated On for the date of the erratum's last update. Select the Start Date to exclude all errata on or after the selected date. Leave the End Date field blank. Click Save . Click Publish New Version to publish the resulting repository. Enter Adding errata filter in the Description field. Click Save . When the Content View completes publication, notice the Content column reports a reduced number of packages and errata from the initial repository. This means the filter successfully excluded all non-security errata from the last year. Click the Versions tab. Click Promote to the right of the published version. Select the environments you want to promote the Content View version to. In the Description field, enter the description for promoting. Click Promote Version to promote this Content View version across the required environments. CLI procedure Create a filter for the errata: Create a filter rule to exclude all errata on or after the Start Date that you want to set: Publish the Content View: Promote the Content View to the lifecycle environment so that the included errata are available to that lifecycle environment: 11.7. Adding Errata to an Incremental Content View If errata are available but not installable, you can create an incremental Content View version to add the errata to your content hosts. For example, if the Content View is version 1.0, it becomes Content View version 1.1, and when you publish, it becomes Content View version 2.0. Important If your Content View version is old, you might encounter incompatibilities when incrementally adding enhancement errata. This is because enhancements are typically designed for the most current software in a repository. To use the CLI instead of the Satellite web UI, see CLI procedure . Procedure In the Satellite web UI, navigate to Content > Errata . From the Errata list, click the name of the errata that you want to apply. Select the content hosts that you want to apply the errata to, and click Apply to Hosts . This creates the incremental update to the Content View. If you want to apply the errata to the content host, select the Apply Errata to Content Hosts immediately after publishing checkbox. Click Confirm to apply the errata. CLI procedure List the errata and their corresponding IDs: List the different content-view versions and the corresponding IDs: Apply a single erratum to a content-view version. You can add more IDs in a comma-separated list. 11.8. Applying Errata to a Host Use these procedures to review and apply errata to a host. Prerequisites Synchronize Red Hat Satellite repositories with the latest errata available from Red Hat. For more information, see Section 6.6, "Synchronizing Repositories" . Register the host to an environment and Content View on Satellite Server. For more information, see Registering Hosts in the Managing Hosts guide. Configure the host for remote execution.
For more information about running remote execution jobs, see Configuring and Setting Up Remote Jobs in the Managing Hosts guide. Note If the host is already configured to receive content updates with the deprecated Katello Agent, migrate to remote execution instead. For more information, see Migrating Hosts From Katello Agent to Remote Execution in Managing Hosts . The procedure to apply an erratum to a managed host depends on its operating system. 11.8.1. Applying Errata to a Host running Red Hat Enterprise Linux 7 Use these procedures to review and apply errata to a host running Red Hat Enterprise Linux 7. Procedure In the Satellite web UI, navigate to Hosts > Content Hosts and select the host you want to apply errata to. Navigate to the Errata tab to see the list of errata. Select the errata to apply and click Apply Selected . In the confirmation window, click Apply . After the task to update all packages associated with the selected errata completes, click the Details tab to view the updated packages. CLI procedure List all errata for the host: Apply the most recent erratum to the host. Identify the erratum to apply using the erratum ID. Using Remote Execution Using Katello Agent (deprecated) 11.8.2. Applying Errata to a Host running Red Hat Enterprise Linux 8 Use this procedure to review and apply errata to a host running Red Hat Enterprise Linux 8. CLI procedure On Satellite, list all errata for the host: Find the module stream an erratum belongs to: On the host, update the module stream: 11.9. Applying Errata to Multiple Hosts Use these procedures to review and apply errata to multiple RHEL 7 hosts. Prerequisites Synchronize Red Hat Satellite repositories with the latest errata available from Red Hat. For more information, see Section 6.6, "Synchronizing Repositories" . Register the hosts to an environment and Content View on Satellite Server. For more information, see Registering Hosts in the Managing Hosts guide. Configure the host for remote execution. For more information about running remote execution jobs, see Configuring and Setting Up Remote Jobs in the Managing Hosts guide. Note If the host is already configured to receive content updates with the deprecated Katello Agent, migrate to remote execution instead. For more information, see Migrating Hosts From Katello Agent to Remote Execution in Managing Hosts . Procedure In the Satellite web UI, navigate to Content > Errata . Click the name of an erratum you want to apply. Click the Content Hosts tab. Select the hosts you want to apply errata to and click Apply to Hosts . Click Confirm . CLI procedure List all installable errata: Apply one of the errata to multiple hosts: Using Remote Execution Using Katello Agent (deprecated) Identify the erratum you want to use and list the hosts that this erratum is applicable to: The following Bash script applies an erratum to each host for which this erratum is available: for HOST in $(hammer --csv --csv-separator "|" host list --search "applicable_errata = ERRATUM_ID" --organization "Default Organization" | tail -n+2 | awk -F "|" '{ print $2 }') ; do echo "== Applying to $HOST ==" ; hammer host errata apply --host $HOST --errata-ids ERRATUM_ID1,ERRATUM_ID2 ; done This script identifies all hosts for which ERRATUM_ID is an applicable erratum and then applies the errata to each host. To see if an erratum is applied successfully, find the corresponding task in the output of the following command: View the state of a selected task: 11.10.
Applying Errata to a Host Collection Using Remote Execution Using Katello Agent (deprecated) | [
"parameter operator value",
"type = security and package_name = kernel",
"hammer erratum list",
"hammer erratum info --id erratum_ID",
"hammer erratum list --product-id 7 --search \"bug = 1213000 or bug = 1207972\" --errata-restrict-applicable 1 --order \"type desc\"",
"hammer content-view filter create --name \" Filter Name \" --description \"Exclude errata items from the YYYY-MM-DD \" --content-view \" CV Name \" --organization \" Default Organization \" --type \"erratum\"",
"hammer content-view filter rule create --start-date \" YYYY-MM-DD \" --content-view \" CV Name \" --content-view-filter=\" Filter Name \" --organization \" Default Organization \" --types=security,enhancement,bugfix",
"hammer content-view publish --name \" CV Name \" --organization \" Default Organization \"",
"hammer content-view version promote --content-view \" CV Name \" --organization \" Default Organization \" --to-lifecycle-environment \" Lifecycle Environment Name \"",
"hammer erratum list",
"hammer content-view version list",
"hammer content-view version incremental-update --content-view-version-id 319 --errata-ids 34068b",
"hammer host errata list --host client.example.com",
"hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID1 , ERRATUM_ID2 --search-query \"name = client.example.com\"",
"hammer host errata apply --host \"client.example.com\" --errata-ids ERRATUM_ID1 , ERRATUM_ID2",
"hammer host errata list --host client.example.com",
"hammer erratum info --id ERRATUM_ID",
"yum update Module_Stream_Name",
"hammer erratum list --errata-restrict-installable true --organization \" Default Organization \"",
"hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID --search-query \"applicable_errata = ERRATUM_ID \"",
"hammer host list --search \"applicable_errata = ERRATUM_ID \" --organization \" Default Organization \"",
"for HOST in hammer --csv --csv-separator \"|\" host list --search \"applicable_errata = ERRATUM_ID\" --organization \"Default Organization\" | tail -n+2 | awk -F \"|\" '{ print USD2 }' ; do echo \"== Applying to USDHOST ==\" ; hammer host errata apply --host USDHOST --errata-ids ERRATUM_ID1,ERRATUM_ID2 ; done",
"hammer task list",
"hammer task progress --id task_ID",
"hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID1 , ERRATUM_ID2 ,... --search-query \"host_collection = HOST_COLLECTION_NAME \"",
"hammer host-collection erratum install --errata \" erratum_ID1 , erratum_ID2 ,...\" --name \" host_collection_name \" --organization \" My_Organization \""
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/Managing_Errata_content-management |
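The CLI steps above can be chained into a single script. The following Bash sketch mirrors the hammer commands from this section; the Content View, filter, lifecycle environment, and start date values are placeholder assumptions, and it presumes the hammer CLI is already authenticated against your Satellite Server.

#!/usr/bin/env bash
# Sketch: exclude enhancement and bugfix errata issued on or after a start
# date, then publish and promote the Content View. All names are placeholders.
set -euo pipefail

ORG="Default Organization"
CV="Example Content View"        # placeholder Content View name
ENV="Example Environment"        # placeholder lifecycle environment
START="YYYY-MM-DD"               # replace with the real start date

# Create the exclusion filter on the Content View
hammer content-view filter create \
  --name "errata-by-date" \
  --description "Exclude errata items from ${START}" \
  --content-view "${CV}" --organization "${ORG}" --type "erratum"

# Rule: exclude enhancement and bugfix errata on or after the start date,
# keeping security errata (mirrors the web UI example in this section)
hammer content-view filter rule create \
  --start-date "${START}" \
  --content-view "${CV}" --content-view-filter "errata-by-date" \
  --organization "${ORG}" --types enhancement,bugfix

# Publish the new version and promote it
hammer content-view publish --name "${CV}" --organization "${ORG}"
hammer content-view version promote \
  --content-view "${CV}" --organization "${ORG}" \
  --to-lifecycle-environment "${ENV}"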
Chapter 2. Troubleshooting installation issues | Chapter 2. Troubleshooting installation issues To assist in troubleshooting a failed OpenShift Container Platform installation, you can gather logs from the bootstrap and control plane machines. You can also get debug information from the installation program. If you are unable to resolve the issue using the logs and debug information, see Determining where installation issues occur for component-specific troubleshooting. Note If your OpenShift Container Platform installation fails and the debug output or logs contain network timeouts or other connectivity errors, review the guidelines for configuring your firewall . Gathering logs from your firewall and load balancer can help you diagnose network-related errors. 2.1. Prerequisites You attempted to install an OpenShift Container Platform cluster and the installation failed. 2.2. Gathering logs from a failed installation If you gave an SSH key to your installation program, you can gather data about your failed installation. Note You use a different command to gather logs about an unsuccessful installation than to gather logs from a running cluster. If you must gather logs from a running cluster, use the oc adm must-gather command. Prerequisites Your OpenShift Container Platform installation failed before the bootstrap process finished. The bootstrap node is running and accessible through SSH. The ssh-agent process is active on your computer, and you provided the same SSH key to both the ssh-agent process and the installation program. If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the bootstrap and control plane nodes. Procedure Generate the commands that are required to obtain the installation logs from the bootstrap and control plane machines: If you used installer-provisioned infrastructure, change to the directory that contains the installation program and run the following command: USD ./openshift-install gather bootstrap --dir <installation_directory> 1 1 installation_directory is the directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. For installer-provisioned infrastructure, the installation program stores information about the cluster, so you do not specify the hostnames or IP addresses. If you used infrastructure that you provisioned yourself, change to the directory that contains the installation program and run the following command: USD ./openshift-install gather bootstrap --dir <installation_directory> \ 1 --bootstrap <bootstrap_address> \ 2 --master <master_1_address> \ 3 --master <master_2_address> \ 4 --master <master_3_address> 5 1 For installation_directory , specify the same directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. 2 <bootstrap_address> is the fully qualified domain name or IP address of the cluster's bootstrap machine. 3 4 5 For each control plane, or master, machine in your cluster, replace <master_*_address> with its fully qualified domain name or IP address. Note A default cluster contains three control plane machines. List all of your control plane machines as shown, no matter how many your cluster uses. 
Example output INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here "<installation_directory>/log-bundle-<timestamp>.tar.gz" If you open a Red Hat support case about your installation failure, include the compressed logs in the case. 2.3. Manually gathering logs with SSH access to your host(s) Manually gather logs in situations where must-gather or automated collection methods do not work. Important By default, SSH access to the OpenShift Container Platform nodes is disabled on Red Hat OpenStack Platform (RHOSP)-based installations. Prerequisites You must have SSH access to your host(s). Procedure Collect the bootkube.service service logs from the bootstrap host using the journalctl command by running: USD journalctl -b -f -u bootkube.service Collect the bootstrap host's container logs using the podman logs command. This is shown as a loop to get all of the container logs from the host: USD for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done Alternatively, collect the host's container logs using the tail command by running: # tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log Collect the kubelet.service and crio.service service logs from the master and worker hosts using the journalctl command by running: USD journalctl -b -f -u kubelet.service -u crio.service Collect the master and worker host container logs using the tail command by running: USD sudo tail -f /var/log/containers/* 2.4. Manually gathering logs without SSH access to your host(s) Manually gather logs in situations where must-gather or automated collection methods do not work. If you do not have SSH access to your node, you can access the system's journal to investigate what is happening on your host. Prerequisites Your OpenShift Container Platform installation must be complete. Your API service is still functional. You have system administrator privileges. Procedure Access journald unit logs under /var/log by running: USD oc adm node-logs --role=master -u kubelet Access host file paths under /var/log by running: USD oc adm node-logs --role=master --path=openshift-apiserver 2.5. Getting debug information from the installation program You can use any of the following actions to get debug information from the installation program. Look at debug messages from a past installation in the hidden .openshift_install.log file. For example, enter: USD cat ~/<installation_directory>/.openshift_install.log 1 1 For installation_directory , specify the same directory you specified when you ran ./openshift-install create cluster . Change to the directory that contains the installation program and re-run it with --log-level=debug : USD ./openshift-install create cluster --dir <installation_directory> --log-level debug 1 1 For installation_directory , specify the same directory you specified when you ran ./openshift-install create cluster . 2.6. Reinstalling the OpenShift Container Platform cluster If you are unable to debug and resolve issues in the failed OpenShift Container Platform installation, consider installing a new OpenShift Container Platform cluster. Before starting the installation process again, you must complete a thorough cleanup. For a user-provisioned infrastructure (UPI) installation, you must manually destroy the cluster and delete all associated resources. The following procedure is for an installer-provisioned infrastructure (IPI) installation.
Procedure Destroy the cluster and remove all the resources associated with the cluster, including the hidden installer state files in the installation directory: USD ./openshift-install destroy cluster --dir <installation_directory> 1 1 installation_directory is the directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. Before reinstalling the cluster, delete the installation directory: USD rm -rf <installation_directory> Follow the procedure for installing a new OpenShift Container Platform cluster. Additional resources Installing an OpenShift Container Platform cluster | [
"./openshift-install gather bootstrap --dir <installation_directory> 1",
"./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address> 5",
"INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"",
"journalctl -b -f -u bootkube.service",
"for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done",
"tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log",
"journalctl -b -f -u kubelet.service -u crio.service",
"sudo tail -f /var/log/containers/*",
"oc adm node-logs --role=master -u kubelet",
"oc adm node-logs --role=master --path=openshift-apiserver",
"cat ~/<installation_directory>/.openshift_install.log 1",
"./openshift-install create cluster --dir <installation_directory> --log-level debug 1",
"./openshift-install destroy cluster --dir <installation_directory> 1",
"rm -rf <installation_directory>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/validation_and_troubleshooting/installing-troubleshooting |
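The manual steps in sections 2.4 and 2.5 can be combined into a small collection helper. The following Bash sketch is illustrative only: it assumes oc is logged in with system administrator privileges, and the output directory name is an arbitrary choice, not part of the product tooling.

#!/usr/bin/env bash
# Sketch: bundle installer debug output and node logs into one archive.
# Assumes `oc` is authenticated with administrator privileges; the
# installation directory argument and output path are placeholders.
set -euo pipefail

INSTALL_DIR="${1:?usage: $0 <installation_directory>}"
OUT="install-debug-$(date +%Y%m%d%H%M%S)"
mkdir -p "${OUT}"

# Hidden installer debug log from a past run (section 2.5)
cp "${INSTALL_DIR}/.openshift_install.log" "${OUT}/" 2>/dev/null || true

# Journald unit logs and host file paths without SSH access (section 2.4)
oc adm node-logs --role=master -u kubelet > "${OUT}/master-kubelet.log"
oc adm node-logs --role=master --path=openshift-apiserver > "${OUT}/master-apiserver-paths.log"

tar czf "${OUT}.tar.gz" "${OUT}"
echo "Logs bundled in ${OUT}.tar.gz"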
Chapter 7. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift | Chapter 7. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster. 7.1. Configuring Red Hat OpenShift Container Platform nodes for a Red Hat OpenStack Platform deployment Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment. You can create labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field. For example, the Block Storage service (cinder) has different requirements for each of its services: The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage. The cinder-api service has high network usage due to resource listing requests. The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration, and creating a volume from an image. The cinder-backup service has high memory, network, and CPU requirements. Therefore, you can pin the cinder-api , cinder-volume , and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity. Additional resources Placing pods on specific nodes using node selectors Machine configuration overview Node Feature Discovery Operator 7.2. Creating a storage class You must create a storage class for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end to provide persistent volumes to Red Hat OpenStack Services on OpenShift (RHOSO) pods. Use the Logical Volume Manager (LVM) Storage storage class with RHOSO. You specify this storage class as the cluster storage back end for the RHOSO deployment. Use a storage back end based on SSD or NVMe drives for the storage class. If you are using LVM, you must wait until the LVM Storage Operator announces that the storage is available before creating the control plane. The LVM Storage Operator announces that the cluster and LVMS storage configuration is complete by adding an annotation for the volume group to the worker node object. If you deploy pods before all the control plane nodes are ready, then multiple PVCs and pods are scheduled on the same nodes. To check that the storage is ready, you can query the nodes in your lvmclusters.lvm.topolvm.io object. For example, run the following command if you have three worker nodes and your device class for the LVM Storage Operator is named "local-storage": The storage is ready when this command returns three non-zero values. For more information about how to configure the LVM Storage storage class, see Persistent storage using Logical Volume Manager Storage in the RHOCP Storage guide. 7.3.
Creating the openstack namespace You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment. Prerequisites You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges. Procedure Create the openstack project for the deployed RHOSO environment: Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators: If the security context constraint (SCC) is not "privileged", use the following commands to change it: Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack : 7.4. Providing secure access to the Red Hat OpenStack Services on OpenShift services You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. Warning You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage. Procedure Create a Secret CR file on your workstation, for example, openstack_service_secret.yaml . Add the following initial configuration to openstack_service_secret.yaml : Replace <base64_password> with a 32-character key that is base64 encoded. You can use the following command to manually generate a base64 encoded password: Alternatively, if you are using a Linux workstation and you are generating the Secret CR definition file by using a Bash command such as cat , you can replace <base64_password> with the following command to auto-generate random passwords for each service: Replace <base64_fernet_key> with a fernet key that is base64 encoded. You can use the following command to manually generate the fernet key: Note The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains 32 characters long. Create the Secret CR in the cluster: Verify that the Secret CR is created: | [
"oc get node -l \"topology.topolvm.io/node in (USD(oc get nodes -l node-role.kubernetes.io/worker -o name | cut -d '/' -f 2 | tr '\\n' ',' | sed 's/.\\{1\\}USD//'))\" -o=jsonpath='{.items[*].metadata.annotations.capacity\\.topolvm\\.io/local-storage}' | tr ' ' '\\n'",
"oc new-project openstack",
"oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq { \"kubernetes.io/metadata.name\": \"openstack\", \"pod-security.kubernetes.io/enforce\": \"privileged\", \"security.openshift.io/scc.podSecurityLabelSync\": \"false\" }",
"oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite",
"oc project openstack",
"apiVersion: v1 data: AdminPassword: <base64_password> AodhPassword: <base64_password> AodhDatabasePassword: <base64_password> BarbicanDatabasePassword: <base64_password> BarbicanPassword: <base64_password> BarbicanSimpleCryptoKEK: <base64_fernet_key> CeilometerPassword: <base64_password> CinderDatabasePassword: <base64_password> CinderPassword: <base64_password> DatabasePassword: <base64_password> DbRootPassword: <base64_password> DesignateDatabasePassword: <base64_password> DesignatePassword: <base64_password> GlanceDatabasePassword: <base64_password> GlancePassword: <base64_password> HeatAuthEncryptionKey: <base64_password> HeatDatabasePassword: <base64_password> HeatPassword: <base64_password> IronicDatabasePassword: <base64_password> IronicInspectorDatabasePassword: <base64_password> IronicInspectorPassword: <base64_password> IronicPassword: <base64_password> KeystoneDatabasePassword: <base64_password> ManilaDatabasePassword: <base64_password> ManilaPassword: <base64_password> MetadataSecret: <base64_password> NeutronDatabasePassword: <base64_password> NeutronPassword: <base64_password> NovaAPIDatabasePassword: <base64_password> NovaAPIMessageBusPassword: <base64_password> NovaCell0DatabasePassword: <base64_password> NovaCell0MessageBusPassword: <base64_password> NovaCell1DatabasePassword: <base64_password> NovaCell1MessageBusPassword: <base64_password> NovaPassword: <base64_password> OctaviaDatabasePassword: <base64_password> OctaviaPassword: <base64_password> PlacementDatabasePassword: <base64_password> PlacementPassword: <base64_password> SwiftPassword: <base64_password> kind: Secret metadata: name: osp-secret namespace: openstack type: Opaque",
"echo -n <password> | base64",
"USD(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)",
"python3 -c \"from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))\" | base64",
"oc create -f openstack_service_secret.yaml -n openstack",
"oc describe secret osp-secret -n openstack"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_network_functions_virtualization_environment/assembly_preparing-RHOCP-for-RHOSO |
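The password generation in section 7.4 can be scripted. The following Bash sketch writes an abbreviated Secret using the same generator commands shown above; the key list is deliberately shortened for illustration and must be extended with the remaining service password keys from the example Secret.

#!/usr/bin/env bash
# Sketch: generate base64-encoded passwords for a subset of the osp-secret
# keys. The key list below is abbreviated for illustration; extend it with
# the remaining <base64_password> keys from the example Secret above.
set -euo pipefail

gen_pw() { tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64; }

{
  echo "apiVersion: v1"
  echo "kind: Secret"
  echo "metadata:"
  echo "  name: osp-secret"
  echo "  namespace: openstack"
  echo "type: Opaque"
  echo "data:"
  for key in AdminPassword DatabasePassword DbRootPassword KeystoneDatabasePassword; do
    echo "  ${key}: $(gen_pw)"
  done
  # HeatAuthEncryptionKey must remain a 32-character key
  echo "  HeatAuthEncryptionKey: $(gen_pw)"
  # Fernet key, generated with the command shown in this section
  echo "  BarbicanSimpleCryptoKEK: $(python3 -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode("UTF-8"))' | base64)"
} > openstack_service_secret.yaml

oc create -f openstack_service_secret.yaml -n openstack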
Chapter 5. Migrate a JBoss EAP Application's Maven Project to JBoss EAP 8.0 | Chapter 5. Migrate a JBoss EAP Application's Maven Project to JBoss EAP 8.0 When migrating an application's Maven project that uses JBoss EAP BOMs to manage dependencies to JBoss EAP 8.0, you must update the pom.xml files due to the following significant changes introduced with the JBoss EAP 8.0 BOMs. Note If an application is migrated to JBoss EAP 8.0 without any changes, the application will build with incorrect dependencies and may fail to deploy on JBoss EAP 8.0. 5.1. Renaming of JBoss EAP Jakarta EE 8 The JBoss EAP Jakarta EE 8 BOM has been renamed to JBoss EAP EE and its Maven Coordinates have been changed from org.jboss.bom:jboss-eap-jakartaee8 to org.jboss.bom:jboss-eap-ee . The usage of this BOM in a Maven project ( pom.xml ) may be identified with the following dependency management import: <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-jakartaee8</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> The Maven project ( pom.xml ) should instead import the new "JBoss EAP EE" BOM to its dependency management: <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-ee</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> 5.2. Renaming of JBoss EAP Jakarta EE 8 with tools The JBoss EAP Jakarta EE 8 With Tools BOM has been renamed to JBoss EAP EE with tools , and its Maven Coordinates have been changed from org.jboss.bom:jboss-eap-jakartaee8-with-tools to org.jboss.bom:jboss-eap-ee-with-tools . The usage of this BOM in a Maven project ( pom.xml ) may be identified with the following dependency management import: <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-jakartaee8-with-tools</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> The Maven project ( pom.xml ) must import the new "JBoss EAP EE With Tools" BOM to its dependency management instead: <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-ee-with-tools</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> 5.3. Removal of JBoss EAP Jakarta EE 8 APIs The JBoss EAP Jakarta EE 8 APIs BOMs have been removed in JBoss EAP 8.0 and the new JBoss EAP EE BOM must be used instead.
The usage of this BOM in a Maven project ( pom.xml ) may be identified with the following dependency management import: <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.spec</groupId> <artifactId>jboss-jakartaee-8.0</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>org.jboss.spec</groupId> <artifactId>jboss-jakartaee-web-8.0</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> The Maven project ( pom.xml ) must import the new JBoss EAP EE BOM to its dependency management: <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-ee</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> 5.4. Removal of the JBoss EAP Runtime BOM The JBoss EAP Runtime BOM is no longer distributed with JBoss EAP 8.0 and you must use the new JBoss EAP EE BOM instead. The usage of this BOM in a Maven project ( pom.xml ) may be identified with the following dependency management import: <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>eap-runtime-artifacts</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> The Maven project ( pom.xml ) must import the new JBoss EAP EE BOM to its dependency management instead: <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-ee</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> 5.5. Jakarta EE and JBoss APIs Maven Coordinates changes The following Jakarta EE and JBoss APIs artifacts, which were provided by the JBoss EAP BOMs, have changed Maven Coordinates, or were replaced by other artifacts: com.sun.activation:jakarta.activation org.jboss.spec.javax.annotation:jboss-annotations-api_1.3_spec org.jboss.spec.javax.security.auth.message:jboss-jaspi-api_1.0_spec org.jboss.spec.javax.security.jacc:jboss-jacc-api_1.5_spec org.jboss.spec.javax.batch:jboss-batch-api_1.0_spec org.jboss.spec.javax.ejb:jboss-ejb-api_3.2_spec org.jboss.spec.javax.el:jboss-el-api_3.0_spec org.jboss.spec.javax.enterprise.concurrent:jboss-concurrency-api_1.0_spec org.jboss.spec.javax.faces:jboss-jsf-api_2.3_spec org.jboss.spec.javax.interceptor:jboss-interceptors-api_1.2_spec org.jboss.spec.javax.jms:jboss-jms-api_2.0_spec com.sun.mail:jakarta.mail org.jboss.spec.javax.resource:jboss-connector-api_1.7_spec org.jboss.spec.javax.servlet:jboss-servlet-api_4.0_spec org.jboss.spec.javax.servlet.jsp:jboss-jsp-api_2.3_spec org.apache.taglibs:taglibs-standard-spec org.jboss.spec.javax.transaction:jboss-transaction-api_1.3_spec org.jboss.spec.javax.xml.bind:jboss-jaxb-api_2.3_spec org.jboss.spec.javax.xml.ws:jboss-jaxws-api_2.3_spec javax.jws:jsr181-api org.jboss.spec.javax.websocket:jboss-websocket-api_1.1_spec org.jboss.spec.javax.ws.rs:jboss-jaxrs-api_2.1_spec org.jboss.spec.javax.xml.soap:jboss-saaj-api_1.4_spec org.hibernate:hibernate-core org.hibernate:hibernate-jpamodelgen org.jboss.narayana.xts:jbossxts The Maven Project ( pom.xml ) should update the dependency's Maven Coordinates if the artifact was changed or replaced, or remove the dependency if the artifact is no longer supported. 
<dependencies> <!-- replaces com.sun.activation:jakarta.activation --> <dependency> <groupId>jakarta.activation</groupId> <artifactId>jakarta.activation-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.annotation:jboss-annotations-api_1.3_spec --> <dependency> <groupId>jakarta.annotation</groupId> <artifactId>jakarta.annotation-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.security.auth.message:jboss-jaspi-api_1.0_spec --> <dependency> <groupId>jakarta.authentication</groupId> <artifactId>jakarta.authentication-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.security.jacc:jboss-jacc-api_1.5_spec --> <dependency> <groupId>jakarta.authorization</groupId> <artifactId>jakarta.authorization-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.batch:jboss-batch-api_1.0_spec --> <dependency> <groupId>jakarta.batch</groupId> <artifactId>jakarta.batch-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.ejb:jboss-ejb-api_3.2_spec --> <dependency> <groupId>jakarta.ejb</groupId> <artifactId>jakarta.ejb-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.el:jboss-el-api_3.0_spec --> <dependency> <groupId>org.jboss.spec.jakarta.el</groupId> <artifactId>jboss-el-api_5.0_spec</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.enterprise.concurrent:jboss-concurrency-api_1.0_spec --> <dependency> <groupId>jakarta.enterprise.concurrent</groupId> <artifactId>jakarta.enterprise.concurrent-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.faces:jboss-jsf-api_2.3_spec --> <dependency> <groupId>jakarta.faces</groupId> <artifactId>jakarta.faces-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.interceptor:jboss-interceptors-api_1.2_spec --> <dependency> <groupId>jakarta.interceptor</groupId> <artifactId>jakarta.interceptor-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.jms:jboss-jms-api_2.0_spec --> <dependency> <groupId>jakarta.jms</groupId> <artifactId>jakarta.jms-api</artifactId> </dependency> <!-- replaces com.sun.mail:jakarta.mail --> <dependency> <groupId>jakarta.mail</groupId> <artifactId>jakarta.mail-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.resource:jboss-connector-api_1.7_spec --> <dependency> <groupId>jakarta.resource</groupId> <artifactId>jakarta.resource-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.servlet:jboss-servlet-api_4.0_spec --> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.servlet.jsp:jboss-jsp-api_2.3_spec --> <dependency> <groupId>jakarta.servlet.jsp</groupId> <artifactId>jakarta.servlet.jsp-api</artifactId> </dependency> <!-- replaces org.apache.taglibs:taglibs-standard-spec --> <dependency> <groupId>jakarta.servlet.jsp.jstl</groupId> <artifactId>jakarta.servlet.jsp.jstl-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.transaction:jboss-transaction-api_1.3_spec --> <dependency> <groupId>jakarta.transaction</groupId> <artifactId>jakarta.transaction-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.xml.bind:jboss-jaxb-api_2.3_spec --> <dependency> <groupId>jakarta.xml.bind</groupId> <artifactId>jakarta.xml.bind-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.xml.ws:jboss-jaxws-api_2.3_spec and javax.jws:jsr181-api --> <dependency> <groupId>org.jboss.spec.jakarta.xml.ws</groupId> <artifactId>jboss-jakarta-xml-ws-api_4.0_spec</artifactId> 
</dependency> <!-- replaces org.jboss.spec.javax.websocket:jboss-websocket-api_1.1_spec --> <dependency> <groupId>jakarta.websocket</groupId> <artifactId>jakarta.websocket-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.ws.rs:jboss-jaxrs-api_2.1_spec --> <dependency> <groupId>jakarta.ws.rs</groupId> <artifactId>jakarta.ws.rs-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.xml.soap:jboss-saaj-api_1.4_spec --> <dependency> <groupId>org.jboss.spec.jakarta.xml.soap</groupId> <artifactId>jboss-saaj-api_3.0_spec</artifactId> </dependency> <!-- replaces org.hibernate:hibernate-core --> <dependency> <groupId>org.hibernate.orm</groupId> <artifactId>hibernate-core</artifactId> </dependency> <!-- replaces org.hibernate:hibernate-jpamodelgen --> <dependency> <groupId>org.hibernate.orm</groupId> <artifactId>hibernate-jpamodelgen</artifactId> </dependency> <!-- replaces org.jboss.narayana.xts:jbossxts --> <dependency> <groupId>org.jboss.narayana.xts</groupId> <artifactId>jbossxts-jakarta</artifactId> <classifier>api</classifier> </dependency> </dependencies> 5.6. Removal of JBoss EJB client legacy BOM The JBoss EJB Client Legacy BOM, with Maven groupId:artifactId coordinates org.jboss.eap:wildfly-ejb-client-legacy-bom , is no longer provided with JBoss EAP. You can identify this BOM used as a dependency management import or as a dependency reference in a Maven project pom.xml configuration file as illustrated in the following examples: Example dependency management import <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>wildfly-ejb-client-legacy-bom</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> Example dependency reference <dependencies> ... <dependency> <groupId>org.jboss.bom</groupId> <artifactId>wildfly-ejb-client-legacy-bom</artifactId> <version>USD{version.bom}</version> <type>pom</type> </dependency> ... </dependencies> Stand-alone client Java applications can continue to use the same version of the removed BOM, for example, version 7.4.0.GA , to work with JBoss EAP 8. However, it is recommended to replace the EJB Client Legacy API. For information about configuring an EJB Client, see How to configure an EJB client in EAP 7.1+ / 8.0+ . For information about deprecation of EJB Client Legacy API in JBoss EAP 7.4, see Deprecated in Red Hat JBoss Enterprise Application Platform (EAP) 7 . Important The JBoss EAP 7.4 client and JBoss EAP 8.x server follow different product lifecycles. For more information, see Product lifecycle for JBoss EAP . | [
"<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-jakartaee8</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-ee</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-jakartaee8-with-tools</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-ee-with-tools</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.spec</groupId> <artifactId>jboss-jakartaee-8.0</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>org.jboss.spec</groupId> <artifactId>jboss-jakartaee-web-8.0</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-ee</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>eap-runtime-artifacts</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-ee</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"<dependencies> <!-- replaces com.sun.activation:jakarta.activation --> <dependency> <groupId>jakarta.activation</groupId> <artifactId>jakarta.activation-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.annotation:jboss-annotations-api_1.3_spec --> <dependency> <groupId>jakarta.annotation</groupId> <artifactId>jakarta.annotation-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.security.auth.message:jboss-jaspi-api_1.0_spec --> <dependency> <groupId>jakarta.authentication</groupId> <artifactId>jakarta.authentication-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.security.jacc:jboss-jacc-api_1.5_spec --> <dependency> <groupId>jakarta.authorization</groupId> <artifactId>jakarta.authorization-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.batch:jboss-batch-api_1.0_spec --> <dependency> <groupId>jakarta.batch</groupId> <artifactId>jakarta.batch-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.ejb:jboss-ejb-api_3.2_spec --> <dependency> <groupId>jakarta.ejb</groupId> <artifactId>jakarta.ejb-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.el:jboss-el-api_3.0_spec --> <dependency> <groupId>org.jboss.spec.jakarta.el</groupId> <artifactId>jboss-el-api_5.0_spec</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.enterprise.concurrent:jboss-concurrency-api_1.0_spec --> <dependency> <groupId>jakarta.enterprise.concurrent</groupId> <artifactId>jakarta.enterprise.concurrent-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.faces:jboss-jsf-api_2.3_spec --> <dependency> <groupId>jakarta.faces</groupId> <artifactId>jakarta.faces-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.interceptor:jboss-interceptors-api_1.2_spec --> <dependency> <groupId>jakarta.interceptor</groupId> <artifactId>jakarta.interceptor-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.jms:jboss-jms-api_2.0_spec --> <dependency> <groupId>jakarta.jms</groupId> <artifactId>jakarta.jms-api</artifactId> </dependency> <!-- replaces com.sun.mail:jakarta.mail --> <dependency> <groupId>jakarta.mail</groupId> <artifactId>jakarta.mail-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.resource:jboss-connector-api_1.7_spec --> <dependency> <groupId>jakarta.resource</groupId> <artifactId>jakarta.resource-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.servlet:jboss-servlet-api_4.0_spec --> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.servlet.jsp:jboss-jsp-api_2.3_spec --> <dependency> <groupId>jakarta.servlet.jsp</groupId> <artifactId>jakarta.servlet.jsp-api</artifactId> </dependency> <!-- replaces org.apache.taglibs:taglibs-standard-spec --> <dependency> <groupId>jakarta.servlet.jsp.jstl</groupId> <artifactId>jakarta.servlet.jsp.jstl-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.transaction:jboss-transaction-api_1.3_spec --> <dependency> <groupId>jakarta.transaction</groupId> <artifactId>jakarta.transaction-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.xml.bind:jboss-jaxb-api_2.3_spec --> <dependency> <groupId>jakarta.xml.bind</groupId> <artifactId>jakarta.xml.bind-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.xml.ws:jboss-jaxws-api_2.3_spec and javax.jws:jsr181-api --> <dependency> <groupId>org.jboss.spec.jakarta.xml.ws</groupId> <artifactId>jboss-jakarta-xml-ws-api_4.0_spec</artifactId> 
</dependency> <!-- replaces org.jboss.spec.javax.websocket:jboss-websocket-api_1.1_spec --> <dependency> <groupId>jakarta.websocket</groupId> <artifactId>jakarta.websocket-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.ws.rs:jboss-jaxrs-api_2.1_spec --> <dependency> <groupId>jakarta.ws.rs</groupId> <artifactId>jakarta.ws.rs-api</artifactId> </dependency> <!-- replaces org.jboss.spec.javax.xml.soap:jboss-saaj-api_1.4_spec --> <dependency> <groupId>org.jboss.spec.jakarta.xml.soap</groupId> <artifactId>jboss-saaj-api_3.0_spec</artifactId> </dependency> <!-- replaces org.hibernate:hibernate-core --> <dependency> <groupId>org.hibernate.orm</groupId> <artifactId>hibernate-core</artifactId> </dependency> <!-- replaces org.hibernate:hibernate-jpamodelgen --> <dependency> <groupId>org.hibernate.orm</groupId> <artifactId>hibernate-jpamodelgen</artifactId> </dependency> <!-- replaces org.jboss.narayana.xts:jbossxts --> <dependency> <groupId>org.jboss.narayana.xts</groupId> <artifactId>jbossxts-jakarta</artifactId> <classifier>api</classifier> </dependency> </dependencies>",
"<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>wildfly-ejb-client-legacy-bom</artifactId> <version>USD{version.bom}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"<dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>wildfly-ejb-client-legacy-bom</artifactId> <version>USD{version.bom}</version> <type>pom</type> </dependency> </dependencies>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/migration_guide/migrate-a-jboss-eap-application-s-maven-project-to-jboss-eap-8-0_default |
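Before editing the pom.xml files, it can help to locate the legacy coordinates in the project tree. The following Bash sketch is a suggestion rather than part of the migration tooling; the pattern list is a partial sample taken from the mappings above.

#!/usr/bin/env bash
# Sketch: scan a Maven project for deprecated JBoss EAP 7 BOM and javax API
# coordinates before migrating to the JBoss EAP 8.0 BOMs. The pattern list
# is a partial sample from the mappings in this chapter.
set -euo pipefail

PATTERNS=(
  "jboss-eap-jakartaee8"
  "eap-runtime-artifacts"
  "wildfly-ejb-client-legacy-bom"
  "org.jboss.spec.javax"
)

for p in "${PATTERNS[@]}"; do
  grep -rn --include=pom.xml "${p}" . || true
done

# Optional: check whether legacy javax API artifacts still resolve transitively
mvn -q dependency:tree | grep "org.jboss.spec.javax" || echo "No legacy javax artifacts resolved."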
5.87. gnome-desktop | 5.87. gnome-desktop 5.87.1. RHBA-2012:1352 - gnome-desktop bug fix update Updated gnome-desktop packages that fix a bug are now available. The gnome-desktop package contains an internal library (libgnome-desktop) used to implement some portions of the GNOME desktop, and also some data files and other shared components of the GNOME user environment. Bug Fix BZ# 829891 Previously, when a user hit the system's hot-key (most commonly Fn+F7) to change display configurations, the system could potentially switch to an invalid mode, which would fail to display. With this update, gnome-desktop now selects valid XRandR modes and correctly switching displays with the hot-key works as expected. All users of gnome-desktop are advised to upgrade to these updated packages, which fix this bug. 5.87.2. RHBA-2012:0405 - gnome-desktop bug fix update An updated gnome-desktop package that fixes one bug is now available for Red Hat Enterprise Linux 6. The gnome-desktop package contains an internal library (libgnomedesktop) used to implement some portions of the GNOME desktop, and also some data files and other shared components of the GNOME user environment. Bug Fix BZ# 639732 Previously, due to an object not being destroyed, the Nautilus file manager could consume an excessive amount of memory. Consequently, constantly growing resident memory would slow down the system. The source code has been modified to prevent memory leaks from occurring and Nautilus now consumes a reasonable amount of memory. All users of gnome-desktop are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/gnome-desktop |
Chapter 15. Log Record Fields | Chapter 15. Log Record Fields The following fields can be present in log records exported by the logging subsystem. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings. To search these fields from Elasticsearch and Kibana, use the full dotted field name when searching. For example, with an Elasticsearch /_search URL , to look for a Kubernetes pod name, use /_search/q=kubernetes.pod_name:name-of-my-pod . The top level fields may be present in every record. message The original log entry text, UTF-8 encoded. This field may be absent or empty if a non-empty structured field is present. See the description of structured for more. Data type text Example value HAPPY structured Original log entry as a structured object. This field may be present if the forwarder was configured to parse structured JSON logs. If the original log entry was a valid structured log, this field will contain an equivalent JSON structure. Otherwise this field will be empty or absent, and the message field will contain the original log message. The structured field can have any subfields that are included in the log message; there are no restrictions defined here. Data type group Example value map[message:starting fluentd worker pid=21631 ppid=21618 worker=0 pid:21631 ppid:21618 worker:0] @timestamp A UTC value that marks when the log payload was created or, if the creation time is not known, when the log payload was first collected. The "@" prefix denotes a field that is reserved for a particular use. By default, most tools look for "@timestamp" with ElasticSearch. Data type date Example value 2015-01-24 14:06:05.071000000 Z hostname The name of the host where this log message originated. In a Kubernetes cluster, this is the same as kubernetes.host . Data type keyword ipaddr4 The IPv4 address of the source server. Can be an array. Data type ip ipaddr6 The IPv6 address of the source server, if available. Can be an array. Data type ip level The logging level from various sources, including rsyslog ( severitytext property), a Python logging module, and others. The following values come from syslog.h , and are preceded by their numeric equivalents: 0 = emerg , system is unusable. 1 = alert , action must be taken immediately. 2 = crit , critical conditions. 3 = err , error conditions. 4 = warn , warning conditions. 5 = notice , normal but significant condition. 6 = info , informational. 7 = debug , debug-level messages. The following two values are not part of syslog.h but are widely used: 8 = trace , trace-level messages, which are more verbose than debug messages. 9 = unknown , when the logging system gets a value it does not recognize. Map the log levels or priorities of other logging systems to their nearest match in the preceding list. For example, from python logging , you can match CRITICAL with crit , ERROR with err , and so on. Data type keyword Example value info pid The process ID of the logging entity, if available. Data type keyword service The name of the service associated with the logging entity, if available. For example, syslog's APP-NAME and rsyslog's programname properties are mapped to the service field. Data type keyword | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/logging/cluster-logging-exported-fields
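A hedged example of the dotted-field search described above: the Elasticsearch route host is a placeholder, and bearer-token access is an assumption about how your cluster exposes the log store, not a documented requirement.

# Sketch: query Elasticsearch for records from one pod using the full dotted
# field name. The route host is a placeholder; token access is an assumption.
TOKEN=$(oc whoami -t)
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://<elasticsearch-route>/app-*/_search?q=kubernetes.pod_name:name-of-my-pod&pretty"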
Chapter 4. Integrating Policy Generator | Chapter 4. Integrating Policy Generator By integrating the Policy Generator, you can use it to automatically build Red Hat Advanced Cluster Management for Kubernetes policies. To integrate the Policy Generator, see Policy Generator . For an example of what you can do with the Policy Generator, see Generating a policy that installs the Compliance Operator . 4.1. Policy Generator The Policy Generator is a part of the Red Hat Advanced Cluster Management for Kubernetes application lifecycle subscription Red Hat OpenShift GitOps workflow that generates Red Hat Advanced Cluster Management policies using Kustomize. The Policy Generator builds Red Hat Advanced Cluster Management policies from Kubernetes manifest YAML files, which are provided through a PolicyGenerator manifest YAML file that is used to configure it. The Policy Generator is implemented as a Kustomize generator plug-in. For more information on Kustomize, read the Kustomize documentation . View the following sections for more information: Policy Generator capabilities Policy Generator configuration structure Policy Generator configuration reference table 4.1.1. Policy Generator capabilities The integration of the Policy Generator with the Red Hat Advanced Cluster Management application lifecycle subscription OpenShift GitOps workflow simplifies the distribution of Kubernetes resource objects to managed OpenShift Container Platform clusters, and Kubernetes clusters through Red Hat Advanced Cluster Management policies. Use the Policy Generator to complete the following actions: Convert any Kubernetes manifest files to Red Hat Advanced Cluster Management configuration policies, including manifests that are created from a Kustomize directory. Patch the input Kubernetes manifests before they are inserted into a generated Red Hat Advanced Cluster Management policy. Generate additional configuration policies so you can report on Gatekeeper policy violations through Red Hat Advanced Cluster Management for Kubernetes. Generate policy sets on the hub cluster. 4.1.2. Policy Generator configuration structure The Policy Generator is a Kustomize generator plug-in that is configured with a manifest of the PolicyGenerator kind and policy.open-cluster-management.io/v1 API version. Continue reading to learn about the configuration structure: To use the plug-in, add a generators section in a kustomization.yaml file. View the following example: generators: - policy-generator-config.yaml 1 1 The policy-generator-config.yaml file that is referenced in the example is a YAML file with the instructions of the policies to generate. A simple PolicyGenerator configuration file might resemble the following example: apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: config-data-policies policyDefaults: namespace: policies policySets: [] policies: - name: config-data manifests: - path: configmap.yaml 1 1 The configmap.yaml represents a Kubernetes manifest YAML file to be included in the policy. Alternatively, you can set the path to a Kustomize directory, or a directory with multiple Kubernetes manifest YAML files. View the following example: apiVersion: v1 kind: ConfigMap metadata: name: my-config namespace: default data: key1: value1 key2: value2 Use the object-templates-raw manifest to automatically generate a ConfigurationPolicy resource with the content you add. 
See the following examples: For example, create a manifest file with the following syntax: object-templates-raw: | {{- range (lookup "v1" "ConfigMap" "my-namespace" "").items }} - complianceType: musthave objectDefinition: kind: ConfigMap apiVersion: v1 metadata: name: {{ .metadata.name }} namespace: {{ .metadata.namespace }} labels: i-am-from: template {{- end }} Then create a PolicyGenerator configuration file similar to the following YAML: apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: config-data-policies policyDefaults: namespace: policies policySets: [] policies: - name: config-data manifests: - path: manifest.yaml 1 1 Path to your manifest.yaml file. The generated Policy resource, along with the generated Placement and PlacementBinding might resemble the following example: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement-config-data namespace: policies spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: [] --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-config-data namespace: policies placementRef: apiGroup: cluster.open-cluster-management.io kind: Placement name: placement-config-data subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: config-data --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/description: name: config-data namespace: policies spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: config-data spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 data: key1: value1 key2: value2 kind: ConfigMap metadata: name: my-config namespace: default remediationAction: inform severity: low 4.1.3. Policy Generator configuration reference table All the fields in the policyDefaults section except for namespace can be overridden for each policy, and all the fields in the policySetDefaults section can be overridden for each policy set. Table 4.1. Parameter table Field Optional or required Description apiVersion Required Set the value to policy.open-cluster-management.io/v1 . kind Required Set the value to PolicyGenerator to indicate the type of policy. metadata.name Required The name for identifying the policy resource. placementBindingDefaults.name Optional If multiple policies use the same placement, this name is used to generate a unique name for the resulting PlacementBinding , binding the placement with the array of policies that reference it. policyDefaults Required Any default value listed here is overridden by an entry in the policies array except for namespace . policyDefaults.namespace Required The namespace of all the policies. policyDefaults.complianceType Optional Determines the policy controller behavior when comparing the manifest to objects on the cluster. The values that you can use are musthave , mustonlyhave , or mustnothave . The default value is musthave . policyDefaults.metadataComplianceType Optional Overrides complianceType when comparing the manifest metadata section to objects on the cluster. The values that you can use are musthave , and mustonlyhave . 
The default value is empty ( {} ) to avoid overriding the complianceType for metadata. policyDefaults.categories Optional Array of categories to be used in the policy.open-cluster-management.io/categories annotation. The default value is CM Configuration Management . policyDefaults.controls Optional Array of controls to be used in the policy.open-cluster-management.io/controls annotation. The default value is CM-2 Baseline Configuration . policyDefaults.standards Optional An array of standards to be used in the policy.open-cluster-management.io/standards annotation. The default value is NIST SP 800-53 . policyDefaults.policyAnnotations Optional Annotations that the policy includes in the metadata.annotations section. These annotations are applied to all policies unless specified in the policy. The default value is empty ( {} ). policyDefaults.configurationPolicyAnnotations Optional Key-value pairs of annotations to set on generated configuration policies. For example, you can disable policy templates by defining the following parameter: {"policy.open-cluster-management.io/disable-templates": "true"} . The default value is empty ( {} ). policyDefaults.copyPolicyMetadata Optional Copies the labels and annotations for all policies and adds them to a replica policy. Set to true by default. If set to false , only the policy framework specific policy labels and annotations are copied to the replicated policy. policyDefaults.severity Optional The severity of the policy violation. The default value is low . policyDefaults.disabled Optional Whether the policy is disabled, meaning it is not propagated and reports no status as a result. The default value is false to enable the policy. policyDefaults.remediationAction Optional The remediation mechanism of your policy. The parameter values are enforce and inform . The default value is inform . policyDefaults.namespaceSelector Required for namespaced objects that do not have a namespace specified Determines namespaces in the managed cluster that the object is applied to. The include and exclude parameters accept file path expressions to include and exclude namespaces by name. The matchExpressions and matchLabels parameters specify namespaces to include by label. Read the Kubernetes labels and selectors documentation. The resulting list is compiled by using the intersection of results from all parameters. policyDefaults.evaluationInterval Optional Use the parameters compliant and noncompliant to specify the frequency for a policy to be evaluated when in a particular compliance state. When managed clusters have low CPU resources, the evaluation interval can be increased to reduce CPU usage on the Kubernetes API. These are in the format of durations. For example, "1h25m3s" represents 1 hour, 25 minutes, and 3 seconds. These can also be set to "never" to avoid evaluating the policy after it reaches a particular compliance state. policyDefaults.pruneObjectBehavior Optional Determines whether objects created or monitored by the policy should be deleted when the policy is deleted. Pruning only takes place if the remediation action of the policy has been set to enforce . Example values are DeleteIfCreated , DeleteAll , or None . The default value is None . policyDefaults.recreateOption Optional Describes whether to delete and recreate an object when an update is required. The IfRequired value recreates the object when you update an immutable field. Without dry run update support, the IfRequired value has no effect on clusters.
The Always value always recreates the object if a mismatch is detected. When the remediationAction parameter is set to inform , the RecreateOption value has no effect. The default value is None . policyDefaults.recordDiff Optional Specifies if and where to log the difference between the object on the cluster and the objectDefinition in the policy. Set to Log to log the difference in the controller logs or None to not log the difference. By default, this parameter is empty to not log the difference. policyDefaults.dependencies Optional A list of objects that must be in specific compliance states before this policy is applied. Cannot be specified when policyDefaults.orderPolicies is set to true . policyDefaults.dependencies[].name Required The name of the object being depended on. policyDefaults.dependencies[].namespace Optional The namespace of the object being depended on. The default is the namespace of policies set for the Policy Generator. policyDefaults.dependencies[].compliance Optional The compliance state the object needs to be in. The default value is Compliant . policyDefaults.dependencies[].kind Optional The kind of the object. By default, the kind is set to Policy , but can also be other kinds that have a compliance state, such as ConfigurationPolicy . policyDefaults.dependencies[].apiVersion Optional The API version of the object. The default value is policy.open-cluster-management.io/v1 . policyDefaults.description Optional The description of the policy you want to create. policyDefaults.extraDependencies Optional A list of objects that must be in specific compliance states before this policy is applied. The dependencies that you define are added to each policy template (for example, ConfigurationPolicy ) separately from the dependencies list. Cannot be specified when policyDefaults.orderManifests is set to true . policyDefaults.extraDependencies[].name Required The name of the object being depended on. policyDefaults.extraDependencies[].namespace Optional The namespace of the object being depended on. By default, the value is set to the namespace of policies set for the Policy Generator. policyDefaults.extraDependencies[].compliance Optional The compliance state the object needs to be in. The default value is Compliant . policyDefaults.extraDependencies[].kind Optional The kind of the object. The default value is Policy , but can also be other kinds that have a compliance state, such as ConfigurationPolicy . policyDefaults.extraDependencies[].apiVersion Optional The API version of the object. The default value is policy.open-cluster-management.io/v1 . policyDefaults.ignorePending Optional Bypass compliance status checks when the Policy Generator is waiting for its dependencies to reach their desired states. The default value is false . policyDefaults.orderPolicies Optional Automatically generate dependencies on the policies so they are applied in the order you defined in the policies list. By default, the value is set to false . Cannot be specified at the same time as policyDefaults.dependencies . policyDefaults.orderManifests Optional Automatically generate extraDependencies on policy templates so that they are applied in the order you defined in the manifests list for that policy. Cannot be specified when policyDefaults.consolidateManifests is set to true . Cannot be specified at the same time as policyDefaults.extraDependencies .
policyDefaults.consolidateManifests Optional This determines if a single configuration policy is generated for all the manifests being wrapped in the policy. If set to false , a configuration policy per manifest is generated. The default value is true . policyDefaults.informGatekeeperPolicies (Deprecated) Optional Set informGatekeeperPolicies to false to use Gatekeeper manifests directly without defining them in a configuration policy. When the policy references a violated Gatekeeper policy manifest, an additional configuration policy is generated in order to receive policy violations in Red Hat Advanced Cluster Management. The default value is true . policyDefaults.informKyvernoPolicies Optional When the policy references a Kyverno policy manifest, this determines if an additional configuration policy is generated to receive policy violations in Red Hat Advanced Cluster Management, when the Kyverno policy is violated. The default value is true . policyDefaults.policyLabels Optional Labels that the policy includes in its metadata.labels section. The policyLabels parameter is applied for all policies unless specified in the policy. policyDefaults.policySets Optional Array of policy sets that the policy joins. Policy set details can be defined in the policySets section. When a policy is part of a policy set, a placement binding is not generated for the policy since one is generated for the set. Set policies[].generatePlacementWhenInSet or policyDefaults.generatePlacementWhenInSet to override policyDefaults.policySets . policyDefaults.generatePolicyPlacement Optional Generate placement manifests for policies. Set to true by default. When set to false , the placement manifest generation is skipped, even if a placement is specified. policyDefaults.generatePlacementWhenInSet Optional When a policy is part of a policy set, by default, the generator does not generate the placement for this policy since a placement is generated for the policy set. Set generatePlacementWhenInSet to true to deploy the policy with both policy placement and policy set placement. The default value is false . policyDefaults.placement Optional The placement configuration for the policies. This defaults to a placement configuration that matches all clusters. policyDefaults.placement.name Optional Specify a name to consolidate placements that contain the same cluster label selectors. policyDefaults.placement.labelSelector Optional Specify a placement by defining a cluster label selector using either key:value , or providing a matchExpressions , matchLabels , or both, with appropriate values. See placementPath to specify an existing file. policyDefaults.placement.placementName Optional Define this parameter to use a placement that already exists on the cluster. A Placement is not created, but a PlacementBinding binds the policy to this Placement . policyDefaults.placement.placementPath Optional To reuse an existing placement, specify the path relative to the location of the kustomization.yaml file. If provided, this placement is used by all policies by default. See labelSelector to generate a new Placement . policyDefaults.placement.clusterSelector (Deprecated) Optional PlacementRule is deprecated. Use labelSelector instead to generate a placement. Specify a placement rule by defining a cluster selector using either key:value or by providing matchExpressions , matchLabels , or both, with appropriate values. See placementRulePath to specify an existing file. 
policyDefaults.placement.placementRuleName (Deprecated) Optional PlacementRule is deprecated. Alternatively, use placementName to specify a placement. To use an existing placement rule on the cluster, specify the name for this parameter. A PlacementRule is not created, but a PlacementBinding binds the policy to the existing PlacementRule . policyDefaults.placement.placementRulePath (Deprecated) Optional PlacementRule is deprecated. Alternatively, use placementPath to specify a placement. To reuse an existing placement rule, specify the path relative to the location of the kustomization.yaml file. If provided, this placement rule is used by all policies by default. See clusterSelector to generate a new PlacementRule . policySetDefaults Optional Default values for policy sets. Any default value listed for this parameter is overridden by an entry in the policySets array. policySetDefaults.placement Optional The placement configuration for the policies. This defaults to a placement configuration that matches all clusters. See policyDefaults.placement for a description of this field. policySetDefaults.generatePolicySetPlacement Optional Generate placement manifests for policy sets. Set to true by default. When set to false , the placement manifest generation is skipped, even if a placement is specified. policies Required The list of policies to create along with overrides to either the default values, or the values that are set in policyDefaults . See policyDefaults for additional fields and descriptions. policies[].description Optional The description of the policy you want to create. policies[].name Required The name of the policy to create. policies[].manifests Required The list of Kubernetes object manifests to include in the policy, along with overrides to either the default values, the values set in this policies item, or the values set in policyDefaults . See policyDefaults for additional fields and descriptions. When consolidateManifests is set to true , only complianceType , metadataComplianceType , and recordDiff can be overridden at the policies[].manifests level. policies[].manifests[].path Required Path to a single file, a flat directory of files, or a Kustomize directory relative to the kustomization.yaml file. If the directory is a Kustomize directory, the generator runs Kustomize against the directory before generating the policies. If there is a requirement to process Helm charts for the Kustomize directory, set POLICY_GEN_ENABLE_HELM to true in the environment where the policy generator is running to enable Helm for the Policy Generator. The following manifests are supported: Non-root policy type manifests such as CertificatePolicy , ConfigurationPolicy , or OperatorPolicy , that have a Policy suffix. The manifests are not modified except for patches and are directly added as a policy-templates entry of the Policy resource. Gatekeeper constraints are also directly included if informGatekeeperPolicies is set to false . Manifests containing only an object-templates-raw key. The corresponding value is used directly in a generated ConfigurationPolicy resource without modification, which is added as a policy-templates entry of the Policy resource. For everything else, ConfigurationPolicy objects are generated to wrap the previously mentioned manifests. The resulting ConfigurationPolicy manifest is added as a policy-templates entry of the Policy resource. policies[].manifests[].patches Optional A list of Kustomize patches to apply to the manifest at the path. 
If there are multiple manifests, the patch requires the apiVersion , kind , metadata.name , and metadata.namespace (if applicable) fields to be set so Kustomize can identify the manifest that the patch applies to. If there is a single manifest, the metadata.name and metadata.namespace fields can be patched. policies[].policyLabels Optional Labels that the policy includes in its metadata.labels section. The policyLabels parameter is applied for all policies unless specified in the policy. policySets Optional The list of policy sets to create, along with overrides to either the default values or the values that are set in policySetDefaults . To include a policy in a policy set, use policyDefaults.policySets , policies[].policySets , or policySets.policies . See policySetDefaults for additional fields and descriptions. policySets[].name Required The name of the policy set to create. policySets[].description Optional The description of the policy set to create. policySets[].policies Optional The list of policies to be included in the policy set. If policyDefaults.policySets or policies[].policySets is also specified, the lists are merged. 4.1.4. Additional resources Read Generating a policy to install GitOps Operator . Read Policy set controller for more details. Read Applying Kustomize for more information. Read the Governance documentation for more topics. See an example of a kustomization.yaml file. Refer to the Kubernetes labels and selectors documentation. Refer to Gatekeeper for more details. Refer to the Kustomize documentation . 4.2. Generating a policy that installs the Compliance Operator Generate a policy that installs the Compliance Operator onto your clusters. For an operator that uses the namespaced installation mode, such as the Compliance Operator, an OperatorGroup manifest is also required. Complete the following steps: Create a YAML file with a Namespace , a Subscription , and an OperatorGroup manifest called compliance-operator.yaml . The following example installs these manifests in the openshift-compliance namespace: apiVersion: v1 kind: Namespace metadata: name: openshift-compliance --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: channel: release-0.1 name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace Create a PolicyGenerator configuration file. View the following PolicyGenerator policy example that installs the Compliance Operator on all OpenShift Container Platform managed clusters: apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: install-compliance-operator policyDefaults: namespace: policies placement: labelSelector: matchExpressions: - key: vendor operator: In values: - "OpenShift" policies: - name: install-compliance-operator manifests: - path: compliance-operator.yaml Add the policy generator to your kustomization.yaml file. 
The generators section might resemble the following configuration: generators: - policy-generator-config.yaml As a result, the generated policy resembles the following file: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement-install-compliance-operator namespace: policies spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - key: vendor operator: In values: - OpenShift --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-compliance-operator namespace: policies placementRef: apiGroup: cluster.open-cluster-management.io kind: Placement name: placement-install-compliance-operator subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-compliance-operator --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/description: name: install-compliance-operator namespace: policies spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-compliance-operator spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-compliance - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: channel: release-0.1 name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance remediationAction: enforce severity: low As a result, the generated policy is displayed. 
"generators: - policy-generator-config.yaml 1",
"apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: config-data-policies policyDefaults: namespace: policies policySets: [] policies: - name: config-data manifests: - path: configmap.yaml 1",
"apiVersion: v1 kind: ConfigMap metadata: name: my-config namespace: default data: key1: value1 key2: value2",
"object-templates-raw: | {{- range (lookup \"v1\" \"ConfigMap\" \"my-namespace\" \"\").items }} - complianceType: musthave objectDefinition: kind: ConfigMap apiVersion: v1 metadata: name: {{ .metadata.name }} namespace: {{ .metadata.namespace }} labels: i-am-from: template {{- end }}",
"apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: config-data-policies policyDefaults: namespace: policies policySets: [] policies: - name: config-data manifests: - path: manifest.yaml 1",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement-config-data namespace: policies spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: [] --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-config-data namespace: policies placementRef: apiGroup: cluster.open-cluster-management.io kind: Placement name: placement-config-data subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: config-data --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/description: name: config-data namespace: policies spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: config-data spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 data: key1: value1 key2: value2 kind: ConfigMap metadata: name: my-config namespace: default remediationAction: inform severity: low",
"apiVersion: v1 kind: Namespace metadata: name: openshift-compliance --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: channel: release-0.1 name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: install-compliance-operator policyDefaults: namespace: policies placement: labelSelector: matchExpressions: - key: vendor operator: In values: - \"OpenShift\" policies: - name: install-compliance-operator manifests: - path: compliance-operator.yaml",
"generators: - policy-generator-config.yaml",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement-install-compliance-operator namespace: policies spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - key: vendor operator: In values: - OpenShift --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-compliance-operator namespace: policies placementRef: apiGroup: cluster.open-cluster-management.io kind: Placement name: placement-install-compliance-operator subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-compliance-operator --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/description: name: install-compliance-operator namespace: policies spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-compliance-operator spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-compliance - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: channel: release-0.1 name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - compliance-operator remediationAction: enforce severity: low"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/governance/integrate-policy-generator |
Chapter 6. Troubleshooting DSPA component errors | Chapter 6. Troubleshooting DSPA component errors This table displays common errors found in DataSciencePipelinesApplication (DSPA) components, along with the associated status, message, and proposed solution. The Ready condition type accumulates errors from various DSPA components, providing a status view of the DSPA deployment. Type Status Error message and solution ObjectStorageAvailable Ready False False Error message: Could not connect to Object Store: tls: failed to verify certificate: x509: certificate signed by unknown authority Solution: This issue occurs in clusters that use self-signed certificates with OpenShift AI version 2.9 or later. The data science pipelines manager cannot connect to the object storage because it does not trust the object storage SSL certificate. Therefore, the pipeline server cannot be created. Contact your IT operations administrator to add the relevant Certificate Authority bundle. For more information, see Working with certificates . ObjectStorageAvailable Ready False False Error message: Could not connect to Object Store Deployment for component "ds-pipeline-pipelines-definition" is missing - prerequisite component might not yet be available. Deployment for component "ds-pipeline-persistenceagent-pipelines-definition" is missing - prerequisite component might not yet be available. Deployment for component "ds-pipeline-scheduledworkflow-pipelines-definition" is missing - prerequisite component might not yet be available. Solution: In clusters running OpenShift AI 2.8.x, the data science pipelines manager might fail to connect to the object storage, and the pipeline server might not be created. Ensure that your object store credentials and connection information are accurate, and verify that the object store is accessible from within the data science project's associated OpenShift namespace. One common issue is that the object storage SSL certificate is not trusted, particularly if self-signed certificates are used. Verify and update your object storage credentials, then retry the operation. ObjectStorageAvailable Ready False False Error message: Wrong credentials for Object Storage: Could not connect to (minio-my-project.apps.my-cluster.com), Error: The request signature we calculated does not match the signature you provided. Check your key and signing method. Solution: Provide the correct credentials for your object storage and retry the operation. DatabaseAvailable Ready False False Error message: FailingToDeploy: Dial tcp XXX.XX.XXX.XXX:3306 : i/o timeout Solution: If the issue persists beyond startup, check for network issues or misconfigurations in the database connection settings. DatabaseAvailable Ready False False Error message: Unable to connect to external database: tls: failed to verify certificate: x509: certificate signed by unknown authority Solution: This issue can occur when you use any external database, such as Amazon RDS. The data science pipelines manager cannot connect to the database because it does not trust the database SSL certificate, preventing the creation of the pipeline server. Contact your IT operations administrator to add the relevant certificates. For more information, see Working with certificates . DatabaseAvailable Ready False False Error message: Error 1129: Host 'A.B.C.D' is blocked because of many connection errors. Solution: This issue might occur when using an external database, such as Amazon RDS. Initially, the pipeline server is created successfully. 
However, after some time, the OpenShift AI dashboard displays an "Error displaying pipelines" message, and the DSPA conditions indicate that the host is blocked due to multiple connection errors. For more information on how to resolve this issue for an external Amazon RDS database, see Resolving "Host is blocked because of many connection errors" error in Amazon RDS for MySQL . Note: Clicking this link opens an external website. APIServerReady Ready False False Error message: Route creation failed due to lengthy project name: Route.route.openshift.io is invalid: spec.host exceeds 63 characters. Solution: Ensure that the project name in OpenShift is less than 40 characters. APIServerReady Ready False False Error message: FailingToDeploy: Component replica failed to create. Message: serviceaccount "ds-pipeline-sample" not found. Solution: If the failure persists for more than 25 seconds during DSPA startup, recreate the missing service account. PersistenceAgentReady Ready False False Error message: FailingToDeploy: Component's replica failed to create. Message: serviceaccount "ds-pipeline-persistenceagent-sample" not found. Solution: If the failure persists for more than 25 seconds during DSPA startup, recreate the missing service account. ScheduledWorkflowReady Ready False False Error message: FailingToDeploy: Component's replica failed to create. Message: serviceaccount "ds-pipeline-scheduledworkflow-sample" not found. Solution: If the failure persists for more than 25 seconds during DSPA startup, recreate the missing service account. MLMDProxyReady Ready False False Error message: Deploying: Component [ds-pipeline-scheduledworkflow-sample] is still deploying. Solution: Wait for DSPA startup to complete. If deployment fails after 25 seconds, check the logs for further information. 6.1. Common errors across DSP components The following table lists errors that might occur across multiple DSPA components: Deployment condition and condition type Status Error message and solution Condition: Component Deployment Not Found Condition type: ComponentDeploymentNotFound False Error message: Deployment for component <component> is missing - prerequisite component might not yet be available. Solution: The deployment for the component does not exist. Typically, this issue occurs due to missing deployments or issues that occurred during creation. Condition: Deployment Scaled Down Condition type: MinimumReplicasAvailable False Error message: Deployment for component <component> is scaled down. Solution: The component is unavailable as the deployment replica count is set to zero. Condition: Component Failing to Progress Condition type: FailingToDeploy False Error message: Component <component> has failed to progress. Reason: <progressingCond.Reason>. Message: <progressingCond.Message> Solution: The deployment has stalled due to ProgressDeadlineExceeded or ReplicaSetCreateError issues, or similar. Condition: Replica Creation Failure Condition type: FailingToDeploy False Error message: Component's replica <component> has failed to create. Reason: <replicaFailureCond.Reason>. Message: <replicaFailureCond.Message> Solution: Replica creation has failed, typically due to an error in the replica set or with the service accounts. Condition: Pod-Level Failures Condition type: FailingToDeploy False Error message: Concatenated failure messages for each pod. Solution: Deployment pods are in a failed state. Check the pod logs for further information. 
Condition: Pod in CrashLoopBackOff Condition type: FailingToDeploy False Error message: Component <component> is in CrashLoopBackOff. Message from pod: <crashLoopBackOffMessage> Solution: Pod containers are failing repeatedly, often due to incorrect environment variables or missing service accounts. Condition: Component Deploying (No Errors) Condition type: Deploying False Error message: Component <component> is deploying. Solution: The component deployment process is ongoing with no errors detected. Condition: Component Minimally Available Condition type: MinimumReplicasAvailable True Error message: Component <component> is minimally available. Solution: The component is available, but with only the minimum number of replicas running. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_data_science_pipelines/troubleshooting-dspa-component-errors_ds-pipelines
Chapter 2. Introduction | Chapter 2. Introduction 2.1. Features of ModeShape Tools ModeShape Tools possesses the following functionalities: the ability to publish or upload resources from Eclipse workspaces to ModeShape repositories, and the ability to create and edit Compact Node Type Definition (CND) files using the included CND form-based editor. Note The CND editor does not require a connection to a ModeShape repository or any other JCR repository. ModeShape repositories are persisted within the workspace, from session to session. The ModeShape registry is persisted from session to session. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_2_modeshape_tools/chap-introduction
Chapter 5. Adopting the data plane | Chapter 5. Adopting the data plane Adopting the Red Hat OpenStack Services on OpenShift (RHOSO) data plane involves the following steps: Stop any remaining services on the Red Hat OpenStack Platform (RHOSP) 17.1 control plane. Deploy the required custom resources. Perform a fast-forward upgrade on Compute services from RHOSP 17.1 to RHOSO 18.0. If applicable, adopt Networker nodes to the RHOSO data plane. Warning After the RHOSO control plane manages the newly deployed data plane, you must not re-enable services on the RHOSP 17.1 control plane and data plane. If you re-enable services, workloads are managed by two control planes or two data planes, resulting in data corruption, loss of control of existing workloads, inability to start new workloads, or other issues. 5.1. Stopping infrastructure management and Compute services You must stop cloud Controller nodes, database nodes, and messaging nodes on the Red Hat OpenStack Platform 17.1 control plane. Do not stop nodes that are running the Compute, Storage, or Networker roles on the control plane. The following procedure applies to a single node standalone director deployment. You must remove conflicting repositories and packages from your Compute hosts, so that you can install libvirt packages when these hosts are adopted as data plane nodes, where modular libvirt daemons are no longer running in podman containers. Prerequisites Define the shell variables. Replace the following example values with values that apply to your environment: Replace ["standalone.localdomain"]="192.168.122.100" with the name and IP address of the Compute node. Replace <path_to_SSH_key> with the path to your SSH key. Procedure Remove the conflicting repositories and packages from all Compute hosts: 5.2. Adopting Compute services to the RHOSO data plane Adopt your Compute (nova) services to the Red Hat OpenStack Services on OpenShift (RHOSO) data plane. Prerequisites You have stopped the remaining control plane nodes, repositories, and packages on the Compute service (nova) hosts. For more information, see Stopping infrastructure management and Compute services . You have configured the Ceph back end for the NovaLibvirt service. For more information, see Configuring a Ceph back end . You have configured IP Address Management (IPAM): If neutron-sriov-nic-agent is running on your Compute service nodes, ensure that the physical device mappings match the values that are defined in the OpenStackDataPlaneNodeSet custom resource (CR). For more information, see Pulling the configuration from a director deployment . You have defined the shell variables to run the script that runs the fast-forward upgrade: Replace ["standalone.localdomain"]="192.168.122.100" with the name and IP address of the Compute service node. Note Do not set a value for the CEPH_FSID parameter if the local storage back end is configured by the Compute service for libvirt. The storage back end must match the source cloud storage back end. You cannot change the storage back end during adoption. Procedure Create an SSH authentication secret for the data plane nodes: Replace <path_to_SSH_key> with the path to your SSH key. 
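Optional: Before you create the secret, you can confirm that the key actually grants access to each node. The following loop is a minimal sketch only, not part of the documented procedure; it assumes the EDPM_PRIVATEKEY_PATH variable and the computes map defined in the prerequisites above:

```bash
# Optional sanity check (a sketch, not part of the documented procedure):
# confirm that the private key in EDPM_PRIVATEKEY_PATH can reach every
# node listed in the computes map before storing it in the adoption secret.
for host in "${!computes[@]}"; do
    ssh -o BatchMode=yes -o ConnectTimeout=5 \
        -i "$EDPM_PRIVATEKEY_PATH" root@"${computes[$host]}" hostname \
        || echo "cannot reach ${host} (${computes[$host]})"
done
```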
Generate an SSH key pair and create the nova-migration-ssh-key secret: If you use a local storage back end for libvirt, create a nova-compute-extra-config service to remove pre-fast-forward upgrade workarounds and configure Compute services to use a local storage back end: USD oc apply -f - <<EOF apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 19-nova-compute-cell1-workarounds.conf: | [workarounds] disable_compute_service_check_for_ffu=true EOF Note The nova-cell<X>-compute-config secret is auto-generated for each cell<X> . You must specify values for the nova-cell<X>-compute-config and nova-migration-ssh-key parameters for each custom OpenStackDataPlaneService CR that is related to the Compute service. If TLS Everywhere is enabled, append the following content to the OpenStackDataPlaneService CR: tlsCerts: contents: - dnsnames - ips networks: - ctlplane issuer: osp-rootca-issuer-internal caCerts: combined-ca-bundle edpmServiceType: nova If you use a Ceph back end for libvirt, create a nova-compute-extra-config service to remove pre-fast-forward upgrade workarounds and configure Compute services to use a Ceph back end: USD oc apply -f - <<EOF apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 19-nova-compute-cell1-workarounds.conf: | [workarounds] disable_compute_service_check_for_ffu=true 03-ceph-nova.conf: | [libvirt] images_type=rbd images_rbd_pool=vms images_rbd_ceph_conf=/etc/ceph/ceph.conf images_rbd_glance_store_name=default_backend images_rbd_glance_copy_poll_interval=15 images_rbd_glance_copy_timeout=600 rbd_user=openstack rbd_secret_uuid=USDCEPH_FSID EOF The resources in the ConfigMap contain cell-specific configurations. Create a secret for the subscription manager: Replace <subscription_manager_username> with the applicable user name. Replace <subscription_manager_password> with the applicable password. Create a secret for the Red Hat registry: Replace <registry_username> with the applicable user name. Replace <registry_password> with the applicable password. 
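Optional: At this point, four adoption-related secrets should exist. The following loop is a quick sanity check, not part of the documented procedure; the secret names match the ones created in the previous steps:

```bash
# Optional sanity check (a sketch, not part of the documented procedure):
# confirm that the secrets created above exist in the openstack namespace
# before you deploy the OpenStackDataPlaneNodeSet CR.
for secret in dataplane-adoption-secret nova-migration-ssh-key \
              subscription-manager redhat-registry; do
    oc get secret "$secret" -n openstack >/dev/null 2>&1 \
        || echo "missing secret: $secret"
done
```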
Deploy the OpenStackDataPlaneNodeSet CR: USD oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack spec: tlsEnabled: false 1 networkAttachments: - ctlplane preProvisioned: true services: - redhat - bootstrap - download-cache - configure-network - validate-network - install-os - configure-os - ssh-known-hosts - run-os - reboot-os - install-certs - libvirt - nova - ovn - neutron-metadata - telemetry env: - name: ANSIBLE_CALLBACKS_ENABLED value: "profile_tasks" - name: ANSIBLE_FORCE_COLOR value: "True" nodes: standalone: hostName: standalone 2 ansible: ansibleHost: USD{computes[standalone.localdomain]} networks: - defaultRoute: true fixedIP: USD{computes[standalone.localdomain]} name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 nodeTemplate: ansibleSSHPrivateKeySecret: dataplane-adoption-secret ansible: ansibleUser: root ansibleVarsFrom: - secretRef: name: subscription-manager - secretRef: name: redhat-registry ansibleVars: rhc_release: 9.2 rhc_repositories: - {name: "*", state: disabled} - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled} - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled} - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled} - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled} - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled} - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled} edpm_bootstrap_release_version_package: [] # edpm_network_config # Default nic config template for a EDPM node # These vars are edpm_network_config role vars edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} edpm_network_config_hide_sensitive_logs: false # # These vars are for the network config templates themselves and are # considered EDPM network defaults. 
neutron_physical_bridge_name: br-ctlplane neutron_public_interface_name: eth0 # edpm_nodes_validation edpm_nodes_validation_validate_controllers_icmp: false edpm_nodes_validation_validate_gateway_icmp: false # edpm ovn-controller configuration edpm_ovn_bridge_mappings: <bridge_mappings> 3 edpm_ovn_bridge: br-int edpm_ovn_encap_type: geneve ovn_monitor_all: true edpm_ovn_remote_probe_interval: 60000 edpm_ovn_ofctrl_wait_before_clear: 8000 timesync_ntp_servers: - hostname: clock.redhat.com - hostname: clock2.redhat.com edpm_bootstrap_command: | # FIXME: perform dnf upgrade for other packages in EDPM ansible # here we only ensuring that decontainerized libvirt can start dnf -y upgrade openstack-selinux rm -f /run/virtlogd.pid gather_facts: false # edpm firewall, change the allowed CIDR if needed edpm_sshd_configure_firewall: true edpm_sshd_allowed_ranges: ['192.168.122.0/24'] # Do not attempt OVS major upgrades here edpm_ovs_packages: - openvswitch3.3 EOF 1 If TLS Everywhere is enabled, change spec:tlsEnabled to true . 2 If your deployment has a custom DNS Domain, modify the spec:nodes:[NODE NAME]:hostName to use fqdn for the node. 3 Replace <bridge_mappings> with the value of the bridge mappings in your configuration, for example, "datacentre:br-ctlplane" . Ensure that you use the same ovn-controller settings in the OpenStackDataPlaneNodeSet CR that you used in the Compute service nodes before adoption. This configuration is stored in the external_ids column in the Open_vSwitch table in the Open vSwitch database: Replace <bridge_mappings> with the value of the bridge mappings in your configuration, for example, "datacentre:br-ctlplane" . If you use a Ceph back end for Block Storage service (cinder), prepare the adopted data plane workloads: USD oc patch osdpns/openstack --type=merge --patch " spec: services: - redhat - bootstrap - download-cache - configure-network - validate-network - install-os - configure-os - ssh-known-hosts - run-os - reboot-os - ceph-client - install-certs - ovn - neutron-metadata - libvirt - nova - telemetry nodeTemplate: extraMounts: - extraVolType: Ceph volumes: - name: ceph secret: secretName: ceph-conf-files mounts: - name: ceph mountPath: "/etc/ceph" readOnly: true " Note Ensure that you use the same list of services from the original OpenStackDataPlaneNodeSet CR, except for the inserted ceph-client service. Optional: Enable neutron-sriov-nic-agent in the OpenStackDataPlaneNodeSet CR: USD oc patch openstackdataplanenodeset openstack --type='json' --patch='[ { "op": "add", "path": "/spec/services/-", "value": "neutron-sriov" }, { "op": "add", "path": "/spec/nodeTemplate/ansible/ansibleVars/edpm_neutron_sriov_agent_SRIOV_NIC_physical_device_mappings", "value": "dummy_sriov_net:dummy-dev" }, { "op": "add", "path": "/spec/nodeTemplate/ansible/ansibleVars/edpm_neutron_sriov_agent_SRIOV_NIC_resource_provider_bandwidths", "value": "dummy-dev:40000000:40000000" }, { "op": "add", "path": "/spec/nodeTemplate/ansible/ansibleVars/edpm_neutron_sriov_agent_SRIOV_NIC_resource_provider_hypervisors", "value": "dummy-dev:standalone.localdomain" } ]' Optional: Enable neutron-dhcp in the OpenStackDataPlaneNodeSet CR: USD oc patch openstackdataplanenodeset openstack --type='json' --patch='[ { "op": "add", "path": "/spec/services/-", "value": "neutron-dhcp" }]' Note To use neutron-dhcp with OVN for the Bare Metal Provisioning service (ironic), you must set the disable_ovn_dhcp_for_baremetal_ports configuration option for the Networking service (neutron) to true . 
You can set this configuration in the NeutronAPI spec: .. spec: serviceUser: neutron ... customServiceConfig: | [DEFAULT] dhcp_agent_notification = True [ovn] disable_ovn_dhcp_for_baremetal_ports = true Run the pre-adoption validation: Create the validation service: USD oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: pre-adoption-validation spec: playbook: osp.edpm.pre_adoption_validation EOF Create an OpenStackDataPlaneDeployment CR that runs only the validation: USD oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-pre-adoption spec: nodeSets: - openstack servicesOverride: - pre-adoption-validation EOF When the validation is finished, confirm that the status of the Ansible EE pods is Completed : Wait for the deployment to reach the Ready status: Important If any openstack-pre-adoption validations fail, you must review the Ansible logs to determine which ones were unsuccessful, and then try the following troubleshooting options: If the hostname validation failed, check that the hostname of the data plane node is correctly listed in the OpenStackDataPlaneNodeSet CR. If the kernel argument check failed, ensure that the kernel argument configuration in the edpm_kernel_args and edpm_kernel_hugepages variables in the OpenStackDataPlaneNodeSet CR is the same as the kernel argument configuration that you used in the Red Hat OpenStack Platform (RHOSP) 17.1 node. If the tuned profile check failed, ensure that the edpm_tuned_profile variable in the OpenStackDataPlaneNodeSet CR is configured to use the same profile as the one set on the RHOSP 17.1 node. Remove the remaining director services: Create an OpenStackDataPlaneService CR to clean up the data plane services you are adopting: USD oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: tripleo-cleanup spec: playbook: osp.edpm.tripleo_cleanup EOF Create the OpenStackDataPlaneDeployment CR to run the clean-up: USD oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: tripleo-cleanup spec: nodeSets: - openstack servicesOverride: - tripleo-cleanup EOF When the clean-up is finished, deploy the OpenStackDataPlaneDeployment CR: USD oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack spec: nodeSets: - openstack EOF Note If you have other node sets to deploy, such as Networker nodes, you can add them in the nodeSets list in this step, or create separate OpenStackDataPlaneDeployment CRs later. You cannot add new node sets to an OpenStackDataPlaneDeployment CR after deployment. Verification Confirm that all the Ansible EE pods reach a Completed status: Wait for the data plane node set to reach the Ready status: Verify that the Networking service (neutron) agents are running: Next steps You must perform a fast-forward upgrade on your Compute services. For more information, see Performing a fast-forward upgrade on Compute services . 5.3. Performing a fast-forward upgrade on Compute services You must upgrade the Compute services from Red Hat OpenStack Platform 17.1 to Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 on the control plane and data plane by completing the following tasks: Update the cell1 Compute data plane services version. 
Remove pre-fast-forward upgrade workarounds from the Compute control plane services and Compute data plane services. Run Compute database online migrations to update live data. Procedure Wait for cell1 Compute data plane services version to update: Note The query returns an empty result when the update is completed. No downtime is expected for virtual machine workloads. Review any errors in the nova Compute agent logs on the data plane, and the nova-conductor journal records on the control plane. Patch the OpenStackControlPlane CR to remove the pre-fast-forward upgrade workarounds from the Compute control plane services: USD oc patch openstackcontrolplane openstack -n openstack --type=merge --patch ' spec: nova: template: cellTemplates: cell0: conductorServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=false cell1: metadataServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=false conductorServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=false apiServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=false metadataServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=false schedulerServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=false ' Wait until the Compute control plane services CRs are ready: Complete the steps in Adopting Compute services to the RHOSO data plane . Remove the pre-fast-forward upgrade workarounds from the Compute data plane services: USD oc apply -f - <<EOF apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 20-nova-compute-cell1-workarounds.conf: | [workarounds] disable_compute_service_check_for_ffu=false --- apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-nova-compute-ffu namespace: openstack spec: nodeSets: - openstack servicesOverride: - nova EOF Note The service included in the servicesOverride key must match the name of the service that you included in the OpenStackDataPlaneNodeSet CR. For example, if you use a custom service called nova-custom , ensure that you add it to the servicesOverride key. Wait for the Compute data plane services to be ready: Run Compute database online migrations to complete the fast-forward upgrade: Verification Discover the Compute hosts in the cell: Verify if the existing test VM instance is running: Verify if the Compute services can stop the existing test VM instance: Verify if the Compute services can start the existing test VM instance: Note After the data plane adoption, the Compute hosts continue to run Red Hat Enterprise Linux (RHEL) 9.2. To take advantage of RHEL 9.4, perform a minor update procedure after finishing the adoption procedure. 5.4. Adopting Networker services to the RHOSO data plane Adopt the Networker nodes in your existing Red Hat OpenStack Platform deployment to the Red Hat OpenStack Services on OpenShift (RHOSO) data plane. You decide which services you want to run on the Networker nodes, and create a separate OpenStackDataPlaneNodeSet custom resource (CR) for the Networker nodes. You might also decide to implement the following options if they apply to your environment: Depending on your topology, you might need to run the neutron-metadata service on the nodes, specifically when you want to serve metadata to SR-IOV ports that are hosted on Compute nodes. 
If you want to continue running OVN gateway services on Networker nodes, keep ovn service in the list to deploy. Optional: You can run the neutron-dhcp service on your Networker nodes instead of your Compute nodes. You might not need to use neutron-dhcp with OVN, unless your deployment uses DHCP relays, or advanced DHCP options that are supported by dnsmasq but not by the OVN DHCP implementation. Prerequisites Define the shell variable. The following value is an example from a single node standalone director deployment: Replace ["standalone.localdomain"]="192.168.122.100" with the name and IP address of the Networker node. Procedure Deploy the OpenStackDataPlaneNodeSet CR for your Networker nodes: Note You can reuse most of the nodeTemplate section from the OpenStackDataPlaneNodeSet CR that is designated for your Compute nodes. You can omit some of the variables because of the limited set of services that are running on Networker nodes. USD oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-networker spec: tlsEnabled: false 1 networkAttachments: - ctlplane preProvisioned: true services: - bootstrap - download-cache - configure-network - validate-network - install-os - configure-os - ssh-known-hosts - run-os - install-certs - ovn env: - name: ANSIBLE_CALLBACKS_ENABLED value: "profile_tasks" - name: ANSIBLE_FORCE_COLOR value: "True" nodes: standalone: hostName: standalone ansible: ansibleHost: USD{networkers[standalone.localdomain]} networks: - defaultRoute: true fixedIP: USD{networkers[standalone.localdomain]} name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 nodeTemplate: ansibleSSHPrivateKeySecret: dataplane-adoption-secret ansible: ansibleUser: root ansibleVarsFrom: - prefix: subscription_manager_ secretRef: name: subscription-manager - secretRef: name: redhat-registry ansibleVars: edpm_bootstrap_release_version_package: [] # edpm_network_config # Default nic config template for a EDPM node # These vars are edpm_network_config role vars edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} edpm_network_config_hide_sensitive_logs: false # # These vars are for the network config templates themselves and are # considered EDPM network defaults. 
neutron_physical_bridge_name: br-ctlplane neutron_public_interface_name: eth0 # edpm_nodes_validation edpm_nodes_validation_validate_controllers_icmp: false edpm_nodes_validation_validate_gateway_icmp: false # edpm ovn-controller configuration edpm_ovn_bridge_mappings: <bridge_mappings> 2 edpm_ovn_bridge: br-int edpm_ovn_encap_type: geneve ovn_monitor_all: true edpm_ovn_remote_probe_interval: 60000 edpm_ovn_ofctrl_wait_before_clear: 8000 # serve as an OVN gateway edpm_enable_chassis_gw: true 3 timesync_ntp_servers: - hostname: clock.redhat.com - hostname: clock2.redhat.com edpm_bootstrap_command: | subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }} subscription-manager release --set=9.2 subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms gather_facts: false enable_debug: false # edpm firewall, change the allowed CIDR if needed edpm_sshd_configure_firewall: true edpm_sshd_allowed_ranges: ['192.168.122.0/24'] # SELinux module edpm_selinux_mode: enforcing # Do not attempt OVS major upgrades here edpm_ovs_packages: - openvswitch3.1 EOF 1 If TLS Everywhere is enabled, change spec:tlsEnabled to true . 2 Set to the same values that you used in your Red Hat OpenStack Platform 17.1 deployment. 3 Set to true to run ovn-controller in gateway mode. Ensure that you use the same ovn-controller settings in the OpenStackDataPlaneNodeSet CR that you used in the Networker nodes before adoption. This configuration is stored in the external_ids column in the Open_vSwitch table in the Open vSwitch database: Replace <bridge_mappings> with the value of the bridge mappings in your configuration, for example, "datacentre:br-ctlplane" . Optional: Enable neutron-metadata in the OpenStackDataPlaneNodeSet CR: Replace <networker_CR_name> with the name of the CR that you deployed for your Networker nodes, for example, openstack-networker . Optional: Enable neutron-dhcp in the OpenStackDataPlaneNodeSet CR: Run the pre-adoption-validation service for Networker nodes: Create an OpenStackDataPlaneDeployment CR that runs only the validation: USD oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-pre-adoption-networker spec: nodeSets: - openstack-networker servicesOverride: - pre-adoption-validation EOF When the validation is finished, confirm that the status of the Ansible EE pods is Completed : Wait for the deployment to reach the Ready status: Deploy the OpenStackDataPlaneDeployment CR for Networker nodes: USD oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-networker spec: nodeSets: - openstack-networker EOF Note Alternatively, you can include the Networker node set in the nodeSets list before you deploy the main OpenStackDataPlaneDeployment CR. 
You cannot add new node sets to the OpenStackDataPlaneDeployment CR after deployment. Verification Confirm that all the Ansible EE pods reach a Completed status: Wait for the data plane node set to reach the Ready status: Replace <networker_CR_name> with the name of the CR that you deployed for your Networker nodes, for example, openstack-networker . Verify that the Networking service (neutron) agents are running. The list of agents varies depending on the services you enabled: | [
"EDPM_PRIVATEKEY_PATH=\"<path_to_SSH_key>\" declare -A computes computes=( [\"standalone.localdomain\"]=\"192.168.122.100\" # )",
"PacemakerResourcesToStop=( \"galera-bundle\" \"haproxy-bundle\" \"rabbitmq-bundle\") echo \"Stopping pacemaker services\" for i in {1..3}; do SSH_CMD=CONTROLLERUSD{i}_SSH if [ ! -z \"USD{!SSH_CMD}\" ]; then echo \"Using controller USDi to run pacemaker commands\" for resource in USD{PacemakerResourcesToStop[*]}; do if USD{!SSH_CMD} sudo pcs resource config USDresource; then USD{!SSH_CMD} sudo pcs resource disable USDresource fi done break fi done",
"oc apply -f - <<EOF apiVersion: network.openstack.org/v1beta1 kind: NetConfig metadata: name: netconfig spec: networks: - name: ctlplane dnsDomain: ctlplane.example.com subnets: - name: subnet1 allocationRanges: - end: 192.168.122.120 start: 192.168.122.100 - end: 192.168.122.200 start: 192.168.122.150 cidr: 192.168.122.0/24 gateway: 192.168.122.1 - name: internalapi dnsDomain: internalapi.example.com subnets: - name: subnet1 allocationRanges: - end: 172.17.0.250 start: 172.17.0.100 cidr: 172.17.0.0/24 vlan: 20 - name: External dnsDomain: external.example.com subnets: - name: subnet1 allocationRanges: - end: 10.0.0.250 start: 10.0.0.100 cidr: 10.0.0.0/24 gateway: 10.0.0.1 - name: storage dnsDomain: storage.example.com subnets: - name: subnet1 allocationRanges: - end: 172.18.0.250 start: 172.18.0.100 cidr: 172.18.0.0/24 vlan: 21 - name: storagemgmt dnsDomain: storagemgmt.example.com subnets: - name: subnet1 allocationRanges: - end: 172.20.0.250 start: 172.20.0.100 cidr: 172.20.0.0/24 vlan: 23 - name: tenant dnsDomain: tenant.example.com subnets: - name: subnet1 allocationRanges: - end: 172.19.0.250 start: 172.19.0.100 cidr: 172.19.0.0/24 vlan: 22 EOF",
"PODIFIED_DB_ROOT_PASSWORD=USD(oc get -o json secret/osp-secret | jq -r .data.DbRootPassword | base64 -d) CEPH_FSID=USD(oc get secret ceph-conf-files -o json | jq -r '.data.\"ceph.conf\"' | base64 -d | grep fsid | sed -e 's/fsid = //' alias openstack=\"oc exec -t openstackclient -- openstack\" declare -A computes export computes=( [\"standalone.localdomain\"]=\"192.168.122.100\" # )",
"oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: dataplane-adoption-secret namespace: openstack data: ssh-privatekey: | USD(cat <path_to_SSH_key> | base64 | sed 's/^/ /') EOF",
"cd \"USD(mktemp -d)\" ssh-keygen -f ./id -t ecdsa-sha2-nistp521 -N '' get secret nova-migration-ssh-key || oc create secret generic nova-migration-ssh-key -n openstack --from-file=ssh-privatekey=id --from-file=ssh-publickey=id.pub --type kubernetes.io/ssh-auth rm -f id* cd -",
"oc apply -f - <<EOF apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 19-nova-compute-cell1-workarounds.conf: | [workarounds] disable_compute_service_check_for_ffu=true EOF",
"tlsCerts: contents: - dnsnames - ips networks: - ctlplane issuer: osp-rootca-issuer-internal caCerts: combined-ca-bundle edpmServiceType: nova",
"oc apply -f - <<EOF apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 19-nova-compute-cell1-workarounds.conf: | [workarounds] disable_compute_service_check_for_ffu=true 03-ceph-nova.conf: | [libvirt] images_type=rbd images_rbd_pool=vms images_rbd_ceph_conf=/etc/ceph/ceph.conf images_rbd_glance_store_name=default_backend images_rbd_glance_copy_poll_interval=15 images_rbd_glance_copy_timeout=600 rbd_user=openstack rbd_secret_uuid=USDCEPH_FSID EOF",
"oc create secret generic subscription-manager --from-literal rhc_auth='{\"login\": {\"username\": \"<subscription_manager_username>\", \"password\": \"<subscription_manager_password>\"}}'",
"oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{\"registry.redhat.io\": {\"<registry_username>\": \"<registry_password>\"}}'",
"oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack spec: tlsEnabled: false 1 networkAttachments: - ctlplane preProvisioned: true services: - redhat - bootstrap - download-cache - configure-network - validate-network - install-os - configure-os - ssh-known-hosts - run-os - reboot-os - install-certs - libvirt - nova - ovn - neutron-metadata - telemetry env: - name: ANSIBLE_CALLBACKS_ENABLED value: \"profile_tasks\" - name: ANSIBLE_FORCE_COLOR value: \"True\" nodes: standalone: hostName: standalone 2 ansible: ansibleHost: USD{computes[standalone.localdomain]} networks: - defaultRoute: true fixedIP: USD{computes[standalone.localdomain]} name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 nodeTemplate: ansibleSSHPrivateKeySecret: dataplane-adoption-secret ansible: ansibleUser: root ansibleVarsFrom: - secretRef: name: subscription-manager - secretRef: name: redhat-registry ansibleVars: rhc_release: 9.2 rhc_repositories: - {name: \"*\", state: disabled} - {name: \"rhel-9-for-x86_64-baseos-eus-rpms\", state: enabled} - {name: \"rhel-9-for-x86_64-appstream-eus-rpms\", state: enabled} - {name: \"rhel-9-for-x86_64-highavailability-eus-rpms\", state: enabled} - {name: \"rhoso-18.0-for-rhel-9-x86_64-rpms\", state: enabled} - {name: \"fast-datapath-for-rhel-9-x86_64-rpms\", state: enabled} - {name: \"rhceph-7-tools-for-rhel-9-x86_64-rpms\", state: enabled} edpm_bootstrap_release_version_package: [] # edpm_network_config # Default nic config template for a EDPM node # These vars are edpm_network_config role vars edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} edpm_network_config_hide_sensitive_logs: false # # These vars are for the network config templates themselves and are # considered EDPM network defaults. 
neutron_physical_bridge_name: br-ctlplane neutron_public_interface_name: eth0 # edpm_nodes_validation edpm_nodes_validation_validate_controllers_icmp: false edpm_nodes_validation_validate_gateway_icmp: false # edpm ovn-controller configuration edpm_ovn_bridge_mappings: <bridge_mappings> 3 edpm_ovn_bridge: br-int edpm_ovn_encap_type: geneve ovn_monitor_all: true edpm_ovn_remote_probe_interval: 60000 edpm_ovn_ofctrl_wait_before_clear: 8000 timesync_ntp_servers: - hostname: clock.redhat.com - hostname: clock2.redhat.com edpm_bootstrap_command: | # FIXME: perform dnf upgrade for other packages in EDPM ansible # here we only ensuring that decontainerized libvirt can start dnf -y upgrade openstack-selinux rm -f /run/virtlogd.pid gather_facts: false # edpm firewall, change the allowed CIDR if needed edpm_sshd_configure_firewall: true edpm_sshd_allowed_ranges: ['192.168.122.0/24'] # Do not attempt OVS major upgrades here edpm_ovs_packages: - openvswitch3.3 EOF",
"ovs-vsctl list Open . external_ids : {hostname=standalone.localdomain, ovn-bridge=br-int, ovn-bridge-mappings=<bridge_mappings>, ovn-chassis-mac-mappings=\"datacentre:1e:0a:bb:e6:7c:ad\", ovn-encap-ip=\"172.19.0.100\", ovn-encap-tos=\"0\", ovn-encap-type=geneve, ovn-match-northd-version=False, ovn-monitor-all=True, ovn-ofctrl-wait-before-clear=\"8000\", ovn-openflow-probe-interval=\"60\", ovn-remote=\"tcp:ovsdbserver-sb.openstack.svc:6642\", ovn-remote-probe-interval=\"60000\", rundir=\"/var/run/openvswitch\", system-id=\"2eec68e6-aa21-4c95-a868-31aeafc11736\"}",
"oc patch osdpns/openstack --type=merge --patch \" spec: services: - redhat - bootstrap - download-cache - configure-network - validate-network - install-os - configure-os - ssh-known-hosts - run-os - reboot-os - ceph-client - install-certs - ovn - neutron-metadata - libvirt - nova - telemetry nodeTemplate: extraMounts: - extraVolType: Ceph volumes: - name: ceph secret: secretName: ceph-conf-files mounts: - name: ceph mountPath: \"/etc/ceph\" readOnly: true \"",
"oc patch openstackdataplanenodeset openstack --type='json' --patch='[ { \"op\": \"add\", \"path\": \"/spec/services/-\", \"value\": \"neutron-sriov\" }, { \"op\": \"add\", \"path\": \"/spec/nodeTemplate/ansible/ansibleVars/edpm_neutron_sriov_agent_SRIOV_NIC_physical_device_mappings\", \"value\": \"dummy_sriov_net:dummy-dev\" }, { \"op\": \"add\", \"path\": \"/spec/nodeTemplate/ansible/ansibleVars/edpm_neutron_sriov_agent_SRIOV_NIC_resource_provider_bandwidths\", \"value\": \"dummy-dev:40000000:40000000\" }, { \"op\": \"add\", \"path\": \"/spec/nodeTemplate/ansible/ansibleVars/edpm_neutron_sriov_agent_SRIOV_NIC_resource_provider_hypervisors\", \"value\": \"dummy-dev:standalone.localdomain\" } ]'",
"oc patch openstackdataplanenodeset openstack --type='json' --patch='[ { \"op\": \"add\", \"path\": \"/spec/services/-\", \"value\": \"neutron-dhcp\" }]'",
".. spec: serviceUser: neutron customServiceConfig: | [DEFAULT] dhcp_agent_notification = True [ovn] disable_ovn_dhcp_for_baremetal_ports = true",
"oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: pre-adoption-validation spec: playbook: osp.edpm.pre_adoption_validation EOF",
"oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-pre-adoption spec: nodeSets: - openstack servicesOverride: - pre-adoption-validation EOF",
"watch oc get pod -l app=openstackansibleee",
"oc logs -l app=openstackansibleee -f --max-log-requests 20",
"oc wait --for condition=Ready openstackdataplanedeployment/openstack-pre-adoption --timeout=10m",
"oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: tripleo-cleanup spec: playbook: osp.edpm.tripleo_cleanup EOF",
"oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: tripleo-cleanup spec: nodeSets: - openstack servicesOverride: - tripleo-cleanup EOF",
"oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack spec: nodeSets: - openstack EOF",
"watch oc get pod -l app=openstackansibleee",
"oc logs -l app=openstackansibleee -f --max-log-requests 20",
"oc wait --for condition=Ready osdpns/openstack --timeout=30m",
"oc exec openstackclient -- openstack network agent list +--------------------------------------+------------------------------+------------------------+-------------------+-------+-------+----------------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+------------------------------+------------------------+-------------------+-------+-------+----------------------------+ | 174fc099-5cc9-4348-b8fc-59ed44fcfb0e | DHCP agent | standalone.localdomain | nova | :-) | UP | neutron-dhcp-agent | | 10482583-2130-5b0d-958f-3430da21b929 | OVN Metadata agent | standalone.localdomain | | :-) | UP | neutron-ovn-metadata-agent | | a4f1b584-16f1-4937-b2b0-28102a3f6eaa | OVN Controller agent | standalone.localdomain | | :-) | UP | ovn-controller | +--------------------------------------+------------------------------+------------------------+-------------------+-------+-------+----------------------------+",
"oc exec openstack-cell1-galera-0 -c galera -- mysql -rs -uroot -pUSDPODIFIED_DB_ROOT_PASSWORD -e \"select a.version from nova_cell1.services a join nova_cell1.services b where a.version!=b.version and a.binary='nova-compute';\"",
"oc patch openstackcontrolplane openstack -n openstack --type=merge --patch ' spec: nova: template: cellTemplates: cell0: conductorServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=false cell1: metadataServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=false conductorServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=false apiServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=false metadataServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=false schedulerServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=false '",
"oc wait --for condition=Ready --timeout=300s Nova/nova",
"oc apply -f - <<EOF apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 20-nova-compute-cell1-workarounds.conf: | [workarounds] disable_compute_service_check_for_ffu=false --- apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-nova-compute-ffu namespace: openstack spec: nodeSets: - openstack servicesOverride: - nova EOF",
"oc wait --for condition=Ready openstackdataplanedeployment/openstack-nova-compute-ffu --timeout=5m",
"oc exec -it nova-cell0-conductor-0 -- nova-manage db online_data_migrations oc exec -it nova-cell1-conductor-0 -- nova-manage db online_data_migrations",
"oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose",
"USD{BASH_ALIASES[openstack]} server --os-compute-api-version 2.48 show --diagnostics test 2>&1 || echo FAIL",
"USD{BASH_ALIASES[openstack]} server list -c Name -c Status -f value | grep -qF \"test ACTIVE\" && USD{BASH_ALIASES[openstack]} server stop test || echo PASS USD{BASH_ALIASES[openstack]} server list -c Name -c Status -f value | grep -qF \"test SHUTOFF\" || echo FAIL USD{BASH_ALIASES[openstack]} server --os-compute-api-version 2.48 show --diagnostics test 2>&1 || echo PASS",
"USD{BASH_ALIASES[openstack]} server list -c Name -c Status -f value | grep -qF \"test SHUTOFF\" && USD{BASH_ALIASES[openstack]} server start test || echo PASS USD{BASH_ALIASES[openstack]} server list -c Name -c Status -f value | grep -qF \"test ACTIVE\" && USD{BASH_ALIASES[openstack]} server --os-compute-api-version 2.48 show --diagnostics test --fit-width -f json | jq -r '.state' | grep running || echo FAIL",
"declare -A networkers networkers=( [\"standalone.localdomain\"]=\"192.168.122.100\" # )",
"oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-networker spec: tlsEnabled: false 1 networkAttachments: - ctlplane preProvisioned: true services: - bootstrap - download-cache - configure-network - validate-network - install-os - configure-os - ssh-known-hosts - run-os - install-certs - ovn env: - name: ANSIBLE_CALLBACKS_ENABLED value: \"profile_tasks\" - name: ANSIBLE_FORCE_COLOR value: \"True\" nodes: standalone: hostName: standalone ansible: ansibleHost: USD{networkers[standalone.localdomain]} networks: - defaultRoute: true fixedIP: USD{networkers[standalone.localdomain]} name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 nodeTemplate: ansibleSSHPrivateKeySecret: dataplane-adoption-secret ansible: ansibleUser: root ansibleVarsFrom: - prefix: subscription_manager_ secretRef: name: subscription-manager - secretRef: name: redhat-registry ansibleVars: edpm_bootstrap_release_version_package: [] # edpm_network_config # Default nic config template for a EDPM node # These vars are edpm_network_config role vars edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} edpm_network_config_hide_sensitive_logs: false # # These vars are for the network config templates themselves and are # considered EDPM network defaults. 
neutron_physical_bridge_name: br-ctlplane neutron_public_interface_name: eth0 # edpm_nodes_validation edpm_nodes_validation_validate_controllers_icmp: false edpm_nodes_validation_validate_gateway_icmp: false # edpm ovn-controller configuration edpm_ovn_bridge_mappings: <bridge_mappings> 2 edpm_ovn_bridge: br-int edpm_ovn_encap_type: geneve ovn_monitor_all: true edpm_ovn_remote_probe_interval: 60000 edpm_ovn_ofctrl_wait_before_clear: 8000 # serve as a OVN gateway edpm_enable_chassis_gw: true 3 timesync_ntp_servers: - hostname: clock.redhat.com - hostname: clock2.redhat.com edpm_bootstrap_command: | subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }} subscription-manager release --set=9.2 subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms gather_facts: false enable_debug: false # edpm firewall, change the allowed CIDR if needed edpm_sshd_configure_firewall: true edpm_sshd_allowed_ranges: ['192.168.122.0/24'] # SELinux module edpm_selinux_mode: enforcing # Do not attempt OVS major upgrades here edpm_ovs_packages: - openvswitch3.1 EOF",
"ovs-vsctl list Open . external_ids : {hostname=standalone.localdomain, ovn-bridge=br-int, ovn-bridge-mappings=<bridge_mappings>, ovn-chassis-mac-mappings=\"datacentre:1e:0a:bb:e6:7c:ad\", ovn-cms-options=enable-chassis-as-gw, ovn-encap-ip=\"172.19.0.100\", ovn-encap-tos=\"0\", ovn-encap-type=geneve, ovn-match-northd-version=False, ovn-monitor-all=True, ovn-ofctrl-wait-before-clear=\"8000\", ovn-openflow-probe-interval=\"60\", ovn-remote=\"tcp:ovsdbserver-sb.openstack.svc:6642\", ovn-remote-probe-interval=\"60000\", rundir=\"/var/run/openvswitch\", system-id=\"2eec68e6-aa21-4c95-a868-31aeafc11736\"}",
"oc patch openstackdataplanenodeset <networker_CR_name> --type='json' --patch='[ { \"op\": \"add\", \"path\": \"/spec/services/-\", \"value\": \"neutron-metadata\" }]'",
"oc patch openstackdataplanenodeset <networker_CR_name> --type='json' --patch='[ { \"op\": \"add\", \"path\": \"/spec/services/-\", \"value\": \"neutron-dhcp\" }]'",
"oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-pre-adoption-networker spec: nodeSets: - openstack-networker servicesOverride: - pre-adoption-validation EOF",
"watch oc get pod -l app=openstackansibleee",
"oc logs -l app=openstackansibleee -f --max-log-requests 20",
"oc wait --for condition=Ready openstackdataplanedeployment/openstack-pre-adoption-networker --timeout=10m",
"oc apply -f - <<EOF apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-networker spec: nodeSets: - openstack-networker EOF",
"watch oc get pod -l app=openstackansibleee",
"oc logs -l app=openstackansibleee -f --max-log-requests 20",
"oc wait --for condition=Ready osdpns/<networker_CR_name> --timeout=30m",
"oc exec openstackclient -- openstack network agent list +--------------------------------------+------------------------------+------------------------+-------------------+-------+-------+----------------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+------------------------------+------------------------+-------------------+-------+-------+----------------------------+ | 174fc099-5cc9-4348-b8fc-59ed44fcfb0e | DHCP agent | standalone.localdomain | nova | :-) | UP | neutron-dhcp-agent | | 10482583-2130-5b0d-958f-3430da21b929 | OVN Metadata agent | standalone.localdomain | | :-) | UP | neutron-ovn-metadata-agent | | a4f1b584-16f1-4937-b2b0-28102a3f6eaa | OVN Controller Gateway agent | standalone.localdomain | | :-) | UP | ovn-controller | +--------------------------------------+------------------------------+------------------------+-------------------+-------+-------+----------------------------+"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/adopting_a_red_hat_openstack_platform_17.1_deployment/adopting-data-plane_adopt-control-plane |
Chapter 14. Creating exports using NFS This section describes how to create exports using NFS that can then be accessed externally from the OpenShift cluster. Follow the instructions below to create exports and access them externally from the OpenShift cluster: Section 14.1, "Enabling the NFS feature" Section 14.2, "Creating NFS exports" Section 14.3, "Consuming NFS exports in-cluster" Section 14.4, "Consuming NFS exports externally from the OpenShift cluster" 14.1. Enabling the NFS feature To use the NFS feature, you need to enable it in the storage cluster using the command-line interface (CLI) after the cluster is created. You can also enable the NFS feature while creating the storage cluster using the user interface. Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. The OpenShift Data Foundation installation includes a CephFilesystem. Procedure Run the following command to enable the NFS feature from the CLI: Verification steps NFS installation and configuration is complete when the following conditions are met: The CephNFS resource named ocs-storagecluster-cephnfs has a status of Ready . Check if all the csi-nfsplugin-* pods are running: Output has multiple pods. For example: 14.2. Creating NFS exports NFS exports are created by creating a Persistent Volume Claim (PVC) against the ocs-storagecluster-ceph-nfs StorageClass. You can create NFS PVCs in two ways: Create an NFS PVC using a YAML. The following is an example PVC. Note volumeMode: Block will not work for NFS volumes. <desired_name> Specify a name for the PVC, for example, my-nfs-export . The export is created once the PVC reaches the Bound state. Create NFS PVCs from the OpenShift Container Platform web console. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and the NFS feature is enabled for the storage cluster. Procedure In the OpenShift Web Console, click Storage Persistent Volume Claims Set the Project to openshift-storage . Click Create PersistentVolumeClaim . Specify Storage Class , ocs-storagecluster-ceph-nfs . Specify the PVC Name , for example, my-nfs-export . Select the required Access Mode . Specify a Size as per application requirement. Select Volume mode as Filesystem . Note: Block mode is not supported for NFS PVCs. Click Create and wait until the PVC is in Bound status. 14.3. Consuming NFS exports in-cluster Kubernetes application pods can consume NFS exports by mounting a previously created PVC. You can mount the PVC in one of two ways: Using a YAML: Below is an example pod that uses the example PVC created in Section 14.2, "Creating NFS exports" : <pvc_name> Specify the PVC you have previously created, for example, my-nfs-export . Using the OpenShift Container Platform web console. Procedure On the OpenShift Container Platform web console, navigate to Workloads Pods . Click Create Pod to create a new application pod. Under the metadata section add a name. For example, nfs-export-example , with namespace as openshift-storage . Under the spec: section, add containers: section with image and volumeMounts sections: For example: Under the spec: section, add volumes: section to add the NFS PVC as a volume for the application pod: For example: 14.4. Consuming NFS exports externally from the OpenShift cluster NFS clients outside of the OpenShift cluster can mount NFS exports created from a previously created PVC. Procedure After the nfs flag is enabled, single-server CephNFS is deployed by Rook. 
You need to fetch the value of the ceph_nfs field for the nfs-ganesha server to use in the next step: For example: Expose the NFS server outside of the OpenShift cluster by creating a Kubernetes LoadBalancer Service. The example below creates a LoadBalancer Service and references the NFS server created by OpenShift Data Foundation. Replace <my-nfs> with the value you got in step 1. Collect connection information. The information external clients need to connect to an export comes from the Persistent Volume (PV) created for the PVC, and the status of the LoadBalancer Service created in the previous step. Get the share path from the PV. Get the name of the PV associated with the NFS export's PVC: Replace <pvc_name> with your own PVC name. For example: Use the PV name obtained previously to get the NFS export's share path: Get an ingress address for the NFS server. A service's ingress status may have multiple addresses. Choose the one you want to use for external clients. In the example below, there is only a single address: the host name ingress-id.somedomain.com . Connect the external client using the share path and ingress address from the previous steps. The following example mounts the export to the client's directory path /export/mount/path : If this does not work immediately, it could be that the Kubernetes environment is still taking time to configure the network resources to allow ingress to the NFS server. | [
"oc --namespace openshift-storage patch storageclusters.ocs.openshift.io ocs-storagecluster --type merge --patch '{\"spec\": {\"nfs\":{\"enable\": true}}}'",
"-n openshift-storage describe cephnfs ocs-storagecluster-cephnfs",
"-n openshift-storage get pod | grep csi-nfsplugin",
"csi-nfsplugin-47qwq 2/2 Running 0 10s csi-nfsplugin-77947 2/2 Running 0 10s csi-nfsplugin-ct2pm 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-2rm2w 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-8nj5h 2/2 Running 0 10s",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <desired_name> spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-ceph-nfs",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: <pvc_name> readOnly: false",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: <volume_name> mountPath: /var/lib/www/html",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: my-nfs-export",
"oc get pods -n openshift-storage | grep rook-ceph-nfs",
"oc describe pod <name of the rook-ceph-nfs pod> | grep ceph_nfs",
"oc describe pod rook-ceph-nfs-ocs-storagecluster-cephnfs-a-7bb484b4bf-bbdhs | grep ceph_nfs ceph_nfs=my-nfs",
"apiVersion: v1 kind: Service metadata: name: rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer namespace: openshift-storage spec: ports: - name: nfs port: 2049 type: LoadBalancer externalTrafficPolicy: Local selector: app: rook-ceph-nfs ceph_nfs: <my-nfs> instance: a",
"oc get pvc <pvc_name> --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d",
"get pvc pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d",
"oc get pv pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.csi.volumeAttributes.share}' /0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215",
"oc -n openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress}' [{\"hostname\":\"ingress-id.somedomain.com\"}]",
"mount -t nfs4 -o proto=tcp ingress-id.somedomain.com:/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215 /export/mount/path"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_and_allocating_storage_resources/creating-exports-using-nfs_rhodf |
2.4. Adding a Modeshape User | 2.4. Adding a Modeshape User 2.4.1. Create a Modeshape Publishing User To use Modeshape publishing, you need to ensure that at least one user has the connect and readwrite roles or the connect and admin roles. By default, all the roles are assigned to all anonymous-roles users. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/sect-adding_a_modeshape_user |
3.10. Migrating from ext4 to XFS Starting with Red Hat Enterprise Linux 7.0, XFS is the default file system instead of ext4. This section highlights the differences when using or administering an XFS file system. The ext4 file system is still fully supported in Red Hat Enterprise Linux 7 and can be selected at installation. While it is possible to migrate from ext4 to XFS, it is not required. 3.10.1. Differences Between Ext3/4 and XFS File system repair Ext3/4 runs e2fsck in userspace at boot time to recover the journal as needed. XFS, by comparison, performs journal recovery in kernelspace at mount time. An fsck.xfs shell script is provided but does not perform any useful action as it is only there to satisfy initscript requirements. When an XFS file system repair or check is requested, use the xfs_repair command. Use the -n option for a read-only check. The xfs_repair command will not operate on a file system with a dirty log. To repair such a file system, a mount and unmount must first be performed to replay the log. If the log is corrupt and cannot be replayed, the -L option can be used to zero out the log. For more information on file system repair of XFS file systems, see Section 12.2.2, "XFS" Metadata error behavior The ext3/4 file system has configurable behavior when metadata errors are encountered, with the default being to simply continue. When XFS encounters a metadata error that is not recoverable, it will shut down the file system and return an EFSCORRUPTED error. The system logs will contain details of the error encountered and will recommend running xfs_repair if necessary. Quotas XFS quotas are not a remountable option. The -o quota option must be specified on the initial mount for quotas to be in effect. While the standard tools in the quota package can perform basic quota administrative tasks (tools such as setquota and repquota ), the xfs_quota tool can be used for XFS-specific features, such as Project Quota administration. The quotacheck command has no effect on an XFS file system. The first time quota accounting is turned on, XFS does an automatic quotacheck internally. Because XFS quota metadata is a first-class, journaled metadata object, the quota system will always be consistent until quotas are manually turned off. File system resize The XFS file system has no utility to shrink a file system. XFS file systems can be grown online via the xfs_growfs command. Inode numbers For file systems larger than 1 TB with 256-byte inodes, or larger than 2 TB with 512-byte inodes, XFS inode numbers might exceed 2^32. Such large inode numbers cause 32-bit stat calls to fail with the EOVERFLOW return value. The described problem might occur when using the default Red Hat Enterprise Linux 7 configuration: non-striped with four allocation groups. A custom configuration, for example file system extension or changing XFS file system parameters, might lead to a different behavior. Applications usually handle such larger inode numbers correctly. If needed, mount the XFS file system with the -o inode32 parameter to enforce inode numbers below 2^32. Note that using inode32 does not affect inodes that are already allocated with 64-bit numbers. Important Do not use the inode32 option unless it is required by a specific environment. The inode32 option changes allocation behavior. As a consequence, the ENOSPC error might occur if no space is available to allocate inodes in the lower disk blocks. 
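Where the inode32 behavior is genuinely required, it is enabled at mount time. A minimal sketch, with /dev/sdb1 and /mnt/data purely as placeholder device and mount point rather than values from this guide: mount -t xfs -o inode32 /dev/sdb1 /mnt/data 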
Speculative preallocation XFS uses speculative preallocation to allocate blocks past EOF as files are written. This avoids file fragmentation due to concurrent streaming write workloads on NFS servers. By default, this preallocation increases with the size of the file and will be apparent in "du" output. If a file with speculative preallocation is not dirtied for five minutes, the preallocation will be discarded. If the inode is cycled out of cache before that time, then the preallocation will be discarded when the inode is reclaimed. If premature ENOSPC problems are seen due to speculative preallocation, a fixed preallocation amount may be specified with the -o allocsize= amount mount option. Fragmentation-related tools Fragmentation is rarely a significant issue on XFS file systems due to heuristics and behaviors, such as delayed allocation and speculative preallocation. However, tools exist for measuring file system fragmentation as well as defragmenting file systems. Their use is not encouraged. The xfs_db frag command attempts to distill all file system allocations into a single fragmentation number, expressed as a percentage. The output of the command requires significant expertise to understand its meaning. For example, a fragmentation factor of 75% means only an average of 4 extents per file. For this reason, the output of xfs_db's frag is not considered useful, and more careful analysis of any fragmentation problems is recommended. Warning The xfs_fsr command may be used to defragment individual files, or all files on a file system. The latter is especially not recommended, as it may destroy locality of files and may fragment free space. Commands Used with ext3 and ext4 Compared to XFS The following table compares common commands used with ext3 and ext4 to their XFS-specific counterparts. Table 3.1. Common Commands for ext3 and ext4 Compared to XFS Task ext3/4 XFS Create a file system mkfs.ext4 or mkfs.ext3 mkfs.xfs File system check e2fsck xfs_repair Resizing a file system resize2fs xfs_growfs Save an image of a file system e2image xfs_metadump and xfs_mdrestore Label or tune a file system tune2fs xfs_admin Backup a file system dump and restore xfsdump and xfsrestore The following table lists generic tools that function on XFS file systems as well, but the XFS versions have more specific functionality and as such are recommended. Table 3.2. Generic Tools for ext4 and XFS Task ext4 XFS Quota quota xfs_quota File mapping filefrag xfs_bmap More information on many of the listed XFS commands is included in Chapter 3, The XFS File System . You can also consult the manual pages of the listed XFS administration tools for more information. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/migrating-ext4-xfs 
Chapter 15. Determining the automation hub Route | Chapter 15. Determining the automation hub Route Use the following procedure to determine the hub route. Procedure Navigate to Networking Routes . Select the project you used for the install. Copy the location of the private-ah-web-svc service. The name of the service is different if you used a different name when creating the automation hub instance. This is used later to update the Red Hat Single Sign-On client. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/proc-determine-hub-route_using-a-rhsso-operator |
5.4. scl_source: command not found | 5.4. scl_source: command not found This error message is usually caused by having an old version of the scl-utils package installed. To update the scl-utils package, run the following command: | [
"yum update scl-utils"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-scl_source_command_not_found |
Chapter 4. Specifics of Individual Software Collections | Chapter 4. Specifics of Individual Software Collections This chapter is focused on the specifics of certain Software Collections and provides additional details concerning these components. 4.1. Red Hat Developer Toolset Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. Red Hat Developer Toolset provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. Similarly to other Software Collections, an additional set of tools is installed into the /opt/ directory. These tools are enabled by the user on demand using the supplied scl utility. Similarly to other Software Collections, these do not replace the Red Hat Enterprise Linux system versions of these tools, nor will they be used in preference to those system versions unless explicitly invoked using the scl utility. For an overview of features, refer to the Main Features section of the Red Hat Developer Toolset Release Notes . For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . 4.2. Ruby on Rails 5.0 Red Hat Software Collections 3.3 provides the rh-ruby24 Software Collection together with the rh-ror50 Collection. To install Ruby on Rails 5.0 , type the following command as root : yum install rh-ror50 Installing any package from the rh-ror50 Software Collection automatically pulls in rh-ruby24 and rh-nodejs6 as dependencies. The rh-nodejs6 Collection is used by certain gems in an asset pipeline to post-process web resources, for example, sass or coffee-script source files. Additionally, the Action Cable framework uses rh-nodejs6 for handling WebSockets in Rails. To run the rails s command without requiring rh-nodejs6 , disable the coffee-rails and uglifier gems in the Gemfile . To run Ruby on Rails without Node.js , run the following command, which will automatically enable rh-ruby24 : scl enable rh-ror50 bash To run Ruby on Rails with all features, enable also the rh-nodejs6 Software Collection: scl enable rh-ror50 rh-nodejs6 bash The rh-ror50 Software Collection is supported together with the rh-ruby24 and rh-nodejs6 components. 4.3. MongoDB 3.6 The rh-mongodb36 Software Collection is available only for Red Hat Enterprise Linux 7. See Section 4.4, "MongoDB 3.4" for instructions on how to use MongoDB 3.4 on Red Hat Enterprise Linux 6. To install the rh-mongodb36 collection, type the following command as root : yum install rh-mongodb36 To run the MongoDB shell utility, type the following command: scl enable rh-mongodb36 'mongo' Note The rh-mongodb36-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6 or later. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . 
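As a quick smoke test, a command can be passed straight to the shell in the same way; a minimal sketch, using the ping command as an arbitrary example rather than a step from this guide: scl enable rh-mongodb36 'mongo --eval "db.runCommand({ping: 1})"' 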
To start the MongoDB daemon, type the following command as root : systemctl start rh-mongodb36-mongod.service To start the MongoDB daemon on boot, type this command as root : systemctl enable rh-mongodb36-mongod.service To start the MongoDB sharding server, type the following command as root : systemctl start rh-mongodb36-mongos.service To start the MongoDB sharding server on boot, type this command as root : systemctl enable rh-mongodb36-mongos.service Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. 4.4. MongoDB 3.4 To install the rh-mongodb34 collection, type the following command as root : yum install rh-mongodb34 To run the MongoDB shell utility, type the following command: scl enable rh-mongodb34 'mongo' Note The rh-mongodb34-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . MongoDB 3.4 on Red Hat Enterprise Linux 6 If you are using Red Hat Enterprise Linux 6, the following instructions apply to your system. To start the MongoDB daemon, type the following command as root : service rh-mongodb34-mongod start To start the MongoDB daemon on boot, type this command as root : chkconfig rh-mongodb34-mongod on To start the MongoDB sharding server, type this command as root : service rh-mongodb34-mongos start To start the MongoDB sharding server on boot, type the following command as root : chkconfig rh-mongodb34-mongos on Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. MongoDB 3.4 on Red Hat Enterprise Linux 7 When using Red Hat Enterprise Linux 7, the following commands are applicable. To start the MongoDB daemon, type the following command as root : systemctl start rh-mongodb34-mongod.service To start the MongoDB daemon on boot, type this command as root : systemctl enable rh-mongodb34-mongod.service To start the MongoDB sharding server, type the following command as root : systemctl start rh-mongodb34-mongos.service To start the MongoDB sharding server on boot, type this command as root : systemctl enable rh-mongodb34-mongos.service Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. 4.5. Maven The rh-maven35 Software Collection, available only for Red Hat Enterprise Linux 7, provides a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting, and documentation from a central piece of information. To install the rh-maven35 Collection, type the following command as root : yum install rh-maven35 To enable this collection, type the following command at a shell prompt: scl enable rh-maven35 bash Global Maven settings, such as remote repositories or mirrors, can be customized by editing the /opt/rh/rh-maven35/root/etc/maven/settings.xml file. For more information about using Maven, refer to the Maven documentation . Usage of plug-ins is described in this section ; to find documentation regarding individual plug-ins, see the index of plug-ins . 4.6. 
Passenger The rh-passenger40 Software Collection provides Phusion Passenger , a web and application server designed to be fast, robust and lightweight. The rh-passenger40 Collection supports multiple versions of Ruby , particularly the ruby193 , ruby200 , and rh-ruby22 Software Collections together with Ruby on Rails using the ror40 or rh-ror41 Collections. Prior to using Passenger with any of the Ruby Software Collections, install the corresponding package from the rh-passenger40 Collection: the rh-passenger-ruby193 , rh-passenger-ruby200 , or rh-passenger-ruby22 package. The rh-passenger40 Software Collection can also be used with Apache httpd from the httpd24 Software Collection. To do so, install the rh-passenger40-mod_passenger package. Refer to the default configuration file /opt/rh/httpd24/root/etc/httpd/conf.d/passenger.conf for an example of Apache httpd configuration, which shows how to use multiple Ruby versions in a single Apache httpd instance. Additionally, the rh-passenger40 Software Collection can be used with the nginx 1.6 web server from the nginx16 Software Collection. To use nginx 1.6 with rh-passenger40 , you can run Passenger in Standalone mode using the following command in the web application's directory: scl enable nginx16 rh-passenger40 'passenger start' Alternatively, edit the nginx16 configuration files as described in the upstream Passenger documentation . 4.7. Database Connectors Database connector packages provide the database client functionality, which is necessary for local or remote connection to a database server. Table 4.1, "Interoperability Between Languages and Databases" lists Software Collections with language runtimes that include connectors for certain database servers: yes - the combination is supported no - the combination is not supported Table 4.1. Interoperability Between Languages and Databases Database Language (Software Collection) MariaDB MongoDB MySQL PostgreSQL Redis rh-nodejs4 no no no no no rh-nodejs6 no no no no no rh-nodejs8 no no no no no rh-nodejs10 no no no no no rh-perl520 yes no yes yes no rh-perl524 yes no yes yes no rh-perl526 yes no yes yes no rh-php56 yes yes yes yes no rh-php70 yes no yes yes no rh-php71 yes no yes yes no rh-php72 yes no yes yes no python27 yes yes yes yes no rh-python34 no yes no yes no rh-python35 yes yes yes yes no rh-python36 yes yes yes yes no rh-ror41 yes yes yes yes no rh-ror42 yes yes yes yes no rh-ror50 yes yes yes yes no rh-ruby25 yes yes yes yes no rh-ruby26 yes yes yes yes no | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.3_release_notes/chap-Individual_Collections 
Chapter 2. Enabling and using the Red Hat Quay API | Chapter 2. Enabling and using the Red Hat Quay API By leveraging the Red Hat Quay API, you can streamline container registry management, automate tasks, and integrate Red Hat Quay's functionalities into your existing workflow. This can improve efficiency, offer enhanced flexibility (by way of repository management, user management, user permissions, image management, and so on), increase the stability of your organization, repository, or overall deployment, and more. Detailed instructions for how to use the Red Hat Quay API can be found in the Red Hat Quay API guide . In that guide, the following topics are covered: Red Hat Quay token types, including OAuth 2 access tokens, robot account tokens, and OCI referrers tokens, and how to generate these tokens. Enabling the Red Hat Quay API by configuring your config.yaml file. How to use the Red Hat Quay API by passing in your OAuth 2 account token into the desired endpoint. API examples, including one generic example of how an administrator might automate certain tasks. See the Red Hat Quay API guide before attempting to use the API endpoints offered in this chapter. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/use_red_hat_quay/enabling-using-the-api |
Chapter 13. Storage | Chapter 13. Storage /proc/diskstats no longer becomes corrupted Partitions are protected by read-copy-update (RCU) for performance reasons and are not properly protected against race conditions in two circumstances: When partitions are modified while there are in-flight requests. When partitions overlap, which is possible for DOS extended and logical partitions. As a consequence, some fields of the /proc/diskstats file could become corrupted. This update fixes the problem by caching the partition lookup in the request structure. As a result, /proc/diskstats no longer becomes corrupted in the described situations. (BZ#1273339) multipathd no longer reports success after a failed device resizing If the multipathd service failed to resize a device, multipathd did not reset the size back to the original value internally. As a consequence, on future attempts to resize a device, multipathd reported a success even when multipathd did not resize the device. If resizing fails, multipathd now reverts the size of the device back to the original value internally. As a result, multipathd now only reports success if a device is resized successfully. (BZ# 1328077 ) multipath no longer crashes due to libdevmapper version mismatches Previously, the multipath code was not linking to the correct libraries during a part of compilation. As a consequence, if device-mapper-multipath was used with a newer version of the libdevmapper library than it was compiled with, multipath sometimes terminated unexpectedly. Now, multipath correctly links to the necessary libraries during compilation. As a result, multipath no longer crashes due to the library version mismatch. (BZ# 1349376 ) Failures on some devices no longer keep multipath from creating other devices Previously, the multipath command sometimes failed to set up working devices because of failures on unrelated devices, as multipath quit early if it failed to get the information on any of the devices that multipath was trying to create. With this fix, multipath no longer quits early if it fails to get information on some of the devices and failures on some devices no longer keep multipath from creating others. (BZ# 1343747 ) multipath no longer modifies devices with a DM table type of multipath that were created by other programs Previously, the multipath tools assumed that they were in charge of managing all Device Mapper (DM) devices with a multipath table. As a consequence, the multipathd service modified the tables of devices that were not created by the multipath tools. Now, multipath tools now only operate on devices with DM Universally Unique Identifiers (UUIDs) starting with mpath- , which is the UUID prefix that multipath uses on all the devices it creates. As a result, multipath no longer modifies devices with a DM table type of multipath that were created by other programs. (BZ#1364879) Change now takes effect immediately after using lvchange --zero n against an active thin pool Previously, when the lvchange --zero n command was used against an active thin pool, the change did not take effect until the time the pool was deactivated. With this update, the change takes effect immediately. (BZ# 1328245 ) An incorrect exit status of mdadm -IRs no longer causes error messages at boot time Previously, the load_container() function was incorrectly trying to load a container from the member array. As a consequence, the mdadm -IRs command incorrectly returned the 1 exit status, which led to error messages at boot time. 
The load_container() function has been modified to prevent loading a container from a member array. As a result, error messages at boot time no longer occur. (BZ# 1348925 ) With IMSM, migrating two RAIDs in a container no longer causes both arrays to become degraded The Intel Matrix Storage Manager (IMSM) does not allow a change to the RAID level of arrays in a container with two arrays. Previously, IMSM performed an array count check after disks were removed. As a consequence, changing, for example, RAID 1 to RAID 0 in a container with two RAIDs returned an error message, but a degraded RAID 1 was left. With this update, the array count check happens before the disk removal, and the described problem no longer occurs. (BZ#1413615) The IMSM array is now correctly assembled and successfully started Previously, the Intel Matrix Storage Manager (IMSM) events field was not set with a generation number. As a consequence, the mdadm utility sometimes re-assembled a container with outdated metadata, and a failure occurred. With this update, the IMSM events field is correctly set with the generation number. As a result, the IMSM array is correctly assembled and successfully started. (BZ#1413937) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_technical_notes/bug_fixes_storage 
Chapter 23. Upgrading AMQ Streams Upgrade your AMQ Streams installation to version 2.5 and benefit from new features, performance improvements, and enhanced security options. During the upgrade, Kafka is also updated to the latest supported version, introducing additional features and bug fixes to your AMQ Streams deployment. If you encounter any issues with the new version, AMQ Streams can be downgraded to the previous version. Released AMQ Streams versions can be found at the AMQ Streams software downloads page . Upgrade without downtime For topics configured with high availability (replication factor of at least 3 and evenly distributed partitions), the upgrade process should not cause any downtime for consumers and producers. The upgrade triggers rolling updates, where brokers are restarted one by one at different stages of the process. During this time, overall cluster availability is temporarily reduced, which may increase the risk of message loss in the event of a broker failure. 23.1. AMQ Streams upgrade paths Two upgrade paths are available for AMQ Streams. Incremental upgrade An incremental upgrade involves upgrading AMQ Streams from the previous minor version to version 2.5. Multi-version upgrade A multi-version upgrade involves upgrading an older version of AMQ Streams to version 2.5 within a single upgrade, skipping one or more intermediate versions. For example, upgrading directly from AMQ Streams 2.3 to AMQ Streams 2.5 is possible. 23.1.1. Support for Kafka versions when upgrading When upgrading AMQ Streams, it is important to ensure compatibility with the Kafka version being used. Multi-version upgrades are possible even if the supported Kafka versions differ between the old and new versions. However, if you attempt to upgrade to a new AMQ Streams version that does not support the current Kafka version, an error indicating that the Kafka version is not supported is generated. In this case, you must upgrade the Kafka version as part of the AMQ Streams upgrade by changing the spec.kafka.version in the Kafka custom resource to the supported version for the new AMQ Streams version. Kafka 3.5.0 is supported for production use. Kafka 3.4.0 is supported only for the purpose of upgrading to AMQ Streams 2.5. 23.1.2. Upgrading from an AMQ Streams version earlier than 1.7 If you are upgrading to the latest version of AMQ Streams from a version prior to version 1.7, do the following: Upgrade AMQ Streams to version 1.7 following the standard sequence . Convert AMQ Streams custom resources to v1beta2 using the API conversion tool provided with AMQ Streams. Do one of the following: Upgrade to AMQ Streams 1.8 (where the ControlPlaneListener feature gate is disabled by default). Upgrade to AMQ Streams 2.0 or 2.2 (where the ControlPlaneListener feature gate is enabled by default) with the ControlPlaneListener feature gate disabled. Enable the ControlPlaneListener feature gate. Upgrade to AMQ Streams 2.5 following the standard sequence . AMQ Streams custom resources started using the v1beta2 API version in release 1.7. CRDs and custom resources must be converted before upgrading to AMQ Streams 1.8 or newer. For information on using the API conversion tool, see the AMQ Streams 1.7 upgrade documentation . Note As an alternative to first upgrading to version 1.7, you can install the custom resources from version 1.7 and then convert the resources. The ControlPlaneListener feature is now permanently enabled in AMQ Streams. 
You must upgrade to a version of AMQ Streams where it is disabled, then enable it using the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration. Disabling the ControlPlaneListener feature gate env: - name: STRIMZI_FEATURE_GATES value: -ControlPlaneListener Enabling the ControlPlaneListener feature gate env: - name: STRIMZI_FEATURE_GATES value: +ControlPlaneListener 23.2. Required upgrade sequence To upgrade brokers and clients without downtime, you must complete the AMQ Streams upgrade procedures in the following order: Make sure your OpenShift cluster version is supported. AMQ Streams 2.5 is supported by OpenShift 4.12 and later. You can upgrade OpenShift with minimal downtime . Upgrade the Cluster Operator . Upgrade all Kafka brokers and client applications to the latest supported Kafka version. 23.3. Upgrading OpenShift with minimal downtime If you are upgrading OpenShift, refer to the OpenShift upgrade documentation to check the upgrade path and the steps to upgrade your nodes correctly. Before upgrading OpenShift, check the supported versions for your version of AMQ Streams . When performing your upgrade, you'll want to keep your Kafka clusters available. You can employ one of the following strategies: Configuring pod disruption budgets Rolling pods by one of these methods: Using the AMQ Streams Drain Cleaner Manually by applying an annotation to your pod When using either of the methods to roll the pods, you must set a pod disruption budget of zero using the maxUnavailable property. Note StrimziPodSet custom resources manage Kafka and ZooKeeper pods using a custom controller that cannot use the maxUnavailable value directly. Instead, the maxUnavailable value is converted to a minAvailable value. If there are three broker pods and the maxUnavailable property is set to 0 (zero), the minAvailable setting is 3 , requiring all three broker pods to be available and allowing zero pods to be unavailable. For Kafka to stay operational, topics must also be replicated for high availability. This requires topic configuration that specifies a replication factor of at least 3 and a minimum number of in-sync replicas to 1 less than the replication factor. Kafka topic replicated for high availability apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # ... min.insync.replicas: 2 # ... In a highly available environment, the Cluster Operator maintains a minimum number of in-sync replicas for topics during the upgrade process so that there is no downtime. 23.3.1. Rolling pods using the AMQ Streams Drain Cleaner You can use the AMQ Streams Drain Cleaner to evict nodes during an upgrade. The AMQ Streams Drain Cleaner annotates pods with a rolling update pod annotation. This informs the Cluster Operator to perform a rolling update of an evicted pod. A pod disruption budget allows only a specified number of pods to be unavailable at a given time. During planned maintenance of Kafka broker pods, a pod disruption budget ensures Kafka continues to run in a highly available environment. You specify a pod disruption budget using a template customization for a Kafka component. By default, pod disruption budgets allow only a single pod to be unavailable at a given time. In order to use the Drain Cleaner to roll pods, you set maxUnavailable to 0 (zero). Reducing the pod disruption budget to zero prevents voluntary disruptions, so pods must be evicted manually. 
Specifying a pod disruption budget apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... template: podDisruptionBudget: maxUnavailable: 0 # ... 23.3.2. Rolling pods manually while keeping topics available During an upgrade, you can trigger a manual rolling update of pods through the Cluster Operator. A rolling update replaces the pods of a resource with new pods. As with using the AMQ Streams Drain Cleaner, you'll need to set the maxUnavailable value to zero for the pod disruption budget. Watch the pods that need to be drained, and then add a pod annotation to trigger the update. Here, the annotation updates a Kafka broker. Performing a manual rolling update on a Kafka broker pod oc annotate pod <cluster_name>-kafka-<index> strimzi.io/manual-rolling-update=true You replace <cluster_name> with the name of the cluster. Kafka broker pods are named <cluster_name>-kafka-<index>, where <index> starts at zero and ends at the total number of replicas minus one. For example, my-cluster-kafka-0 . Additional resources Draining pods using the AMQ Streams Drain Cleaner Performing a rolling update using a pod annotation PodDisruptionBudgetTemplate schema reference OpenShift documentation 23.4. Upgrading the Cluster Operator Use the same method to upgrade the Cluster Operator as the initial method of deployment. Using installation files If you deployed the Cluster Operator using the installation YAML files, perform your upgrade by modifying the Operator installation files, as described in Upgrading the Cluster Operator using installation files . Using the OperatorHub If you deployed AMQ Streams from the OperatorHub, use the Operator Lifecycle Manager (OLM) to change the update channel for the AMQ Streams operators to a new AMQ Streams version. Updating the channel starts one of the following types of upgrade, depending on your chosen upgrade strategy: An automatic upgrade A manual upgrade that requires approval before installation begins Note If you subscribe to the stable channel, you can get automatic updates without changing channels. However, enabling automatic updates is not recommended because of the potential for missing any pre-installation upgrade steps. Use automatic upgrades only on version-specific channels. For more information on using the OperatorHub to upgrade Operators, see Upgrading installed Operators (OpenShift documentation) . 23.4.1. Upgrading the Cluster Operator returns Kafka version error If you upgrade the Cluster Operator to a version that does not support the current version of Kafka you are using, you get an unsupported Kafka version error. This error applies to all installation methods and means that you must upgrade Kafka to a supported Kafka version. Change the spec.kafka.version in the Kafka resource to the supported version. You can use oc to check for error messages like this in the status of the Kafka resource. Checking the Kafka status for errors oc get kafka <kafka_cluster_name> -n <namespace> -o jsonpath='{.status.conditions}' Replace <kafka_cluster_name> with the name of your Kafka cluster and <namespace> with the OpenShift namespace where the pod is running. 23.4.2. 
Upgrading from AMQ Streams 1.7 or earlier using the OperatorHub Action required if upgrading from AMQ Streams 1.7 or earlier using the OperatorHub Before you upgrade the AMQ Streams Operator to version 2.5, you need to make the following changes: Convert custom resources and CRDs to v1beta2 Upgrade to a version of AMQ Streams where the ControlPlaneListener feature gate is disabled These requirements are described in Section 23.1.2, "Upgrading from an AMQ Streams version earlier than 1.7" . If you are upgrading from AMQ Streams 1.7 or earlier, do the following: Upgrade to AMQ Streams 1.7. Download the Red Hat AMQ Streams API Conversion Tool provided with AMQ Streams 1.8 from the AMQ Streams software downloads page . Convert custom resources and CRDs to v1beta2 . For more information, see the AMQ Streams 1.7 upgrade documentation . In the OperatorHub, delete version 1.7 of the AMQ Streams Operator . If it also exists, delete version 2.5 of the AMQ Streams Operator . If it does not exist, go to the next step. If the Approval Strategy for the AMQ Streams Operator was set to Automatic , version 2.5 of the operator might already exist in your cluster. If you did not convert custom resources and CRDs to the v1beta2 API version before upgrading, the operator-managed custom resources and CRDs will be using the old API version. As a result, the 2.5 Operator is stuck in Pending status. In this situation, you need to delete version 2.5 of the AMQ Streams Operator as well as version 1.7. If you delete both operators, reconciliations are paused until the new operator version is installed. Follow the next steps immediately so that any changes to custom resources are not delayed. In the OperatorHub, do one of the following: Upgrade to version 1.8 of the AMQ Streams Operator (where the ControlPlaneListener feature gate is disabled by default). Upgrade to version 2.0 or 2.2 of the AMQ Streams Operator (where the ControlPlaneListener feature gate is enabled by default) with the ControlPlaneListener feature gate disabled. Upgrade to version 2.5 of the AMQ Streams Operator immediately. The installed 2.5 operator begins to watch the cluster and performs rolling updates. You might notice a temporary decrease in cluster performance during this process. 23.4.3. Upgrading the Cluster Operator using installation files This procedure describes how to upgrade a Cluster Operator deployment to use AMQ Streams 2.5. Follow this procedure if you deployed the Cluster Operator using the installation YAML files. The availability of Kafka clusters managed by the Cluster Operator is not affected by the upgrade operation. Note Refer to the documentation supporting a specific version of AMQ Streams for information on how to upgrade to that version. Prerequisites An existing Cluster Operator deployment is available. You have downloaded the release artifacts for AMQ Streams 2.5 . Procedure Take note of any configuration changes made to the existing Cluster Operator resources (in the /install/cluster-operator directory). Any changes will be overwritten by the new version of the Cluster Operator. Update your custom resources to reflect the supported configuration options available for AMQ Streams version 2.5. Update the Cluster Operator. Modify the installation files for the new Cluster Operator version according to the namespace the Cluster Operator is running in. 
On Linux, use: On MacOS, use: If you modified one or more environment variables in your existing Cluster Operator Deployment , edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to use those environment variables. When you have an updated configuration, deploy it along with the rest of the installation resources: oc replace -f install/cluster-operator Wait for the rolling updates to complete. If the new Operator version no longer supports the Kafka version you are upgrading from, the Cluster Operator returns an error message to say the version is not supported. Otherwise, no error message is returned. If the error message is returned, upgrade to a Kafka version that is supported by the new Cluster Operator version: Edit the Kafka custom resource. Change the spec.kafka.version property to a supported Kafka version. If the error message is not returned, go to the next step. You will upgrade the Kafka version later. Get the image for the Kafka pod to ensure the upgrade was successful: oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}' The image tag shows the new Operator version. For example: registry.redhat.io/amq-streams/strimzi-kafka-35-rhel8:2.5.2 Your Cluster Operator was upgraded to version 2.5 but the version of Kafka running in the cluster it manages is unchanged. Following the Cluster Operator upgrade, you must perform a Kafka upgrade . 23.5. Upgrading Kafka After you have upgraded your Cluster Operator to 2.5, the next step is to upgrade all Kafka brokers to the latest supported version of Kafka. Kafka upgrades are performed by the Cluster Operator through rolling updates of the Kafka brokers. The Cluster Operator initiates rolling updates based on the Kafka cluster configuration. If Kafka.spec.kafka.config contains... The Cluster Operator initiates... Both the inter.broker.protocol.version and the log.message.format.version . A single rolling update. After the update, the inter.broker.protocol.version must be updated manually, followed by log.message.format.version . Changing each will trigger a further rolling update. Either the inter.broker.protocol.version or the log.message.format.version . Two rolling updates. No configuration for the inter.broker.protocol.version or the log.message.format.version . Two rolling updates. Important From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set. The log.message.format.version property for brokers and the message.format.version property for topics are deprecated and will be removed in a future release of Kafka. As part of the Kafka upgrade, the Cluster Operator initiates rolling updates for ZooKeeper. A single rolling update occurs even if the ZooKeeper version is unchanged. Additional rolling updates occur if the new version of Kafka requires a new ZooKeeper version. 23.5.1. Kafka versions Kafka's log message format version and inter-broker protocol version specify, respectively, the log format version appended to messages and the version of the Kafka protocol used in a cluster. To ensure the correct versions are used, the upgrade process involves making configuration changes to existing Kafka brokers and code changes to client applications (consumers and producers). The following table shows the differences between Kafka versions: Table 23.1.
Kafka version differences AMQ Streams version Kafka version Inter-broker protocol version Log message format version ZooKeeper version 2.5 3.5.0 3.5 3.5 3.6.4 2.4 3.4.0 3.4 3.4 3.6.3 Note AMQ Streams 2.5 uses Kafka 3.5.0, but Kafka 3.4.0 is also supported for the purpose of upgrading. Inter-broker protocol version In Kafka, the network protocol used for inter-broker communication is called the inter-broker protocol . Each version of Kafka has a compatible version of the inter-broker protocol. The minor version of the protocol typically increases to match the minor version of Kafka, as shown in the preceding table. The inter-broker protocol version is set cluster-wide in the Kafka resource. To change it, you edit the inter.broker.protocol.version property in Kafka.spec.kafka.config . Log message format version When a producer sends a message to a Kafka broker, the message is encoded using a specific format. The format can change between Kafka releases, so messages specify which version of the message format they were encoded with. The properties used to set a specific message format version are as follows: message.format.version property for topics log.message.format.version property for Kafka brokers From Kafka 3.0.0, the message format version values are assumed to match the inter.broker.protocol.version and don't need to be set. The values reflect the Kafka version used. When upgrading to Kafka 3.0.0 or higher, you can remove these settings when you update the inter.broker.protocol.version . Otherwise, set the message format version based on the Kafka version you are upgrading to. The default value of message.format.version for a topic is defined by the log.message.format.version that is set on the Kafka broker. You can manually set the message.format.version of a topic by modifying its topic configuration. 23.5.2. Strategies for upgrading clients Upgrading Kafka clients ensures that they benefit from the features, fixes, and improvements that are introduced in new versions of Kafka. Upgraded clients maintain compatibility with other upgraded Kafka components. The performance and stability of the clients might also be improved. Consider the best approach for upgrading Kafka clients and brokers to ensure a smooth transition. The chosen upgrade strategy depends on whether you are upgrading brokers or clients first. Since Kafka 3.0, you can upgrade brokers and clients independently and in any order. The decision to upgrade clients or brokers first depends on several factors, such as the number of applications that need to be upgraded and how much downtime is tolerable. If you upgrade clients before brokers, some new features may not work as they are not yet supported by brokers. However, brokers can handle producers and consumers running with different versions and supporting different log message format versions. Upgrading clients when using Kafka versions older than Kafka 3.0 Before Kafka 3.0, you would configure a specific message format for brokers using the log.message.format.version property (or the message.format.version property at the topic level). This allowed brokers to support older Kafka clients that were using an outdated message format. Otherwise, the brokers would need to convert the messages from the older clients, which came with a significant performance cost. Apache Kafka Java clients have supported the latest message format version since version 0.11.
If all of your clients are using the latest message version, you can remove the log.message.format.version or message.format.version overrides when upgrading your brokers. However, if you still have clients that are using an older message format version, we recommend upgrading your clients first. Start with the consumers, then upgrade the producers before removing the log.message.format.version or message.format.version overrides when upgrading your brokers. This will ensure that all of your clients can support the latest message format version and that the upgrade process goes smoothly. You can track Kafka client names and versions using this metric: kafka.server:type=socket-server-metrics,clientSoftwareName=<name>,clientSoftwareVersion=<version>,listener=<listener>,networkProcessor=<processor> Tip The following Kafka broker metrics help monitor the performance of message down-conversion: kafka.network:type=RequestMetrics,name=MessageConversionsTimeMs,request={Produce|Fetch} provides metrics on the time taken to perform message conversion. kafka.server:type=BrokerTopicMetrics,name={Produce|Fetch}MessageConversionsPerSec,topic=([-.\w]+) provides metrics on the number of messages converted over a period of time. 23.5.3. Kafka version and image mappings When upgrading Kafka, consider your settings for the STRIMZI_KAFKA_IMAGES environment variable and the Kafka.spec.kafka.version property. Each Kafka resource can be configured with a Kafka.spec.kafka.version . The Cluster Operator's STRIMZI_KAFKA_IMAGES environment variable provides a mapping between the Kafka version and the image to be used when that version is requested in a given Kafka resource. If Kafka.spec.kafka.image is not configured, the default image for the given version is used. If Kafka.spec.kafka.image is configured, the default image is overridden. Warning The Cluster Operator cannot validate that an image actually contains a Kafka broker of the expected version. Take care to ensure that the given image corresponds to the given Kafka version. 23.5.4. Upgrading Kafka brokers and client applications Upgrade an AMQ Streams Kafka cluster to the latest supported Kafka version and inter-broker protocol version . You should also choose a strategy for upgrading clients . Kafka clients are upgraded in step 6 of this procedure. Prerequisites The Cluster Operator is up and running. Before you upgrade the AMQ Streams Kafka cluster, check that the Kafka.spec.kafka.config properties of the Kafka resource do not contain configuration options that are not supported in the new Kafka version. Procedure Update the Kafka cluster configuration: oc edit kafka <my_cluster> If configured, check that the inter.broker.protocol.version and log.message.format.version properties are set to the current version. For example, the current version is 3.4 if upgrading from Kafka version 3.4.0 to 3.5.0: kind: Kafka spec: # ... kafka: version: 3.4.0 config: log.message.format.version: "3.4" inter.broker.protocol.version: "3.4" # ... If log.message.format.version and inter.broker.protocol.version are not configured, AMQ Streams automatically updates these versions to the current defaults after the update to the Kafka version in the step. Note The value of log.message.format.version and inter.broker.protocol.version must be strings to prevent them from being interpreted as floating point numbers. 
Change the Kafka.spec.kafka.version to specify the new Kafka version; leave the log.message.format.version and inter.broker.protocol.version at the defaults for the current Kafka version. Note Changing the kafka.version ensures that all brokers in the cluster will be upgraded to start using the new broker binaries. During this process, some brokers are using the old binaries while others have already upgraded to the new ones. Leaving the inter.broker.protocol.version unchanged at the current setting ensures that the brokers can continue to communicate with each other throughout the upgrade. For example, if upgrading from Kafka 3.4.0 to 3.5.0: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 3.5.0 1 config: log.message.format.version: "3.4" 2 inter.broker.protocol.version: "3.4" 3 # ... 1 Kafka version is changed to the new version. 2 Message format version is unchanged. 3 Inter-broker protocol version is unchanged. Warning You cannot downgrade Kafka if the inter.broker.protocol.version for the new Kafka version changes. The inter-broker protocol version determines the schemas used for persistent metadata stored by the broker, including messages written to __consumer_offsets . The downgraded cluster will not understand the messages. If the image for the Kafka cluster is defined in the Kafka custom resource, in Kafka.spec.kafka.image , update the image to point to a container image with the new Kafka version. See Kafka version and image mappings . Save and exit the editor, then wait for rolling updates to complete. Check the progress of the rolling updates by verifying the broker image that each pod is using: oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}' The rolling updates ensure that each pod is using the broker binaries for the new version of Kafka. Depending on your chosen strategy for upgrading clients , upgrade all client applications to use the new version of the client binaries. If required, set the version property for Kafka Connect and MirrorMaker to the new version of Kafka: For Kafka Connect, update KafkaConnect.spec.version . For MirrorMaker, update KafkaMirrorMaker.spec.version . For MirrorMaker 2, update KafkaMirrorMaker2.spec.version . If configured, update the Kafka resource to use the new inter.broker.protocol.version . Otherwise, go to step 9. For example, if upgrading to Kafka 3.5.0: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 3.5.0 config: log.message.format.version: "3.4" inter.broker.protocol.version: "3.5" # ... Wait for the Cluster Operator to update the cluster. If configured, update the Kafka resource to use the new log.message.format.version . Otherwise, go to step 10. For example, if upgrading to Kafka 3.5.0: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 3.5.0 config: log.message.format.version: "3.5" inter.broker.protocol.version: "3.5" # ... Important From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set. Wait for the Cluster Operator to update the cluster. The Kafka cluster and clients are now using the new Kafka version. The brokers are configured to send messages using the inter-broker protocol version and message format version of the new version of Kafka. 23.6. Switching to FIPS mode when upgrading AMQ Streams Upgrade AMQ Streams to run in FIPS mode on FIPS-enabled OpenShift clusters.
Until AMQ Streams 2.4, running on FIPS-enabled OpenShift clusters was possible only by disabling FIPS mode using the FIPS_MODE environment variable. From release 2.4, AMQ Streams supports FIPS mode. If you run AMQ Streams on a FIPS-enabled OpenShift cluster with the FIPS_MODE set to disabled , you can enable it by following this procedure. Prerequisites FIPS-enabled OpenShift cluster An existing Cluster Operator deployment with the FIPS_MODE environment variable set to disabled Procedure Upgrade the Cluster Operator to version 2.4 or newer but keep the FIPS_MODE environment variable set to disabled . If you initially deployed an AMQ Streams version older than 2.3, it might use old encryption and digest algorithms in its PKCS #12 stores, which are not supported with FIPS enabled. To recreate the certificates with updated algorithms, renew the cluster and clients CA certificates. To renew the CAs generated by the Cluster Operator, add the force-renew annotation to the CA secrets to trigger a renewal . To renew your own CAs, add the new certificate to the CA secret and update the ca-cert-generation annotation with a higher incremental value to capture the update . If you use SCRAM-SHA-512 authentication, check the password length of your users. If they are less than 32 characters long, generate a new password in one of the following ways: Delete the user secret so that the User Operator generates a new one with a new password of sufficient length. If you provided your password using the .spec.authentication.password properties of the KafkaUser custom resource, update the password in the OpenShift secret referenced in the same password configuration. Don't forget to update your clients to use the new passwords. Ensure that the CA certificates are using the correct algorithms and the SCRAM-SHA-512 passwords are of sufficient length. You can then enable the FIPS mode. Remove the FIPS_MODE environment variable from the Cluster Operator deployment. This restarts the Cluster Operator and rolls all the operands to enable the FIPS mode. After the restart is complete, all Kafka clusters now run with FIPS mode enabled. | [
"env: - name: STRIMZI_FEATURE_GATES value: -ControlPlaneListener",
"env: - name: STRIMZI_FEATURE_GATES value: +ControlPlaneListener",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # template: podDisruptionBudget: maxUnavailable: 0",
"annotate pod <cluster_name> -kafka- <index> strimzi.io/manual-rolling-update=true",
"get kafka <kafka_cluster_name> -n <namespace> -o jsonpath='{.status.conditions}'",
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml",
"replace -f install/cluster-operator",
"get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'",
"registry.redhat.io/amq-streams/strimzi-kafka-35-rhel8:2.5.2",
"edit kafka <my_cluster>",
"kind: Kafka spec: # kafka: version: 3.4.0 config: log.message.format.version: \"3.4\" inter.broker.protocol.version: \"3.4\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.5.0 1 config: log.message.format.version: \"3.4\" 2 inter.broker.protocol.version: \"3.4\" 3 #",
"get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.5.0 config: log.message.format.version: \"3.4\" inter.broker.protocol.version: \"3.5\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.5.0 config: log.message.format.version: \"3.5\" inter.broker.protocol.version: \"3.5\" #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/assembly-upgrade-str |
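For illustration, the manual rolling update annotation described in this chapter can be scripted across all broker pods. The following is a minimal sketch, assuming a cluster named my-cluster with three broker replicas and the standard strimzi.io/cluster pod label; adjust the cluster name and replica count to match your deployment.
# Annotate each broker pod so the Cluster Operator rolls it in the next reconciliation
for i in 0 1 2; do
  oc annotate pod my-cluster-kafka-$i strimzi.io/manual-rolling-update=true
done
# Watch the pods restart one at a time
oc get pods -l strimzi.io/cluster=my-cluster -w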
Appendix G. Ceph Monitor and OSD configuration options | Appendix G. Ceph Monitor and OSD configuration options When modifying heartbeat settings, include them in the [global] section of the Ceph configuration file. mon_osd_min_up_ratio Description The minimum ratio of up Ceph OSD Daemons before Ceph will mark Ceph OSD Daemons down . Type Double Default .3 mon_osd_min_in_ratio Description The minimum ratio of in Ceph OSD Daemons before Ceph will mark Ceph OSD Daemons out . Type Double Default 0.75 mon_osd_laggy_halflife Description The number of seconds laggy estimates will decay. Type Integer Default 60*60 mon_osd_laggy_weight Description The weight for new samples in laggy estimation decay. Type Double Default 0.3 mon_osd_laggy_max_interval Description Maximum value of laggy_interval in laggy estimations (in seconds). The monitor uses an adaptive approach to evaluate the laggy_interval of a certain OSD. This value will be used to calculate the grace time for that OSD. Type Integer Default 300 mon_osd_adjust_heartbeat_grace Description If set to true , Ceph will scale based on laggy estimations. Type Boolean Default true mon_osd_adjust_down_out_interval Description If set to true , Ceph will scale based on laggy estimations. Type Boolean Default true mon_osd_auto_mark_in Description Ceph will mark any booting Ceph OSD Daemons as in the Ceph Storage Cluster. Type Boolean Default false mon_osd_auto_mark_auto_out_in Description Ceph will mark booting Ceph OSD Daemons that were automatically marked out of the Ceph Storage Cluster as in the cluster. Type Boolean Default true mon_osd_auto_mark_new_in Description Ceph will mark booting new Ceph OSD Daemons as in the Ceph Storage Cluster. Type Boolean Default true mon_osd_down_out_interval Description The number of seconds Ceph waits before marking a Ceph OSD Daemon down and out if it does not respond. Type 32-bit Integer Default 600 mon_osd_downout_subtree_limit Description The largest CRUSH unit type that Ceph will automatically mark out . Type String Default rack mon_osd_reporter_subtree_level Description This setting defines the parent CRUSH unit type for the reporting OSDs. The OSDs send failure reports to the monitor if they find an unresponsive peer. The monitor may mark the reported OSD down and then out after a grace period. Type String Default host mon_osd_report_timeout Description The grace period in seconds before declaring unresponsive Ceph OSD Daemons down . Type 32-bit Integer Default 900 mon_osd_min_down_reporters Description The minimum number of Ceph OSD Daemons required to report a down Ceph OSD Daemon. Type 32-bit Integer Default 2 osd_heartbeat_address Description A Ceph OSD Daemon's network address for heartbeats. Type Address Default The host address. osd_heartbeat_interval Description How often a Ceph OSD Daemon pings its peers (in seconds). Type 32-bit Integer Default 6 osd_heartbeat_grace Description The elapsed time without a heartbeat after which the Ceph Storage Cluster considers a Ceph OSD Daemon down . Type 32-bit Integer Default 20 osd_mon_heartbeat_interval Description How often the Ceph OSD Daemon pings a Ceph Monitor if it has no Ceph OSD Daemon peers. Type 32-bit Integer Default 30 osd_mon_report_interval_max Description The maximum time in seconds that a Ceph OSD Daemon can wait before it must report to a Ceph Monitor.
Type 32-bit Integer Default 120 osd_mon_report_interval_min Description The minimum number of seconds a Ceph OSD Daemon may wait from startup or another reportable event before reporting to a Ceph Monitor. Type 32-bit Integer Default 5 Valid Range Should be less than osd mon report interval max osd_mon_ack_timeout Description The number of seconds to wait for a Ceph Monitor to acknowledge a request for statistics. Type 32-bit Integer Default 30 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/configuration_guide/ceph-monitor-and-osd-configuration-options_conf |
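As an illustration of the guidance above, a few of these heartbeat options might appear in the [global] section of the Ceph configuration file as follows. This is a minimal sketch that repeats the documented default values; in practice, you add an option only when you intend to change its value.
[global]
# Seconds Ceph waits before marking an unresponsive OSD down and out
mon_osd_down_out_interval = 600
# How often an OSD pings its peers, in seconds
osd_heartbeat_interval = 6
# Seconds without a heartbeat before a peer OSD is considered down
osd_heartbeat_grace = 20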
2. Installer | 2. Installer The Red Hat Enterprise Linux installer (also known as anaconda ) assists in the installation of Red Hat Enterprise Linux 6. This section of the release notes provides an overview of the new features implemented in the installer for Red Hat Enterprise Linux 6. Note The Red Hat Enterprise Linux 6 Installation Guide provides detailed documentation of the installer and the installation process. 2.1. Installation Methods The installer provides three main interfaces to install Red Hat Enterprise Linux: kickstart , the graphical installer, and the text-based installer . 2.1.1. Graphical Installer The Red Hat Enterprise Linux graphical installer guides the user through the major steps involved in preparing a system for installation. The Red Hat Enterprise Linux 6 installation GUI introduces major usability enhancements for disk partitioning and storage configuration. Early in the installation process, the user is now given the choice of basic storage devices or specialized storage devices . Basic Storage Devices typically do not need any additional configuration settings before the device is usable. A new interface has been implemented for configuring specialized storage devices. Firmware RAID devices, Fibre Channel over Ethernet (FCoE) devices, multipath devices, and other storage area network (SAN) devices can now be easily configured using the new interface. Figure 1. Specialized Storage Devices Configuration The interface for choosing partitioning layouts has been enhanced, providing detailed descriptions and diagrams for each default partitioning layout. Figure 2. Partitioning layout choices The Installer allows storage devices to be specified as either install target devices or data storage devices prior to installation. Figure 3. Specifying Storage Devices 2.1.2. Kickstart Kickstart is an automated installation method that system administrators use to install Red Hat Enterprise Linux. Using kickstart, a single file is created, containing the answers to all the questions that would normally be asked during a typical installation. Red Hat Enterprise Linux 6 introduces improvements to the validation of kickstart files, allowing the installer to capture issues with kickstart file syntax before an installation commences. 2.1.3. Text-based Installer The text-based installer is provided primarily for systems with limited resources. It has been simplified, permitting installation to the default disk layouts, and installation of new and updated packages. Figure 4. text-based installer Note Some installations require advanced installation options that are not present in the text-based installer. If the target system cannot run the graphical installer locally, use the Virtual Network Computing (VNC) display protocol to complete the installation. 2.2. Creating Backup Passphrases During Installation The installer in Red Hat Enterprise Linux 6 provides the ability to save encryption keys and create backup passphrases for encrypted filesystems. This feature is discussed in further detail in Section 8.3, "Backup Passphrases for Encrypted Storage Devices" Note Currently, creating backup passphrases for encrypted devices during installation can only be achieved during a kickstart installation. For more information on this new feature, including how to use it in a kickstart installation of Red Hat Enterprise Linux 6, refer to the Disk Encryption appendix in the Installation Guide. 2.3.
DVD Media Boot Catalog Entries The DVD media for Red Hat Enterprise Linux 6 include boot catalog entries for both BIOS- and UEFI-based computers. This allows the media to boot systems based on either firmware interface. (UEFI is the Unified Extensible Firmware Interface, a standard software interface initially developed by Intel and now managed by the Unified EFI Forum. It is intended as a replacement for the older BIOS firmware.) Important Some systems with very old BIOS implementations will not boot from media which include more than one boot catalog entry. Such systems will not boot from a Red Hat Enterprise Linux 6 DVD but may be bootable using a USB drive or over a network using PXE. Note UEFI and BIOS boot configurations differ significantly from each other and are not interchangeable. An installed instance of Red Hat Enterprise Linux 6 will not boot if the firmware it was configured for is changed. You cannot, for example, install the operating system on a BIOS-based system and then boot the installed instance on a UEFI-based system. 2.4. Installation Crash Reporting Red Hat Enterprise Linux 6 features enhanced installation crash reporting in the installer. If the installer encounters an error during the installation process, details of the error are reported to the user. Figure 5. installation error reporting The details of the error can be instantly reported to the Red Hat Bugzilla bug tracking website , or in cases where there is no internet connectivity, saved locally to disk. Figure 6. Sending to Bugzilla 2.5. Installation Logs To assist troubleshooting and debugging of installations, additional details are now included in log files produced by the installer. Further information on installation logs, and how to use them for troubleshooting can be found in the following sections of the Installation Guide. Troubleshooting Installation on an Intel or AMD System Troubleshooting Installation on an IBM POWER System Troubleshooting Installation on an IBM System z System | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_release_notes/installer |
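To illustrate the kickstart method described above, the following is a minimal sketch of a kickstart file for an automated Red Hat Enterprise Linux 6 installation. The password hash, time zone, and package selection are hypothetical placeholders; a real kickstart file is typically generated from an interactive installation and then adapted.
install
cdrom
lang en_US.UTF-8
keyboard us
rootpw --iscrypted <crypted_password_hash>
timezone America/New_York
bootloader --location=mbr
clearpart --all --initlabel
autopart
%packages
@base
%end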
Chapter 2. Performing rolling upgrades for Data Grid Server clusters | Chapter 2. Performing rolling upgrades for Data Grid Server clusters Perform rolling upgrades of your Data Grid clusters to change between versions without downtime or data loss and migrate data over the Hot Rod protocol. 2.1. Setting up target Data Grid clusters Create a cluster that uses the Data Grid version to which you plan to upgrade and then connect the source cluster to the target cluster using a remote cache store. Prerequisites Install Data Grid Server nodes with the desired version for your target cluster. Important Ensure the network properties for the target cluster do not overlap with those for the source cluster. You should specify unique names for the target and source clusters in the JGroups transport configuration. Depending on your environment you can also use different network interfaces and port offsets to separate the target and source clusters. Procedure Create a remote cache store configuration, in JSON format, that allows the target cluster to connect to the source cluster. Remote cache stores on the target cluster use the Hot Rod protocol to retrieve data from the source cluster. { "remote-store": { "cache": "myCache", "shared": true, "raw-values": true, "security": { "authentication": { "digest": { "username": "username", "password": "changeme", "realm": "default" } } }, "remote-server": [ { "host": "127.0.0.1", "port": 12222 } ] } } Use the Data Grid Command Line Interface (CLI) or REST API to add the remote cache store configuration to the target cluster so it can connect to the source cluster. CLI: Use the migrate cluster connect command on the target cluster. REST API: Invoke a POST request that includes the remote store configuration in the payload with the rolling-upgrade/source-connection method. Repeat the preceding step for each cache that you want to migrate. Switch clients over to the target cluster, so it starts handling all requests. Update client configuration with the location of the target cluster. Restart clients. Important If you need to migrate Indexed caches you must first migrate the internal ___protobuf_metadata cache so that the .proto schemas defined on the source cluster will also be present on the target cluster. Additional resources Remote cache store configuration schema 2.2. Synchronizing data to target clusters When you set up a target Data Grid cluster and connect it to a source cluster, the target cluster can handle client requests using a remote cache store and load data on demand. To completely migrate data to the target cluster, so you can decommission the source cluster, you can synchronize data. This operation reads data from the source cluster and writes it to the target cluster. Data migrates to all nodes in the target cluster in parallel, with each node receiving a subset of the data. You must perform the synchronization for each cache that you want to migrate to the target cluster. Prerequisites Set up a target cluster with the appropriate Data Grid version. Procedure Start synchronizing each cache that you want to migrate to the target cluster with the Data Grid Command Line Interface (CLI) or REST API. CLI: Use the migrate cluster synchronize command. REST API: Use the ?action=sync-data parameter with a POST request. When the operation completes, Data Grid responds with the total number of entries copied to the target cluster. Disconnect each node in the target cluster from the source cluster. CLI: Use the migrate cluster disconnect command. 
REST API: Invoke a DELETE request. Next steps After you synchronize all data from the source cluster, the rolling upgrade process is complete. You can now decommission the source cluster. | [
"{ \"remote-store\": { \"cache\": \"myCache\", \"shared\": true, \"raw-values\": true, \"security\": { \"authentication\": { \"digest\": { \"username\": \"username\", \"password\": \"changeme\", \"realm\": \"default\" } } }, \"remote-server\": [ { \"host\": \"127.0.0.1\", \"port\": 12222 } ] } }",
"[//containers/default]> migrate cluster connect -c myCache --file=remote-store.json",
"POST /rest/v2/caches/myCache/rolling-upgrade/source-connection",
"migrate cluster synchronize -c myCache",
"POST /rest/v2/caches/myCache?action=sync-data",
"migrate cluster disconnect -c myCache",
"DELETE /rest/v2/caches/myCache/rolling-upgrade/source-connection"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/upgrading_data_grid/rolling-upgrades |
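For illustration, the REST calls above can be combined into a small migration script for a single cache. This is a minimal sketch, assuming a cache named myCache, a target cluster endpoint at localhost:11222, and admin/changeme credentials; all of these values are placeholders, and the authentication mechanism should be adjusted to match your configuration.
# Connect the target cluster to the source cluster using the remote store definition
curl -u admin:changeme -X POST -H 'Content-Type: application/json' --data @remote-store.json http://localhost:11222/rest/v2/caches/myCache/rolling-upgrade/source-connection
# Synchronize all data for the cache; the response reports the number of entries copied
curl -u admin:changeme -X POST 'http://localhost:11222/rest/v2/caches/myCache?action=sync-data'
# Disconnect from the source cluster once synchronization completes
curl -u admin:changeme -X DELETE http://localhost:11222/rest/v2/caches/myCache/rolling-upgrade/source-connection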
Chapter 3. Packaging software | Chapter 3. Packaging software 3.1. RPM packages This section covers the basics of the RPM packaging format. 3.1.1. What an RPM is An RPM package is a file containing other files and their metadata (information about the files that are needed by the system). Specifically, an RPM package consists of a cpio archive. The cpio archive contains: Files RPM header (package metadata) The rpm package manager uses this metadata to determine dependencies, where to install files, and other information. Types of RPM packages There are two types of RPM packages. Both types share the file format and tooling, but have different contents and serve different purposes: Source RPM (SRPM) An SRPM contains source code and a SPEC file, which describes how to build the source code into a binary RPM. Optionally, the patches to source code are included as well. Binary RPM A binary RPM contains the binaries built from the sources and patches. 3.1.2. Listing RPM packaging tool's utilities The following procedure shows how to list the utilities provided by the rpmdevtools package. Prerequisites To be able to use the RPM packaging tools, you need to install the rpmdevtools package, which provides several utilities for packaging RPMs. Procedure List RPM packaging tool's utilities: Additional information For more information on the above utilities, see their manual pages or help dialogs. 3.1.3. Setting up RPM packaging workspace This section describes how to set up a directory layout that is the RPM packaging workspace by using the rpmdev-setuptree utility. Prerequisites The rpmdevtools package must be installed on your system: Procedure Run the rpmdev-setuptree utility: The created directories serve these purposes: Directory Purpose BUILD When packages are built, various %buildroot directories are created here. This is useful for investigating a failed build if the log output does not provide enough information. RPMS Binary RPMs are created here, in subdirectories for different architectures, for example in subdirectories x86_64 and noarch . SOURCES Here, the packager puts compressed source code archives and patches. The rpmbuild command looks for them here. SPECS The packager puts SPEC files here. SRPMS When rpmbuild is used to build an SRPM instead of a binary RPM, the resulting SRPM is created here. 3.1.4. What a SPEC file is You can understand a SPEC file as a recipe that the rpmbuild utility uses to build an RPM. A SPEC file provides necessary information to the build system by defining instructions in a series of sections. The sections are defined in the Preamble and the Body part. The Preamble part contains a series of metadata items that are used in the Body part. The Body part represents the main part of the instructions. 3.1.4.1. Preamble Items The table below presents some of the directives that are used frequently in the Preamble section of the RPM SPEC file. Table 3.1. Items used in the Preamble section of the RPM SPEC file SPEC Directive Definition Name The base name of the package, which should match the SPEC file name. Version The upstream version number of the software. Release The number of times this version of the software was released. Normally, set the initial value to 1%{?dist}, and increment it with each new release of the package. Reset to 1 when a new Version of the software is built. Summary A brief, one-line summary of the package. License The license of the software being packaged. URL The full URL for more information about the program.
Most often this is the upstream project website for the software being packaged. Source0 Path or URL to the compressed archive of the upstream source code (unpatched, patches are handled elsewhere). This should point to an accessible and reliable storage of the archive, for example, the upstream page and not the packager's local storage. If needed, more SourceX directives can be added, incrementing the number each time, for example: Source1, Source2, Source3, and so on. Patch The name of the first patch to apply to the source code if necessary. The directive can be applied in two ways: with or without numbers at the end of Patch. If no number is given, one is assigned to the entry internally. It is also possible to give the numbers explicitly using Patch0, Patch1, Patch2, Patch3, and so on. These patches can be applied one by one using the %patch0, %patch1, %patch2 macro and so on. The macros are applied within the %prep directive in the Body section of the RPM SPEC file. Alternatively, you can use the %autopatch macro which automatically applies all patches in the order they are given in the SPEC file. BuildArch If the package is not architecture dependent, for example, if written entirely in an interpreted programming language, set this to BuildArch: noarch . If not set, the package automatically inherits the Architecture of the machine on which it is built, for example x86_64 . BuildRequires A comma- or whitespace-separated list of packages required for building the program written in a compiled language. There can be multiple entries of BuildRequires , each on its own line in the SPEC file. Requires A comma- or whitespace-separated list of packages required by the software to run once installed. There can be multiple entries of Requires , each on its own line in the SPEC file. ExcludeArch If a piece of software cannot operate on a specific processor architecture, you can exclude that architecture here. Conflicts Conflicts is the inverse of Requires . If there is a package matching Conflicts , the package cannot be installed, regardless of whether the Conflicts tag is on the package that has already been installed or on the package that is going to be installed. Obsoletes This directive alters the way updates work depending on whether the rpm command is used directly on the command line or the update is performed by an updates or dependency solver. When used on a command line, RPM removes all packages matching obsoletes of packages being installed. When using an update or dependency resolver, packages containing matching Obsoletes: are added as updates and replace the matching packages. Provides If Provides is added to a package, the package can be referred to by dependencies other than its name. The Name , Version , and Release directives comprise the file name of the RPM package. RPM package maintainers and system administrators often call these three directives N-V-R or NVR , because RPM package filenames have the NAME-VERSION-RELEASE format. The following example shows how to obtain the NVR information for a specific package by querying the rpm command. Example 3.1. Querying rpm to provide the NVR information for the bash package Here, bash is the package name, 4.2.46 is the version, and 34.el7 is the release. The final marker is x86_64 , which signals the architecture. Unlike the NVR , the architecture marker is not under direct control of the RPM packager, but is defined by the rpmbuild build environment. The exception to this is the architecture-independent noarch package. 3.1.4.2.
Body Items The items used in the Body section of the RPM SPEC file are listed in the table below. Table 3.2. Items used in the Body section of the RPM SPEC file SPEC Directive Definition %description A full description of the software packaged in the RPM. This description can span multiple lines and can be broken into paragraphs. %prep Command or series of commands to prepare the software to be built, for example, unpacking the archive in Source0 . This directive can contain a shell script. %build Command or series of commands for building the software into machine code (for compiled languages) or byte code (for some interpreted languages). %install Command or series of commands for copying the desired build artifacts from the %builddir (where the build happens) to the %buildroot directory (which contains the directory structure with the files to be packaged). This usually means copying files from ~/rpmbuild/BUILD to ~/rpmbuild/BUILDROOT and creating the necessary directories in ~/rpmbuild/BUILDROOT . This is only run when creating a package, not when the end-user installs the package. See Section 3.2, "Working with SPEC files" for details. %check Command or series of commands to test the software. This normally includes things such as unit tests. %files The list of files that will be installed in the end user's system. %changelog A record of changes that have happened to the package between different Version or Release builds. 3.1.4.3. Advanced items The SPEC file can also contain advanced items, such as Scriptlets or Triggers . They take effect at different points during the installation process on the end user's system, not the build process. 3.1.5. BuildRoots In the context of RPM packaging, buildroot is a chroot environment. This means that the build artifacts are placed here using the same file system hierarchy as the future hierarchy in the end user's system, with buildroot acting as the root directory. The placement of build artifacts should comply with the file system hierarchy standard of the end user's system. The files in buildroot are later put into a cpio archive, which becomes the main part of the RPM. When RPM is installed on the end user's system, these files are extracted in the root directory, preserving the correct hierarchy. Note Starting from Red Hat Enterprise Linux 6, the rpmbuild program has its own defaults. Overriding these defaults leads to several problems; hence, Red Hat does not recommend defining your own value of this macro. You can use the %{buildroot} macro with the defaults from the rpmbuild directory. 3.1.6. RPM macros An rpm macro is a straight text substitution that can be conditionally assigned based on the optional evaluation of a statement when certain built-in functionality is used. Hence, RPM can perform text substitutions for you. An example use is referencing the packaged software Version multiple times in a SPEC file. You define Version only once in the %{version} macro, and use this macro throughout the SPEC file. Every occurrence will be automatically substituted by the Version that you defined previously. Note If you see an unfamiliar macro, you can evaluate it with the following command: Evaluating the %{_bindir} and the %{_libexecdir} macros One of the commonly used macros is the %{?dist} macro, which signals which distribution is used for the build (distribution tag). 3.2. Working with SPEC files This section describes how to create and modify a SPEC file. Prerequisites This section uses the three example implementations of the Hello World!
program that were described in Section 2.1.1, "Source code examples" . Each of the programs is also fully described in the below table. Software Name Explanation of example bello A program written in a raw interpreted programming language. It demonstrates when the source code does not need to be built, but only needs to be installed. If a pre-compiled binary needs to be packaged, you can also use this method since the binary would also just be a file. pello A program written in a byte-compiled interpreted programming language. It demonstrates byte-compiling the source code and installing the bytecode - the resulting pre-optimized files. cello A program written in a natively compiled programming language. It demonstrates a common process of compiling the source code into machine code and installing the resulting executables. The implementations of Hello World! are: bello-0.1.tar.gz pello-0.1.2.tar.gz cello-1.0.tar.gz cello-output-first-patch.patch As a prerequisite, these implementations need to be placed into the ~/rpmbuild/SOURCES directory. 3.2.1. Ways to create a new SPEC file To package new software, you need to create a new SPEC file. There are two ways to achieve this: Writing the new SPEC file manually from scratch Using the rpmdev-newspec utility This utility creates an unpopulated SPEC file, and you fill in the necessary directives and fields. Note Some programmer-focused text editors pre-populate a new .spec file with their own SPEC template. The rpmdev-newspec utility provides an editor-agnostic method. 3.2.2. Creating a new SPEC file with rpmdev-newspec The following procedure shows how to create a SPEC file for each of the three aforementioned Hello World! programs using the rpmdev-newspec utility. Procedure Change to the ~/rpmbuild/SPECS directory and use the rpmdev-newspec utility: The ~/rpmbuild/SPECS/ directory now contains three SPEC files named bello.spec , cello.spec , and pello.spec . Examine the files: Note The rpmdev-newspec utility does not use guidelines or conventions specific to any particular Linux distribution. However, this document targets Red Hat Enterprise Linux, so the %{buildroot} notation is preferred over the USDRPM_BUILD_ROOT notation when referencing RPM's Buildroot for consistency with all other defined or provided macros throughout the SPEC file. 3.2.3. Modifying an original SPEC file for creating RPMs The following procedure shows how to modify the output SPEC file provided by rpmdev-newspec for creating the RPMs. Prerequisites Make sure that: The source code of the particular program has been placed into the ~/rpmbuild/SOURCES/ directory. The unpopulated ~/rpmbuild/SPECS/<name>.spec file has been created by the rpmdev-newspec utility. Procedure Open the output template of the ~/rpmbuild/SPECS/<name>.spec file provided by the rpmdev-newspec utility: Populate the first section of the SPEC file: The first section includes these directives that rpmdev-newspec grouped together: Name Version Release Summary The Name was already specified as an argument to rpmdev-newspec . Set the Version to match the upstream release version of the source code. The Release is automatically set to 1%{?dist} , which is initially 1 . Increment the initial value whenever updating the package without a change in the upstream release Version - such as when including a patch. Reset Release to 1 when a new upstream release happens. The Summary is a short, one-line explanation of what this software is.
Populate the License , URL , and Source0 directives: The License field is the Software License associated with the source code from the upstream release. The exact format for how to label the License in your SPEC file will vary depending on which specific RPM-based Linux distribution guidelines you are following. For example, you can use GPLv3+ . The URL field provides a URL to the upstream software website. For consistency, utilize the RPM macro variable of %{name} , and use https://example.com/%{name} . The Source0 field provides a URL to the upstream software source code. It should link directly to the specific version of software that is being packaged. Note that the example URLs given in this documentation include hard-coded values that are possibly subject to change in the future. Similarly, the release version can change as well. To simplify these potential future changes, use the %{name} and %{version} macros. By using these, you need to update only one field in the SPEC file. Populate the BuildRequires , Requires and BuildArch directives: BuildRequires specifies build-time dependencies for the package. Requires specifies run-time dependencies for the package. This is software written in an interpreted programming language with no natively compiled extensions. Hence, add the BuildArch directive with the noarch value. This tells RPM that this package does not need to be bound to the processor architecture on which it is built. Populate the %description , %prep , %build , %install , %files , and %license directives: These directives can be thought of as section headings, because they are directives that can define multi-line, multi-instruction, or scripted tasks to occur. The %description is a longer, fuller description of the software than Summary , containing one or more paragraphs. The %prep section specifies how to prepare the build environment. This usually involves expansion of compressed archives of the source code, application of patches, and, potentially, parsing of information provided in the source code for use in a later portion of the SPEC file. In this section you can use the built-in %setup -q macro. The %build section specifies how to build the software. The %install section contains instructions for rpmbuild on how to install the software, once it has been built, into the BUILDROOT directory. This directory is an empty chroot base directory, which resembles the end user's root directory. Here you can create any directories that will contain the installed files. To create such directories, you can use the RPM macros without having to hardcode the paths. The %files section specifies the list of files provided by this RPM and their full path location on the end user's system. Within this section, you can indicate the role of various files using built-in macros. This is useful for querying the package file manifest metadata using the rpm command. For example, to indicate that the LICENSE file is a software license file, use the %license macro. The last section, %changelog , is a list of datestamped entries for each Version-Release of the package. They log packaging changes, not software changes. Examples of packaging changes: adding a patch, changing the build procedure in the %build section. Follow this format for the first line: Start with an * character followed by Day-of-Week Month Day Year Name Surname <email> - Version-Release Follow this format for the actual change entry: Each change entry can contain multiple items, one for each change.
Each item starts on a new line. Each item begins with a - character. You have now written an entire SPEC file for the required program. For examples of SPEC files written in different programming languages, see: 3.2.4. An example SPEC file for a program written in bash This section shows an example SPEC file for the bello program that was written in bash. For more information about bello , see Section 2.1.1, "Source code examples" . An example SPEC file for the bello program written in bash The BuildRequires directive, which specifies build-time dependencies for the package, was deleted because there is no building step for bello . Bash is a raw interpreted programming language, and the files are just installed to their location on the system. The Requires directive, which specifies run-time dependencies for the package, includes only bash , because the bello script requires only the bash shell environment to execute. The %build section, which specifies how to build the software, is blank, because a bash script does not need to be built. To install bello , you only need to create the destination directory and install the executable bash script file there. Hence, you can use the install command in the %install section. RPM macros allow you to do this without hardcoding paths. 3.2.5. An example SPEC file for a program written in Python This section shows an example SPEC file for the pello program written in the Python programming language. For more information about pello , see Section 2.1.1, "Source code examples" . An example SPEC file for the pello program written in Python Important The pello program is written in a byte-compiled interpreted language. Hence, the shebang is not applicable because the resulting file does not contain the entry point. Because the shebang is not applicable, you may want to apply one of the following approaches: Create a non-byte-compiled shell script that will call the executable. Provide a small bit of the Python code that is not byte-compiled as the entry point into the program's execution. These approaches are useful especially for large software projects with many thousands of lines of code, where the performance increase of pre-byte-compiled code is sizeable. The BuildRequires directive, which specifies build-time dependencies for the package, includes two packages: The python package is needed to perform the byte-compile build process The bash package is needed to execute the small entry-point script The Requires directive, which specifies run-time dependencies for the package, includes only the python package. The pello program requires the python package to execute the byte-compiled code at runtime. The %build section, which specifies how to build the software, corresponds to the fact that the software is byte-compiled. To install pello , you need to create a wrapper script because the shebang is not applicable in byte-compiled languages. There are multiple options to accomplish this, such as: Making a separate script and using that as a separate SourceX directive. Creating the file in-line in the SPEC file. This example shows creating a wrapper script in-line in the SPEC file to demonstrate that the SPEC file itself is scriptable. This wrapper script will execute the Python byte-compiled code by using a here document. The %install section in this example also corresponds to the fact that you will need to install the byte-compiled file into a library directory on the system such that it can be accessed. 3.2.6.
An example SPEC file for a program written in C This section shows an example SPEC file for the cello program that was written in the C programming language. For more information about cello , see Section 2.1.1, "Source code examples" . An example SPEC file for the cello program written in C The BuildRequires directive, which specifies build-time dependencies for the package, includes two packages that are needed to perform the compilation build process: The gcc package The make package The Requires directive, which specifies run-time dependencies for the package, is omitted in this example. All runtime requirements are handled by rpmbuild , and the cello program does not require anything outside of the core C standard libraries. The %build section reflects the fact that in this example a Makefile for the cello program was written, hence the GNU make command provided by the rpmdev-newspec utility can be used. However, you need to remove the call to %configure because you did not provide a configure script. The installation of the cello program can be accomplished by using the %make_install macro that was provided by the rpmdev-newspec command. This is possible because the Makefile for the cello program is available. 3.3. Building RPMs This section describes how to build an RPM after a SPEC file for a program has been created. RPMs are built with the rpmbuild command. This command expects a certain directory and file structure, which is the same as the structure that was set up by the rpmdev-setuptree utility. Different use cases and desired outcomes require different combinations of arguments to the rpmbuild command. This section describes the two main use cases: Building source RPMs Building binary RPMs 3.3.1. Building source RPMs Prerequisites A SPEC file for the program that you want to package must already exist. For more information on creating SPEC files, see Working with SPEC files . Procedure The following procedure describes how to build a source RPM. Run the rpmbuild command with the specified SPEC file: Substitute SPECFILE with the SPEC file. The -bs option stands for build source . The following example shows building source RPMs for the bello , pello , and cello projects. Building source RPMs for bello, pello, and cello. Verification steps Make sure that the rpmbuild/SRPMS directory includes the resulting source RPMs. The directory is a part of the structure expected by rpmbuild . 3.3.2. Building binary RPMs The following methods are available for building binary RPMs: Rebuilding a binary RPM from a source RPM Building a binary RPM from the SPEC file Building RPMs from source RPMs 3.3.2.1. Rebuilding a binary RPM from a source RPM The following procedure shows how to rebuild a binary RPM from a source RPM (SRPM). Procedure To rebuild bello , pello , and cello from their SRPMs, run: Note Invoking rpmbuild --rebuild involves: Installing the contents of the SRPM - the SPEC file and the source code - into the ~/rpmbuild/ directory. Building using the installed contents. Removing the SPEC file and the source code. To retain the SPEC file and the source code after building, you can: When building, use the rpmbuild command with the --recompile option instead of the --rebuild option. Install the SRPMs using these commands: The output generated when creating a binary RPM is verbose, which is helpful for debugging.
The output varies for different examples and corresponds to their SPEC files. The resulting binary RPMs are in the ~/rpmbuild/RPMS/YOURARCH directory where YOURARCH is your architecture or in the ~/rpmbuild/RPMS/noarch/ directory, if the package is not architecture-specific. 3.3.2.2. Building a binary RPM from the SPEC file The following procedure shows how to build bello , pello , and cello binary RPMs from their SPEC files. Procedure Run the rpmbuild command with the -bb option: 3.3.2.3. Building RPMs from source RPMs It is also possible to build any kind of RPM from a source RPM. To do so, use the following procedure. Procedure Run the rpmbuild command with one of the below options and with the source package specified: Additional resources For more details on building RPMs from source RPMs, see the BUILDING PACKAGES section on the rpmbuild(8) man page. 3.4. Checking RPMs for sanity After creating a package, check the quality of the package. The main tool for checking package quality is rpmlint . The rpmlint tool does the following: Improves RPM maintainability. Enables sanity checking by performing static analysis of the RPM. Enables error checking by performing static analysis of the RPM. The rpmlint tool can check binary RPMs, source RPMs (SRPMs), and SPEC files, so it is useful for all stages of packaging, as shown in the following examples. Note that rpmlint has very strict guidelines; hence it is sometimes acceptable to skip some of its errors and warnings, as shown in the following examples. Note In the following examples, rpmlint is run without any options, which produces a non-verbose output. For detailed explanations of each error or warning, you can run rpmlint -i instead. 3.4.1. Checking bello for sanity This section shows possible warnings and errors that can occur when checking RPM sanity on the example of the bello SPEC file and bello binary RPM. 3.4.1.1. Checking the bello SPEC File Example 3.2. Output of running the rpmlint command on the SPEC file for bello For bello.spec , there is only one warning, which says that the URL listed in the Source0 directive is unreachable. This is expected, because the specified example.com URL does not exist. Presuming that we expect this URL to work in the future, we can ignore this warning. Example 3.3. Output of running the rpmlint command on the SRPM for bello For the bello SRPM, there is a new warning, which says that the URL specified in the URL directive is unreachable. Assuming the link will work in the future, we can ignore this warning. 3.4.1.2. Checking the bello binary RPM When checking binary RPMs, rpmlint checks for the following items: Documentation Manual pages Consistent use of the filesystem hierarchy standard Example 3.4. Output of running the rpmlint command on the binary RPM for bello The no-documentation and no-manual-page-for-binary warnings say that the RPM has no documentation or manual pages, because we did not provide any. Apart from the above warnings, the RPM passed rpmlint checks. 3.4.2. Checking pello for sanity This section shows possible warnings and errors that can occur when checking RPM sanity on the example of the pello SPEC file and pello binary RPM. 3.4.2.1. Checking the pello SPEC File Example 3.5. Output of running the rpmlint command on the SPEC file for pello The invalid-url Source0 warning says that the URL listed in the Source0 directive is unreachable. This is expected, because the specified example.com URL does not exist.
Presuming that this URL will work in the future, you can ignore this warning. The hardcoded-library-path errors suggest using the %{_libdir} macro instead of hard-coding the library path. For the sake of this example, you can safely ignore these errors. However, for packages going into production, make sure to check all errors carefully. Example 3.6. Output of running the rpmlint command on the SRPM for pello The new invalid-url URL warning here is about the URL directive, which is unreachable. Assuming that the URL will be valid in the future, you can safely ignore this warning. 3.4.2.2. Checking the pello binary RPM When checking binary RPMs, rpmlint checks for the following items: Documentation Manual pages Consistent use of the Filesystem Hierarchy Standard Example 3.7. Output of running the rpmlint command on the binary RPM for pello The no-documentation and no-manual-page-for-binary warnings say that the RPM has no documentation or manual pages, because you did not provide any. The only-non-binary-in-usr-lib warning says that you provided only non-binary artifacts in /usr/lib/ . This directory is normally reserved for shared object files, which are binary files. Therefore, rpmlint expects at least one or more files in the /usr/lib/ directory to be binary. This is an example of an rpmlint check for compliance with the Filesystem Hierarchy Standard. Normally, use RPM macros to ensure the correct placement of files. For the sake of this example, you can safely ignore this warning. The non-executable-script error warns that the /usr/lib/pello/pello.py file has no execute permissions. The rpmlint tool expects the file to be executable, because the file contains a shebang. For the purpose of this example, you can leave this file without execute permissions and ignore this error. Apart from the above warnings and errors, the RPM passed rpmlint checks. 3.4.3. Checking cello for sanity This section shows possible warnings and errors that can occur when checking RPM sanity on the example of the cello SPEC file and cello binary RPM. 3.4.3.1. Checking the cello SPEC File Example 3.8. Output of running the rpmlint command on the SPEC file for cello For cello.spec , there is only one warning, which says that the URL listed in the Source0 directive is unreachable. This is expected, because the specified example.com URL does not exist. Presuming that this URL will work in the future, you can ignore this warning. Example 3.9. Output of running the rpmlint command on the SRPM for cello For the cello SRPM, there is a new warning, which says that the URL specified in the URL directive is unreachable. Assuming that the link will work in the future, you can ignore this warning. 3.4.3.2. Checking the cello binary RPM When checking binary RPMs, rpmlint checks for the following items: Documentation Manual pages Consistent use of the Filesystem Hierarchy Standard Example 3.10. Output of running the rpmlint command on the binary RPM for cello The no-documentation and no-manual-page-for-binary warnings say that the RPM has no documentation or manual pages, because you did not provide any. Apart from the above warnings, the RPM passed rpmlint checks. | [
"yum install rpmdevtools",
"rpm -ql rpmdevtools | grep bin",
"yum install rpmdevtools",
"rpmdev-setuptree tree ~/rpmbuild/ /home/<username>/rpmbuild/ |-- BUILD |-- RPMS |-- SOURCES |-- SPECS `-- SRPMS 5 directories, 0 files",
"rpm -q bash bash-4.2.46-34.el7.x86_64",
"rpm --eval %{_MACRO}",
"rpm --eval %{_bindir} /usr/bin rpm --eval %{_libexecdir} /usr/libexec",
"On a RHEL 8.x machine rpm --eval %{?dist} .el8",
"cd ~/rpmbuild/SPECS rpmdev-newspec bello bello.spec created; type minimal, rpm version >= 4.11. rpmdev-newspec cello cello.spec created; type minimal, rpm version >= 4.11. rpmdev-newspec pello pello.spec created; type minimal, rpm version >= 4.11.",
"Name: bello Version: 0.1 Release: 1%{?dist} Summary: Hello World example implemented in bash script License: GPLv3+ URL: https://www.example.com/%{name} Source0: https://www.example.com/%{name}/releases/%{name}-%{version}.tar.gz Requires: bash BuildArch: noarch %description The long-tail description for our Hello World Example implemented in bash script. %prep %setup -q %build %install mkdir -p %{buildroot}/%{_bindir} install -m 0755 %{name} %{buildroot}/%{_bindir}/%{name} %files %license LICENSE %{_bindir}/%{name} %changelog * Tue May 31 2016 Adam Miller < [email protected] > - 0.1-1 - First bello package - Example second item in the changelog for version-release 0.1-1",
"Name: pello Version: 0.1.1 Release: 1%{?dist} Summary: Hello World example implemented in Python License: GPLv3+ URL: https://www.example.com/%{name} Source0: https://www.example.com/%{name}/releases/%{name}-%{version}.tar.gz BuildRequires: python Requires: python Requires: bash BuildArch: noarch %description The long-tail description for our Hello World Example implemented in Python. %prep %setup -q %build python -m compileall %{name}.py %install mkdir -p %{buildroot}/%{_bindir} mkdir -p %{buildroot}/usr/lib/%{name} cat > %{buildroot}/%{_bindir}/%{name} <<-EOF #!/bin/bash /usr/bin/python /usr/lib/%{name}/%{name}.pyc EOF chmod 0755 %{buildroot}/%{_bindir}/%{name} install -m 0644 %{name}.py* %{buildroot}/usr/lib/%{name}/ %files %license LICENSE %dir /usr/lib/%{name}/ %{_bindir}/%{name} /usr/lib/%{name}/%{name}.py* %changelog * Tue May 31 2016 Adam Miller < [email protected] > - 0.1.1-1 - First pello package",
"Name: cello Version: 1.0 Release: 1%{?dist} Summary: Hello World example implemented in C License: GPLv3+ URL: https://www.example.com/%{name} Source0: https://www.example.com/%{name}/releases/%{name}-%{version}.tar.gz Patch0: cello-output-first-patch.patch BuildRequires: gcc BuildRequires: make %description The long-tail description for our Hello World Example implemented in C. %prep %setup -q %patch0 %build make %{?_smp_mflags} %install %make_install %files %license LICENSE %{_bindir}/%{name} %changelog * Tue May 31 2016 Adam Miller < [email protected] > - 1.0-1 - First cello package",
"rpmbuild -bs SPECFILE",
"cd ~/rpmbuild/SPECS/ 8USD rpmbuild -bs bello.spec Wrote: /home/<username>/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm rpmbuild -bs pello.spec Wrote: /home/<username>/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm rpmbuild -bs cello.spec Wrote: /home/<username>/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm",
"rpmbuild --rebuild ~/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm [output truncated] rpmbuild --rebuild ~/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm [output truncated] rpmbuild --rebuild ~/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm [output truncated]",
"rpm -Uvh ~/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm Updating / installing... 1:bello-0.1-1.el8 [100%] rpm -Uvh ~/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm Updating / installing... ... 1:pello-0.1.2-1.el8 [100%] rpm -Uvh ~/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm Updating / installing... ... 1:cello-1.0-1.el8 [100%]",
"rpmbuild -bb ~/rpmbuild/SPECS/bello.spec rpmbuild -bb ~/rpmbuild/SPECS/pello.spec rpmbuild -bb ~/rpmbuild/SPECS/cello.spec",
"rpmbuild {-ra|-rb|-rp|-rc|-ri|-rl|-rs} [rpmbuild-options] SOURCEPACKAGE",
"rpmlint bello.spec bello.spec: W: invalid-url Source0: https://www.example.com/bello/releases/bello-0.1.tar.gz HTTP Error 404: Not Found 0 packages and 1 specfiles checked; 0 errors, 1 warnings.",
"rpmlint ~/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm bello.src: W: invalid-url URL: https://www.example.com/bello HTTP Error 404: Not Found bello.src: W: invalid-url Source0: https://www.example.com/bello/releases/bello-0.1.tar.gz HTTP Error 404: Not Found 1 packages and 0 specfiles checked; 0 errors, 2 warnings.",
"rpmlint ~/rpmbuild/RPMS/noarch/bello-0.1-1.el8.noarch.rpm bello.noarch: W: invalid-url URL: https://www.example.com/bello HTTP Error 404: Not Found bello.noarch: W: no-documentation bello.noarch: W: no-manual-page-for-binary bello 1 packages and 0 specfiles checked; 0 errors, 3 warnings.",
"rpmlint pello.spec pello.spec:30: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name} pello.spec:34: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.pyc pello.spec:39: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name}/ pello.spec:43: E: hardcoded-library-path in /usr/lib/%{name}/ pello.spec:45: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.py* pello.spec: W: invalid-url Source0: https://www.example.com/pello/releases/pello-0.1.2.tar.gz HTTP Error 404: Not Found 0 packages and 1 specfiles checked; 5 errors, 1 warnings.",
"rpmlint ~/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm pello.src: W: invalid-url URL: https://www.example.com/pello HTTP Error 404: Not Found pello.src:30: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name} pello.src:34: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.pyc pello.src:39: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name}/ pello.src:43: E: hardcoded-library-path in /usr/lib/%{name}/ pello.src:45: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.py* pello.src: W: invalid-url Source0: https://www.example.com/pello/releases/pello-0.1.2.tar.gz HTTP Error 404: Not Found 1 packages and 0 specfiles checked; 5 errors, 2 warnings.",
"rpmlint ~/rpmbuild/RPMS/noarch/pello-0.1.2-1.el8.noarch.rpm pello.noarch: W: invalid-url URL: https://www.example.com/pello HTTP Error 404: Not Found pello.noarch: W: only-non-binary-in-usr-lib pello.noarch: W: no-documentation pello.noarch: E: non-executable-script /usr/lib/pello/pello.py 0644L /usr/bin/env pello.noarch: W: no-manual-page-for-binary pello 1 packages and 0 specfiles checked; 1 errors, 4 warnings.",
"rpmlint ~/rpmbuild/SPECS/cello.spec /home/<username>/rpmbuild/SPECS/cello.spec: W: invalid-url Source0: https://www.example.com/cello/releases/cello-1.0.tar.gz HTTP Error 404: Not Found 0 packages and 1 specfiles checked; 0 errors, 1 warnings.",
"rpmlint ~/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm cello.src: W: invalid-url URL: https://www.example.com/cello HTTP Error 404: Not Found cello.src: W: invalid-url Source0: https://www.example.com/cello/releases/cello-1.0.tar.gz HTTP Error 404: Not Found 1 packages and 0 specfiles checked; 0 errors, 2 warnings.",
"rpmlint ~/rpmbuild/RPMS/x86_64/cello-1.0-1.el8.x86_64.rpm cello.x86_64: W: invalid-url URL: https://www.example.com/cello HTTP Error 404: Not Found cello.x86_64: W: no-documentation cello.x86_64: W: no-manual-page-for-binary cello 1 packages and 0 specfiles checked; 0 errors, 3 warnings."
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/rpm_packaging_guide/packaging-software |
Chapter 9. Asynchronous errata updates | Chapter 9. Asynchronous errata updates 9.1. RHSA-2025:0323 OpenShift Data Foundation 4.14.13 bug fixes and security updates OpenShift Data Foundation release 4.14.13 is now available. The bug fixes that are included in the update are listed in the RHSA-2025:0323 advisory. 9.2. RHBA-2024:10129 OpenShift Data Foundation 4.14.12 bug fixes and security updates OpenShift Data Foundation release 4.14.12 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:10129 advisory. 9.3. RHSA-2024:7624 OpenShift Data Foundation 4.14.11 bug fixes and security updates OpenShift Data Foundation release 4.14.11 is now available. The bug fixes that are included in the update are listed in the RHSA-2024:7624 advisory. 9.4. RHBA-2024:6398 OpenShift Data Foundation 4.14.10 bug fixes and security updates OpenShift Data Foundation release 4.14.10 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:6398 advisory. 9.5. RHBA-2024:4217 OpenShift Data Foundation 4.14.9 bug fixes and security updates OpenShift Data Foundation release 4.14.9 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:4217 advisory. 9.6. RHBA-2024:3861 OpenShift Data Foundation 4.14.8 bug fixes and security updates OpenShift Data Foundation release 4.14.8 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:3861 advisory. 9.7. RHBA-2024:3443 OpenShift Data Foundation 4.14.7 bug fixes and security updates OpenShift Data Foundation release 4.14.7 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:3443 advisory. 9.8. RHBA-2024:1579 OpenShift Data Foundation 4.14.6 bug fixes and security updates OpenShift Data Foundation release 4.14.6 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:1579 advisory. 9.9. RHBA-2024:1043 OpenShift Data Foundation 4.14.5 bug fixes and security updates OpenShift Data Foundation release 4.14.5 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:1043 advisory. 9.10. RHBA-2024:0315 OpenShift Data Foundation 4.14.4 bug fixes and security updates OpenShift Data Foundation release 4.14.4 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:0315 advisory. 9.11. RHBA-2023:7869 OpenShift Data Foundation 4.14.3 bug fixes and security updates OpenShift Data Foundation release 4.14.3 is now available. The bug fixes that are included in the update are listed in the RHBA-2023:7869 advisory. 9.12. RHBA-2023:7776 OpenShift Data Foundation 4.14.2 bug fixes and security updates OpenShift Data Foundation release 4.14.2 is now available. The bug fixes that are included in the update are listed in the RHBA-2023:7776 advisory. 9.13. RHBA-2023:7696 OpenShift Data Foundation 4.14.1 bug fixes and security updates OpenShift Data Foundation release 4.14.1 is now available. The bug fixes that are included in the update are listed in the RHBA-2023:7696 advisory. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/4.14_release_notes/asynchronous_errata_updates |
Chapter 4. Creating a JFR recording with Cryostat | Chapter 4. Creating a JFR recording with Cryostat With Cryostat, you can create a JDK Flight Recorder (JFR) recording that monitors the performance of your JVM in your containerized application. Additionally, you can take a snapshot of an active JFR recording to capture any collected data, up to a specific point in time, for your target JVM application. 4.1. Creating a JFR recording in the Cryostat web console You can create a JFR recording that monitors the performance of your JVM located in your containerized application. After you create a JFR recording, you can start the JFR to capture real-time data for your JVM, such as heap and non-heap memory usage. Prerequisites Installed Cryostat 2.4 on Red Hat OpenShift by using the OperatorHub option. Created a Cryostat instance in your Red Hat OpenShift project. Logged in to your Cryostat web console. You can retrieve your Cryostat application's URL by using the Red Hat OpenShift web console. Procedure On the Dashboard panel for your Cryostat web console, select a target JVM from the Target list. Note Depending on how you configured your target applications, your target JVMs might be using a JMX connection or an agent HTTP connection. For more information about configuring your target applications, see Configuring Java applications . Important If your target JVM is using an agent HTTP connection, ensure that you set the cryostat.agent.api.writes-enabled property to true when you configured your target application to load the Cryostat agent. Otherwise, the Cryostat agent cannot accept requests to start and stop JFR recordings. Figure 4.1. Example of selecting a Target JVM for your Cryostat instance Optional: On the Dashboard panel, you can create a target JVM. From the Target list, click Create Target . The Create Custom Target window opens. In the Connection URL field, enter the URL for your JVM's Java Management Extension (JMX) endpoint. Optional: To test if the Connection URL that you specified is valid, click the Click to test sample node image. If there is an issue with the Connection URL , an error message is displayed that provides a description of the issue and guidance to troubleshoot. Optional: In the Alias field, enter an alias for your JMX Service URL. Click Create . Figure 4.2. Create Custom Target window From the navigation menu on the Cryostat web console, click Recordings . Optional: Depending on how you configured your target JVM, an Authentication Required dialog might open on your web console. In the Authentication Required dialog box, enter your Username and Password . To provide your credentials to the target JVM, click Save . Figure 4.3. Example of a Cryostat Authentication Required window Note If the selected target JMX has Secure Socket Layer (SSL) certification enabled for JMX connections, you must add its certificate when prompted. Cryostat encrypts and stores credentials for a target JVM application in a database that is stored on a persistent volume claim (PVC) on Red Hat OpenShift. See Storing and managing credentials (Using Cryostat to manage a JFR recording). On the Active Recordings tab, click Create . Figure 4.4. Example of creating an active recording On the Custom Flight Recording tab: In the Name field, enter the name of the recording you want to create. If you enter a name in an invalid format, the web console displays an error message. 
If you want Cryostat to automatically restart an existing recording, select the Restart if recording already exists check box. Note If you enter a name that already exists but you do not select Restart if recording already exists , Cryostat refuses to create a custom recording when you click the Create button. In the Duration field, select whether you want this recording to stop after a specified duration or to run continuously without stopping. If you want Cryostat to automatically archive your new JFR recording after the recording stops, click Archive on Stop . In the Template field, select the template that you want to use for the recording. The following example shows continuous JVM monitoring, which you can enable by selecting Continuous above the Duration field. This setting means that the recording will continue until you manually stop it. The example also shows selection of the Profiling template from the Template field. This provides additional JVM information to a JFR recording for troubleshooting purposes. Figure 4.5. Example of creating a custom flight recording To access more options, click the following expandable hyperlinks: Show advanced options , where you can select additional options for customizing your JFR recording. Show metadata options , where you can add custom labels and metadata to your JFR recording. To create your JFR recording, click Create . The Active Recordings tab opens and lists your JFR recording. Your active JFR recording starts collecting data on the target JVM located inside your containerized application. If you specified a fixed duration for your JFR recordings, the target JVM stops the recording when it reaches the fixed duration setting. Otherwise, you must manually stop the recording. Optional: On the Active Recordings tab, you can also stop the recording. Select the checkbox next to the JFR recording's name. On the toolbar in the Active Recordings tab, the Cryostat web console activates the Stop button. Click Stop . The JFR adopts the STOPPED status, so it stops monitoring the target JVM. The JFR still shows under the Active Recordings tab. Figure 4.6. Example of stopping an active recording Important JFR recording data might be lost in the following situations: Target JVM fails Target JVM restarts Target JVM Red Hat OpenShift Deployment is scaled down Archive your JFR recordings to ensure that you do not lose your JFR recording's data. Additional resources See Uploading an SSL certificate (Using Cryostat to manage a JFR recording). See Archiving JDK Flight Recorder (JFR) recordings (Using Cryostat to manage a JFR recording). 4.2. Creating snapshots from an active recording You can take a snapshot of an active JFR recording to capture any collected data, up to a specific point in time, for your target JVM application. A snapshot is like a checkpoint marker that has a start point and an end point for a given time segment in a running JFR recording. A snapshot gets stored in the memory of a target JVM application. This differs from an archive in that Cryostat stores an archive on a cloud storage disk, which is a more permanent solution for storing a JFR recording's data. You can take snapshots of recordings if you want to experiment with different configuration changes among active JFR recordings. When you create a snapshot for your JFR recording, Cryostat creates a new recording named snapshot-<snapshot_number> , where <snapshot_number> is the number that Cryostat automatically assigns to your snapshot.
A target JVM recognizes a snapshot as an active recording. Cryostat sets any JFR snapshot to the STOPPED state, which means that the JFR snapshot does not record new data to the target JVM. Depending on the JFR configuration, active JFR recordings can continue to monitor the target JVM regardless of the number of snapshots taken. Note For a JFR recording that you set for continuous monitoring of a target JVM application, ensure that you create archived recordings to avoid losing JFR recording data. If you choose to take regular snapshots to store your JFR recording data, the target JVM application might free some of its data storage space by replacing older recording data with newer recording data. Prerequisites Entered your authentication details for your Cryostat instance. Created a target JVM recording and entered your authentication details to access the Recordings menu. See Creating a JDK Flight Recorder (JFR) recording (Creating a JFR recording with Cryostat). Procedure On the Active Recordings tab, click the Create button. A new window opens on the web console. Figure 4.7. Example of creating an active recording Click the Snapshot Recording tab. Figure 4.8. Example of creating a snapshot recording Click Create . The Active Recordings table opens and lists your JFR snapshot recording. The following example shows a JFR snapshot recording named snapshot-3 . Figure 4.9. Example of a completed snapshot recording Note You can identify snapshots by the snapshot prefix in the list of active recordings. Next steps To archive your JFR snapshot recording, see Archiving JDK Flight Recorder (JFR) recordings . 4.3. Labels for JFR recordings When you create a JDK Flight Recorder (JFR) recording on Cryostat 2.4, you can add metadata to the recording by specifying a series of key-value label pairs. Additionally, you can attach custom labels to JFR recordings that are inside a target JVM, so that you can easily identify and better manage your JFR recordings. The following list details some common recording label use cases: Attach metadata to your JFR recording. Perform batch operations on recordings that contain identical labels. Use labels when running queries on recordings. You can use Cryostat to create a JFR recording that monitors the performance of your JVM in your containerized application. Additionally, you can take a snapshot of an active JFR recording to capture any collected data, up to a specific point in time, for your target JVM application. 4.3.1. Adding labels to JFR recordings When you create a JFR recording on Cryostat 2.4, you can use labels to add metadata that contains key-value label pairs to the recording. Cryostat applies default recording labels to a created JFR recording. These default labels capture information about the event template that Cryostat used to create the JFR recording. You can add custom labels to your JFR recording so that you can run specific queries that meet your needs, such as identifying specific JFR recordings or performing batch operations on recordings with the same applied labels. Prerequisites Logged in to your Cryostat web console. Created or selected a target JVM for your Cryostat instance. Procedure From your Cryostat web console, click Recordings . Under the Active Recordings tab, click Create . On the Custom Flight Recording tab, expand Show metadata options . Note On the Custom Flight Recording tab, you must complete any mandatory field that is marked with an asterisk. Click Add label . Figure 4.10.
The Add Label button that is displayed under the Custom Flight Recording tab Enter values in the provided Key and Value fields. For example, if you want to file an issue with the recordings, you could enter the reason in the Key field and then enter the issue type in the Value field. Click Create to create your JFR recording. Your recording is then shown under the Active Recordings tab along with any specified recording labels and custom labels. Tip You can access archived JFR recordings from the Archives menu. See Uploading a JFR recording to the Cryostat archives location (Using Cryostat to manage a JFR recording). Example The following example shows two default recording labels, template.name: Profiling and template.type: TARGET , and one custom label, reason:service-outage . Figure 4.11. Example of an active recording with defined recording labels and a custom label 4.3.2. Editing a label for your JFR recording On the Cryostat web console, you can navigate to the Recordings menu and then edit a label and its metadata for your JFR recording. You can also edit the label and metadata for a JFR recording that you uploaded to archives. Prerequisites Logged in to your Cryostat web console. Created a JFR recording and attached labels to this recording. Procedure On your Cryostat web console, click the Recordings menu. From the Active Recordings tab, locate your JFR recording, and then select the checkbox next to it. Click Edit Labels . An Edit Recording Label pane opens in your Cryostat web console, which you can use to add, edit, or delete labels for your JFR recording. Tip You can select multiple JFR recordings by selecting the checkbox that is next to each recording. Click the Edit Labels button if you want to bulk edit recordings that contain the same labels or add new identical labels to multiple recordings. Optional : You can perform any of the following actions from the Edit Recording Labels pane: Click Add to create a label. Delete a label by clicking the X next to the label. Edit a label by modifying any content in a field. After you edit content, a green tick is shown in the field to indicate an edit. Click Save . Optional : You can archive your JFR recordings along with their labels by completing the following steps: Select the checkbox next to the recording's name. Click the Archive button. You can locate your recording under the Archived Recordings tab. By archiving your recording with its labels, you can enhance your search capabilities when you want to locate the recording at a later stage. You can also add additional labels to any recording that you uploaded to the Cryostat archives. Note Cryostat preserves any labels with the recording for the lifetime of the archived recording. Verification From the Active Recordings tab, check that your changes display under the Labels section for your recording. Additional resources Archiving JDK Flight Recorder (JFR) recordings (Using Cryostat to manage a JFR recording) Revised on 2024-03-06 19:01:14 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/getting_started_with_cryostat/assembly_creating-recordings_assembly_configuing-java-applications
2.5. DML Commands | 2.5. DML Commands 2.5.1. DML Commands JBoss Data Virtualization supports SQL for issuing queries and defining view transformations; see also Section 2.9.1, "Procedural Language" and Section 2.10.1, "Virtual Procedures" for how SQL is used in virtual procedures and update procedures. Nearly all these features follow standard SQL syntax and functionality, so any SQL reference can be used for more information. There are 4 basic commands for manipulating data in SQL, corresponding to the standard create, read, update and delete (CRUD) operations: INSERT, SELECT, UPDATE, and DELETE. A MERGE statement acts as a combination of INSERT and UPDATE. In addition, procedures can be executed using the EXECUTE command or through a procedural relational command. See Section 2.5.8, "Procedural Relational Command" . 2.5.2. SELECT Command The SELECT command is used to retrieve records for any number of relations. A SELECT command consists of several clauses: [WITH ...] SELECT ... [FROM ...] [WHERE ...] [GROUP BY ...] [HAVING ...] [ORDER BY ...] [(LIMIT ...) | ([OFFSET ...] [FETCH ...])] [OPTION ...] See Section 2.6.1, "DML Clauses" for more information about all of these clauses. All of these clauses other than OPTION are defined by the SQL specification. The specification also specifies the order that these clauses will be logically processed. Below is the processing order where each stage passes a set of rows to the following stage. Note that this processing model is logical and does not represent the way any actual database engine performs the processing, although it is a useful model for understanding questions about SQL. WITH stage - gathers all rows from all WITH items in the order listed. Subsequent WITH items and the main query can reference a WITH item as if it is a table. FROM stage - gathers all rows from all tables involved in the query and logically joins them with a Cartesian product, producing a single large table with all columns from all tables. Joins and join criteria are then applied to filter rows that do not match the join structure. WHERE stage - applies a criteria to every output row from the FROM stage, further reducing the number of rows. GROUP BY stage - groups sets of rows with matching values in the GROUP BY columns. HAVING stage - applies criteria to each group of rows. Criteria can only be applied to columns that will have constant values within a group (those in the grouping columns or aggregate functions applied across the group). SELECT stage - specifies the column expressions that should be returned from the query. Expressions are evaluated, including aggregate functions based on the groups of rows, which will no longer exist after this point. The output columns are named using either column aliases or an implicit name determined by the engine. If SELECT DISTINCT is specified, duplicate removal will be performed on the rows being returned from the SELECT stage. ORDER BY stage - sorts the rows returned from the SELECT stage as desired. Supports sorting on multiple columns in specified order, ascending or descending. The output columns will be identical to those columns returned from the SELECT stage and will have the same name. LIMIT stage - returns only the specified rows (with skip and limit values). This model helps to understand how SQL works. For example, columns aliased in the SELECT clause can only be referenced by alias in the ORDER BY clause. Without knowledge of the processing model, this can be somewhat confusing. 
Seen in light of the model, it is clear that the ORDER BY stage is the only stage occurring after the SELECT stage, which is where the columns are named. Because the WHERE clause is processed before the SELECT, the columns have not yet been named and the aliases are not yet known. Note The explicit table syntax TABLE x may be used as a shortcut for SELECT * FROM x . 2.5.3. INSERT Command The INSERT command is used to add a record to a table. Example Syntax INSERT INTO table (column,...) VALUES (value,...) INSERT INTO table (column,...) query 2.5.4. UPDATE Command The UPDATE command is used to modify records in a table. The operation will result in 1 or more records being updated, or in no records being updated if none match the criteria. Example Syntax UPDATE table SET (column=value,...) [WHERE criteria] 2.5.5. DELETE Command The DELETE command is used to remove records from a table. The operation will result in 1 or more records being deleted, or in no records being deleted if none match the criteria. Example Syntax DELETE FROM table [WHERE criteria] 2.5.6. MERGE Command The MERGE command, also known as UPSERT, is used to add and/or update records. The JBoss Data Virtualization (non-ANSI) MERGE is simply a modified INSERT statement that requires the target table to have a primary key and for the target columns to cover the primary key. The MERGE operation will then check the existence of each row prior to INSERT and instead perform an UPDATE if the row already exists. Example Syntax Note The MERGE statement is not currently pushed to sources, but rather will be broken down into the respective insert/update operations. 2.5.7. EXECUTE Command The EXECUTE command is used to execute a procedure, such as a virtual procedure or a stored procedure. Procedures may have zero or more scalar input parameters. The return value from a procedure is a result set or the set of inout/out/return scalars. Note that EXEC or CALL can be used as a short form of this command. Example Syntax EXECUTE proc() CALL proc(value, ...) EXECUTE proc(name1=>value1,name4=>param4, ...) - named parameter syntax Syntax Rules: The default order of parameter specification is the order in which they are defined in the procedure definition. You can specify the parameters in any order by name. Parameters that have default values and/or are nullable in the metadata can be omitted from the named parameter call and will have the appropriate value passed at runtime. Positional parameters that have default values and/or are nullable in the metadata can be omitted from the end of the parameter list and will have the appropriate value passed at runtime. If the procedure does not return a result set, the values from the RETURN, OUT, and IN_OUT parameters will be returned as a single row when used as an inline view query. A VARIADIC parameter may be repeated 0 or more times as the last positional argument. 2.5.8. Procedural Relational Command Procedural relational commands use the syntax of a SELECT to emulate an EXEC. In a procedural relational command, a procedure group name is used in a FROM clause in place of a table. That procedure will be executed in place of normal table access if all of the necessary input values can be found in criteria against the procedure. Each combination of input values found in the criteria results in an execution of the procedure.
Example Syntax SELECT * FROM proc SELECT output_param1, output_param2 FROM proc WHERE input_param1 = 'x' SELECT output_param1, output_param2 FROM proc, table WHERE input_param1 = table.col1 AND input_param2 = table.col2 Syntax Rules: The procedure as a table projects the same columns as an exec with the addition of the input parameters. For procedures that do not return a result set, IN_OUT columns will be projected as two columns, one that represents the output value and one named {column name}_IN that represents the input of the parameter. Input values are passed via criteria. Values can be passed by '=', 'is null', or 'in' predicates. Disjuncts are not allowed. It is also not possible to pass the value of a non-comparable column through an equality predicate. The procedure view automatically has an access pattern on its IN and IN_OUT parameters, which allows it to be planned correctly as a dependent join when necessary or fail when sufficient criteria cannot be found. Procedures containing duplicate names between the parameters (IN, IN_OUT, OUT, RETURN) and result set columns cannot be used in a procedural relational command. Default values for IN and IN_OUT parameters are not used if there is no criteria present for a given input. Default values are only valid for named procedure syntax. See Section 2.5.7, "EXECUTE Command" . Note The usage of 'in' or join criteria can result in the procedure being executed multiple times. Note None of the issues listed in the syntax rules above exist if a nested table reference is used. See Section 2.6.5, "FROM Clause" . 2.5.9. Set Operations JBoss Data Virtualization supports the UNION, UNION ALL, INTERSECT, and EXCEPT set operations as ways of combining the results of query expressions. Usage: queryExpression (UNION|INTERSECT|EXCEPT) [ALL] queryExpression [ORDER BY...] Syntax Rules: The output columns will be named by the output columns of the first set operation branch. Each SELECT must have the same number of output columns and compatible data types for each relative column. Data type conversion will be performed if data types are inconsistent and implicit conversions exist. If UNION, INTERSECT, or EXCEPT is specified without ALL, then the output columns must be comparable types. INTERSECT ALL and EXCEPT ALL are currently not supported. 2.5.10. Subqueries A subquery is an SQL query embedded within another SQL query. The query containing the subquery is the outer query. Supported subquery types: Scalar subquery - a subquery that returns only a single column with a single value. Scalar subqueries are a type of expression and can be used where single-valued expressions are expected. Correlated subquery - a subquery that contains a column reference from the outer query. Uncorrelated subquery - a subquery that contains no references to the outer query. 2.5.11. Inline Views Subqueries in the FROM clause of the outer query (also known as "inline views") can return any number of rows and columns. This type of subquery must always be given an alias. An inline view is nearly identical to a traditional view. SELECT a FROM (SELECT Y.b, Y.c FROM Y WHERE Y.d = 3) AS X WHERE a = X.c AND b = X.b See Also: Section 2.6.2, "WITH Clause" 2.5.12. Alternative Subquery Usage Subqueries are supported in quantified criteria, the EXISTS predicate, the IN predicate, and as Scalar Subqueries (see Section 2.3.9, "Scalar Subqueries" ). Example 2.5. Example Subquery in WHERE Using EXISTS Example 2.6.
Example Quantified Comparison Subqueries SELECT a FROM X WHERE a >= ANY (SELECT b FROM Y WHERE c=3) SELECT a FROM X WHERE a < SOME (SELECT b FROM Y WHERE c=4) SELECT a FROM X WHERE a = ALL (SELECT b FROM Y WHERE c=2) Example 2.7. Example IN Subquery SELECT a FROM X WHERE a IN (SELECT b FROM Y WHERE c=3) See also Subquery Optimization . | [
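"-- Hedged concrete examples of the UPDATE and DELETE syntax from Sections 2.5.4 and 2.5.5 (assumption: a table X with columns a and b) UPDATE X SET b = 'updated' WHERE a = 1 DELETE FROM X WHERE a = 1",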
"MERGE INTO table (column,...) VALUES (value,...)",
"MERGE INTO table (column,...) query",
"SELECT a FROM (SELECT Y.b, Y.c FROM Y WHERE Y.d = 3) AS X WHERE a = X.c AND b = X.b",
"SELECT a FROM X WHERE EXISTS (SELECT 1 FROM Y WHERE c=X.a)",
"SELECT a FROM X WHERE a >= ANY (SELECT b FROM Y WHERE c=3) SELECT a FROM X WHERE a < SOME (SELECT b FROM Y WHERE c=4) SELECT a FROM X WHERE a = ALL (SELECT b FROM Y WHERE c=2)",
"SELECT a FROM X WHERE a IN (SELECT b FROM Y WHERE c=3)"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-dml_commands |
Chapter 3. Getting started | Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites To build the example, Maven must be configured to use the Red Hat repository or a local repository . You must install the examples . You must have a message broker listening for connections on localhost . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named queue . For more information, see Creating a queue . 3.2. Running Hello World The Hello World example creates a connection to the broker, sends a message containing a greeting to the queue queue, and receives it back. On success, it prints the received message to the console. Procedure Use Maven to build the examples by running the following command in the <source-dir> /qpid-jms-examples directory: USD mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests The addition of dependency:copy-dependencies results in the dependencies being copied into the target/dependency directory. Use the java command to run the example. On Linux or UNIX: USD java -cp "target/classes:target/dependency/*" org.apache.qpid.jms.example.HelloWorld On Windows: > java -cp "target\classes;target\dependency\*" org.apache.qpid.jms.example.HelloWorld For example, running it on Linux results in the following output: USD java -cp "target/classes/:target/dependency/*" org.apache.qpid.jms.example.HelloWorld Hello world! The source code for the example is in the <source-dir> /qpid-jms-examples/src/main/java directory. The JNDI and logging configuration is in the <source-dir> /qpid-jms-examples/src/main/resources directory. | [
"mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests",
"java -cp \"target/classes:target/dependency/*\" org.apache.qpid.jms.example.HelloWorld",
"> java -cp \"target\\classes;target\\dependency\\*\" org.apache.qpid.jms.example.HelloWorld",
"java -cp \"target/classes/:target/dependency/*\" org.apache.qpid.jms.example.HelloWorld Hello world!"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_jms_client/getting_started |
Chapter 24. ExternalBackupService | Chapter 24. ExternalBackupService 24.1. UpdateExternalBackup PATCH /v1/externalbackups/{externalBackup.id} UpdateExternalBackup modifies a given external backup, with optional stored credential reconciliation. 24.1.1. Description 24.1.2. Parameters 24.1.2.1. Path Parameters Name Description Required Default Pattern externalBackup.id X null 24.1.2.2. Body Parameter Name Description Required Default Pattern body ExternalBackupServiceUpdateExternalBackupBody X 24.1.3. Return Type StorageExternalBackup 24.1.4. Content Type application/json 24.1.5. Responses Table 24.1. HTTP Response Codes Code Message Datatype 200 A successful response. StorageExternalBackup 0 An unexpected error response. GooglerpcStatus 24.1.6. Samples 24.1.7. Common object reference 24.1.7.1. ExternalBackupServiceUpdateExternalBackupBody Field Name Required Nullable Type Description Format externalBackup NextAvailableTag10 updatePassword Boolean When false, use the stored credentials of an existing external backup configuration given its ID. 24.1.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 24.1.7.3. NextAvailableTag10 Field Name Required Nullable Type Description Format name String type String schedule StorageSchedule backupsToKeep Integer int32 s3 StorageS3Config gcs StorageGCSConfig s3compatible StorageS3Compatible includeCertificates Boolean 24.1.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 24.1.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. 
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 24.1.7.5. ScheduleDaysOfMonth Field Name Required Nullable Type Description Format days List of integer int32 24.1.7.6. ScheduleDaysOfWeek Field Name Required Nullable Type Description Format days List of integer int32 24.1.7.7. ScheduleIntervalType Enum Values UNSET DAILY WEEKLY MONTHLY 24.1.7.8. ScheduleWeeklyInterval Field Name Required Nullable Type Description Format day Integer int32 24.1.7.9. StorageExternalBackup Field Name Required Nullable Type Description Format id String name String type String schedule StorageSchedule backupsToKeep Integer int32 s3 StorageS3Config gcs StorageGCSConfig s3compatible StorageS3Compatible includeCertificates Boolean 24.1.7.10. StorageGCSConfig Field Name Required Nullable Type Description Format bucket String serviceAccount String The service account for the storage integration. The server will mask the value of this credential in responses and logs. objectPrefix String useWorkloadId Boolean 24.1.7.11. StorageS3Compatible S3Compatible configures the backup integration with an S3 compatible storage provider. S3 compatible is intended for non-AWS providers. For AWS S3 use S3Config. Field Name Required Nullable Type Description Format bucket String accessKeyId String The access key ID to use. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key to use. The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String urlStyle StorageS3URLStyle S3_URL_STYLE_UNSPECIFIED, S3_URL_STYLE_VIRTUAL_HOSTED, S3_URL_STYLE_PATH, 24.1.7.12. StorageS3Config S3Config configures the backup integration with AWS S3. Field Name Required Nullable Type Description Format bucket String useIam Boolean accessKeyId String The access key ID for the storage integration. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key for the storage integration. The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String 24.1.7.13. StorageS3URLStyle Enum Values S3_URL_STYLE_UNSPECIFIED S3_URL_STYLE_VIRTUAL_HOSTED S3_URL_STYLE_PATH 24.1.7.14. StorageSchedule Field Name Required Nullable Type Description Format intervalType ScheduleIntervalType UNSET, DAILY, WEEKLY, MONTHLY, hour Integer int32 minute Integer int32 weekly ScheduleWeeklyInterval daysOfWeek ScheduleDaysOfWeek daysOfMonth ScheduleDaysOfMonth 24.2. GetExternalBackups GET /v1/externalbackups GetExternalBackups returns all external backup configurations. 24.2.1. Description 24.2.2. Parameters 24.2.3. Return Type V1GetExternalBackupsResponse 24.2.4. Content Type application/json 24.2.5. Responses Table 24.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetExternalBackupsResponse 0 An unexpected error response. GooglerpcStatus 24.2.6. Samples 24.2.7. 
Common object reference 24.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 24.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 24.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 24.2.7.3. ScheduleDaysOfMonth Field Name Required Nullable Type Description Format days List of integer int32 24.2.7.4. ScheduleDaysOfWeek Field Name Required Nullable Type Description Format days List of integer int32 24.2.7.5. ScheduleIntervalType Enum Values UNSET DAILY WEEKLY MONTHLY 24.2.7.6. ScheduleWeeklyInterval Field Name Required Nullable Type Description Format day Integer int32 24.2.7.7. StorageExternalBackup Field Name Required Nullable Type Description Format id String name String type String schedule StorageSchedule backupsToKeep Integer int32 s3 StorageS3Config gcs StorageGCSConfig s3compatible StorageS3Compatible includeCertificates Boolean 24.2.7.8. 
StorageGCSConfig Field Name Required Nullable Type Description Format bucket String serviceAccount String The service account for the storage integration. The server will mask the value of this credential in responses and logs. objectPrefix String useWorkloadId Boolean 24.2.7.9. StorageS3Compatible S3Compatible configures the backup integration with an S3 compatible storage provider. S3 compatible is intended for non-AWS providers. For AWS S3 use S3Config. Field Name Required Nullable Type Description Format bucket String accessKeyId String The access key ID to use. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key to use. The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String urlStyle StorageS3URLStyle S3_URL_STYLE_UNSPECIFIED, S3_URL_STYLE_VIRTUAL_HOSTED, S3_URL_STYLE_PATH, 24.2.7.10. StorageS3Config S3Config configures the backup integration with AWS S3. Field Name Required Nullable Type Description Format bucket String useIam Boolean accessKeyId String The access key ID for the storage integration. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key for the storage integration. The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String 24.2.7.11. StorageS3URLStyle Enum Values S3_URL_STYLE_UNSPECIFIED S3_URL_STYLE_VIRTUAL_HOSTED S3_URL_STYLE_PATH 24.2.7.12. StorageSchedule Field Name Required Nullable Type Description Format intervalType ScheduleIntervalType UNSET, DAILY, WEEKLY, MONTHLY, hour Integer int32 minute Integer int32 weekly ScheduleWeeklyInterval daysOfWeek ScheduleDaysOfWeek daysOfMonth ScheduleDaysOfMonth 24.2.7.13. V1GetExternalBackupsResponse Field Name Required Nullable Type Description Format externalBackups List of StorageExternalBackup 24.3. DeleteExternalBackup DELETE /v1/externalbackups/{id} DeleteExternalBackup removes an external backup configuration given its ID. 24.3.1. Description 24.3.2. Parameters 24.3.2.1. Path Parameters Name Description Required Default Pattern id X null 24.3.3. Return Type Object 24.3.4. Content Type application/json 24.3.5. Responses Table 24.3. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 24.3.6. Samples 24.3.7. Common object reference 24.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 24.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 24.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. 
Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 24.4. GetExternalBackup GET /v1/externalbackups/{id} GetExternalBackup returns the external backup configuration given its ID. 24.4.1. Description 24.4.2. Parameters 24.4.2.1. Path Parameters Name Description Required Default Pattern id X null 24.4.3. Return Type StorageExternalBackup 24.4.4. Content Type application/json 24.4.5. Responses Table 24.4. HTTP Response Codes Code Message Datatype 200 A successful response. StorageExternalBackup 0 An unexpected error response. GooglerpcStatus 24.4.6. Samples 24.4.7. Common object reference 24.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 24.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 24.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 24.4.7.3. ScheduleDaysOfMonth Field Name Required Nullable Type Description Format days List of integer int32 24.4.7.4. ScheduleDaysOfWeek Field Name Required Nullable Type Description Format days List of integer int32 24.4.7.5. ScheduleIntervalType Enum Values UNSET DAILY WEEKLY MONTHLY 24.4.7.6. ScheduleWeeklyInterval Field Name Required Nullable Type Description Format day Integer int32 24.4.7.7. StorageExternalBackup Field Name Required Nullable Type Description Format id String name String type String schedule StorageSchedule backupsToKeep Integer int32 s3 StorageS3Config gcs StorageGCSConfig s3compatible StorageS3Compatible includeCertificates Boolean 24.4.7.8. StorageGCSConfig Field Name Required Nullable Type Description Format bucket String serviceAccount String The service account for the storage integration. The server will mask the value of this credential in responses and logs. objectPrefix String useWorkloadId Boolean 24.4.7.9. StorageS3Compatible S3Compatible configures the backup integration with an S3 compatible storage provider. S3 compatible is intended for non-AWS providers. For AWS S3 use S3Config. Field Name Required Nullable Type Description Format bucket String accessKeyId String The access key ID to use. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key to use. The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String urlStyle StorageS3URLStyle S3_URL_STYLE_UNSPECIFIED, S3_URL_STYLE_VIRTUAL_HOSTED, S3_URL_STYLE_PATH, 24.4.7.10. StorageS3Config S3Config configures the backup integration with AWS S3. Field Name Required Nullable Type Description Format bucket String useIam Boolean accessKeyId String The access key ID for the storage integration. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key for the storage integration. 
The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String 24.4.7.11. StorageS3URLStyle Enum Values S3_URL_STYLE_UNSPECIFIED S3_URL_STYLE_VIRTUAL_HOSTED S3_URL_STYLE_PATH 24.4.7.12. StorageSchedule Field Name Required Nullable Type Description Format intervalType ScheduleIntervalType UNSET, DAILY, WEEKLY, MONTHLY, hour Integer int32 minute Integer int32 weekly ScheduleWeeklyInterval daysOfWeek ScheduleDaysOfWeek daysOfMonth ScheduleDaysOfMonth 24.5. TriggerExternalBackup POST /v1/externalbackups/{id} TriggerExternalBackup initiates an external backup for the given configuration. 24.5.1. Description 24.5.2. Parameters 24.5.2.1. Path Parameters Name Description Required Default Pattern id X null 24.5.3. Return Type Object 24.5.4. Content Type application/json 24.5.5. Responses Table 24.5. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 24.5.6. Samples 24.5.7. Common object reference 24.5.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 24.5.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 24.5.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 24.6. PutExternalBackup PUT /v1/externalbackups/{id} PutExternalBackup modifies a given external backup, without using stored credential reconciliation. 24.6.1. Description 24.6.2. Parameters 24.6.2.1. Path Parameters Name Description Required Default Pattern id X null 24.6.2.2. Body Parameter Name Description Required Default Pattern body ExternalBackupServicePutExternalBackupBody X 24.6.3. Return Type StorageExternalBackup 24.6.4. Content Type application/json 24.6.5. Responses Table 24.6. HTTP Response Codes Code Message Datatype 200 A successful response. StorageExternalBackup 0 An unexpected error response. GooglerpcStatus 24.6.6. Samples 24.6.7. Common object reference 24.6.7.1. ExternalBackupServicePutExternalBackupBody Field Name Required Nullable Type Description Format name String type String schedule StorageSchedule backupsToKeep Integer int32 s3 StorageS3Config gcs StorageGCSConfig s3compatible StorageS3Compatible includeCertificates Boolean 24.6.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 24.6.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 24.6.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. 
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 24.6.7.4. ScheduleDaysOfMonth Field Name Required Nullable Type Description Format days List of integer int32 24.6.7.5. ScheduleDaysOfWeek Field Name Required Nullable Type Description Format days List of integer int32 24.6.7.6. ScheduleIntervalType Enum Values UNSET DAILY WEEKLY MONTHLY 24.6.7.7. ScheduleWeeklyInterval Field Name Required Nullable Type Description Format day Integer int32 24.6.7.8. StorageExternalBackup Field Name Required Nullable Type Description Format id String name String type String schedule StorageSchedule backupsToKeep Integer int32 s3 StorageS3Config gcs StorageGCSConfig s3compatible StorageS3Compatible includeCertificates Boolean 24.6.7.9. StorageGCSConfig Field Name Required Nullable Type Description Format bucket String serviceAccount String The service account for the storage integration. The server will mask the value of this credential in responses and logs. objectPrefix String useWorkloadId Boolean 24.6.7.10. StorageS3Compatible S3Compatible configures the backup integration with an S3 compatible storage provider. S3 compatible is intended for non-AWS providers. For AWS S3 use S3Config. Field Name Required Nullable Type Description Format bucket String accessKeyId String The access key ID to use. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key to use. The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String urlStyle StorageS3URLStyle S3_URL_STYLE_UNSPECIFIED, S3_URL_STYLE_VIRTUAL_HOSTED, S3_URL_STYLE_PATH, 24.6.7.11. StorageS3Config S3Config configures the backup integration with AWS S3. Field Name Required Nullable Type Description Format bucket String useIam Boolean accessKeyId String The access key ID for the storage integration. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key for the storage integration. The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String 24.6.7.12. StorageS3URLStyle Enum Values S3_URL_STYLE_UNSPECIFIED S3_URL_STYLE_VIRTUAL_HOSTED S3_URL_STYLE_PATH 24.6.7.13. StorageSchedule Field Name Required Nullable Type Description Format intervalType ScheduleIntervalType UNSET, DAILY, WEEKLY, MONTHLY, hour Integer int32 minute Integer int32 weekly ScheduleWeeklyInterval daysOfWeek ScheduleDaysOfWeek daysOfMonth ScheduleDaysOfMonth 24.7. PostExternalBackup POST /v1/externalbackups PostExternalBackup creates an external backup configuration. 24.7.1. Description 24.7.2. Parameters 24.7.2.1. Body Parameter Name Description Required Default Pattern body StorageExternalBackup X 24.7.3. Return Type StorageExternalBackup 24.7.4. Content Type application/json 24.7.5. Responses Table 24.7. HTTP Response Codes Code Message Datatype 200 A successful response. 
StorageExternalBackup 0 An unexpected error response. GooglerpcStatus 24.7.6. Samples 24.7.7. Common object reference 24.7.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 24.7.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 24.7.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 24.7.7.3. ScheduleDaysOfMonth Field Name Required Nullable Type Description Format days List of integer int32 24.7.7.4. ScheduleDaysOfWeek Field Name Required Nullable Type Description Format days List of integer int32 24.7.7.5. ScheduleIntervalType Enum Values UNSET DAILY WEEKLY MONTHLY 24.7.7.6. ScheduleWeeklyInterval Field Name Required Nullable Type Description Format day Integer int32 24.7.7.7. StorageExternalBackup Field Name Required Nullable Type Description Format id String name String type String schedule StorageSchedule backupsToKeep Integer int32 s3 StorageS3Config gcs StorageGCSConfig s3compatible StorageS3Compatible includeCertificates Boolean 24.7.7.8. 
StorageGCSConfig Field Name Required Nullable Type Description Format bucket String serviceAccount String The service account for the storage integration. The server will mask the value of this credential in responses and logs. objectPrefix String useWorkloadId Boolean 24.7.7.9. StorageS3Compatible S3Compatible configures the backup integration with an S3 compatible storage provider. S3 compatible is intended for non-AWS providers. For AWS S3 use S3Config. Field Name Required Nullable Type Description Format bucket String accessKeyId String The access key ID to use. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key to use. The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String urlStyle StorageS3URLStyle S3_URL_STYLE_UNSPECIFIED, S3_URL_STYLE_VIRTUAL_HOSTED, S3_URL_STYLE_PATH, 24.7.7.10. StorageS3Config S3Config configures the backup integration with AWS S3. Field Name Required Nullable Type Description Format bucket String useIam Boolean accessKeyId String The access key ID for the storage integration. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key for the storage integration. The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String 24.7.7.11. StorageS3URLStyle Enum Values S3_URL_STYLE_UNSPECIFIED S3_URL_STYLE_VIRTUAL_HOSTED S3_URL_STYLE_PATH 24.7.7.12. StorageSchedule Field Name Required Nullable Type Description Format intervalType ScheduleIntervalType UNSET, DAILY, WEEKLY, MONTHLY, hour Integer int32 minute Integer int32 weekly ScheduleWeeklyInterval daysOfWeek ScheduleDaysOfWeek daysOfMonth ScheduleDaysOfMonth 24.8. TestExternalBackup POST /v1/externalbackups/test TestExternalBackup tests an external backup configuration. 24.8.1. Description 24.8.2. Parameters 24.8.2.1. Body Parameter Name Description Required Default Pattern body StorageExternalBackup X 24.8.3. Return Type Object 24.8.4. Content Type application/json 24.8.5. Responses Table 24.8. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 24.8.6. Samples 24.8.7. Common object reference 24.8.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 24.8.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 24.8.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 24.8.7.3. ScheduleDaysOfMonth Field Name Required Nullable Type Description Format days List of integer int32 24.8.7.4. ScheduleDaysOfWeek Field Name Required Nullable Type Description Format days List of integer int32 24.8.7.5. ScheduleIntervalType Enum Values UNSET DAILY WEEKLY MONTHLY 24.8.7.6. ScheduleWeeklyInterval Field Name Required Nullable Type Description Format day Integer int32 24.8.7.7. StorageExternalBackup Field Name Required Nullable Type Description Format id String name String type String schedule StorageSchedule backupsToKeep Integer int32 s3 StorageS3Config gcs StorageGCSConfig s3compatible StorageS3Compatible includeCertificates Boolean 24.8.7.8. StorageGCSConfig Field Name Required Nullable Type Description Format bucket String serviceAccount String The service account for the storage integration. The server will mask the value of this credential in responses and logs. objectPrefix String useWorkloadId Boolean 24.8.7.9. StorageS3Compatible S3Compatible configures the backup integration with an S3 compatible storage provider. S3 compatible is intended for non-AWS providers. For AWS S3 use S3Config. Field Name Required Nullable Type Description Format bucket String accessKeyId String The access key ID to use. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key to use. The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String urlStyle StorageS3URLStyle S3_URL_STYLE_UNSPECIFIED, S3_URL_STYLE_VIRTUAL_HOSTED, S3_URL_STYLE_PATH, 24.8.7.10. StorageS3Config S3Config configures the backup integration with AWS S3. Field Name Required Nullable Type Description Format bucket String useIam Boolean accessKeyId String The access key ID for the storage integration. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key for the storage integration. 
The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String 24.8.7.11. StorageS3URLStyle Enum Values S3_URL_STYLE_UNSPECIFIED S3_URL_STYLE_VIRTUAL_HOSTED S3_URL_STYLE_PATH 24.8.7.12. StorageSchedule Field Name Required Nullable Type Description Format intervalType ScheduleIntervalType UNSET, DAILY, WEEKLY, MONTHLY, hour Integer int32 minute Integer int32 weekly ScheduleWeeklyInterval daysOfWeek ScheduleDaysOfWeek daysOfMonth ScheduleDaysOfMonth 24.9. TestUpdatedExternalBackup POST /v1/externalbackups/test/updated TestUpdatedExternalBackup checks if the given external backup is correctly configured, with optional stored credential reconciliation. 24.9.1. Description 24.9.2. Parameters 24.9.2.1. Body Parameter Name Description Required Default Pattern body V1UpdateExternalBackupRequest X 24.9.3. Return Type Object 24.9.4. Content Type application/json 24.9.5. Responses Table 24.9. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 24.9.6. Samples 24.9.7. Common object reference 24.9.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 24.9.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 24.9.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 24.9.7.3. ScheduleDaysOfMonth Field Name Required Nullable Type Description Format days List of integer int32 24.9.7.4. ScheduleDaysOfWeek Field Name Required Nullable Type Description Format days List of integer int32 24.9.7.5. ScheduleIntervalType Enum Values UNSET DAILY WEEKLY MONTHLY 24.9.7.6. ScheduleWeeklyInterval Field Name Required Nullable Type Description Format day Integer int32 24.9.7.7. StorageExternalBackup Field Name Required Nullable Type Description Format id String name String type String schedule StorageSchedule backupsToKeep Integer int32 s3 StorageS3Config gcs StorageGCSConfig s3compatible StorageS3Compatible includeCertificates Boolean 24.9.7.8. StorageGCSConfig Field Name Required Nullable Type Description Format bucket String serviceAccount String The service account for the storage integration. The server will mask the value of this credential in responses and logs. objectPrefix String useWorkloadId Boolean 24.9.7.9. StorageS3Compatible S3Compatible configures the backup integration with an S3 compatible storage provider. S3 compatible is intended for non-AWS providers. For AWS S3 use S3Config. Field Name Required Nullable Type Description Format bucket String accessKeyId String The access key ID to use. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key to use. The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String urlStyle StorageS3URLStyle S3_URL_STYLE_UNSPECIFIED, S3_URL_STYLE_VIRTUAL_HOSTED, S3_URL_STYLE_PATH, 24.9.7.10. StorageS3Config S3Config configures the backup integration with AWS S3. Field Name Required Nullable Type Description Format bucket String useIam Boolean accessKeyId String The access key ID for the storage integration. The server will mask the value of this credential in responses and logs. secretAccessKey String The secret access key for the storage integration. The server will mask the value of this credential in responses and logs. region String objectPrefix String endpoint String 24.9.7.11. StorageS3URLStyle Enum Values S3_URL_STYLE_UNSPECIFIED S3_URL_STYLE_VIRTUAL_HOSTED S3_URL_STYLE_PATH 24.9.7.12. StorageSchedule Field Name Required Nullable Type Description Format intervalType ScheduleIntervalType UNSET, DAILY, WEEKLY, MONTHLY, hour Integer int32 minute Integer int32 weekly ScheduleWeeklyInterval daysOfWeek ScheduleDaysOfWeek daysOfMonth ScheduleDaysOfMonth 24.9.7.13. V1UpdateExternalBackupRequest Field Name Required Nullable Type Description Format externalBackup StorageExternalBackup updatePassword Boolean When false, use the stored credentials of an existing external backup configuration given its ID. | [
"Next available tag: 10",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"1 for 1st, 2 for 2nd .... 31 for 31st",
"Sunday = 0, Monday = 1, .... Saturday = 6",
"Next available tag: 10",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"1 for 1st, 2 for 2nd .... 31 for 31st",
"Sunday = 0, Monday = 1, .... Saturday = 6",
"Next available tag: 10",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"1 for 1st, 2 for 2nd .... 31 for 31st",
"Sunday = 0, Monday = 1, .... Saturday = 6",
"Next available tag: 10",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 10",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"1 for 1st, 2 for 2nd .... 31 for 31st",
"Sunday = 0, Monday = 1, .... Saturday = 6",
"Next available tag: 10",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"1 for 1st, 2 for 2nd .... 31 for 31st",
"Sunday = 0, Monday = 1, .... Saturday = 6",
"Next available tag: 10",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"1 for 1st, 2 for 2nd .... 31 for 31st",
"Sunday = 0, Monday = 1, .... Saturday = 6",
"Next available tag: 10",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"1 for 1st, 2 for 2nd .... 31 for 31st",
"Sunday = 0, Monday = 1, .... Saturday = 6",
"Next available tag: 10"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/externalbackupservice |
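The Samples sections in the ExternalBackupService reference above are empty placeholders; the following curl sketch illustrates what calls against those endpoints can look like. It is illustrative only: the central.example.com hostname and the ROX_API_TOKEN bearer token are assumptions, and the literal type value "s3" is an assumption as well; the endpoint paths and the payload field names ( name , backupsToKeep , schedule , s3 , and so on) are taken from the tables above.

$ # Create an external backup configuration (PostExternalBackup).
$ # Hostname, token, and the "s3" type value are assumptions for this sketch.
$ curl -sk -X POST "https://central.example.com/v1/externalbackups" \
    -H "Authorization: Bearer $ROX_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{
          "name": "nightly-s3-backup",
          "type": "s3",
          "backupsToKeep": 3,
          "schedule": { "intervalType": "DAILY", "hour": 2, "minute": 0 },
          "s3": { "bucket": "acs-backups", "region": "us-east-1", "useIam": true, "objectPrefix": "central" }
        }'

$ # Check a configuration without saving it (TestExternalBackup).
$ curl -sk -X POST "https://central.example.com/v1/externalbackups/test" \
    -H "Authorization: Bearer $ROX_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d @backup-config.json

$ # Run a saved configuration immediately (TriggerExternalBackup), then remove it (DeleteExternalBackup).
$ curl -sk -X POST "https://central.example.com/v1/externalbackups/<id>" \
    -H "Authorization: Bearer $ROX_API_TOKEN"
$ curl -sk -X DELETE "https://central.example.com/v1/externalbackups/<id>" \
    -H "Authorization: Bearer $ROX_API_TOKEN"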
Chapter 7. Deploying Red Hat Quay using the Operator Red Hat Quay on OpenShift Container Platform can be deployed using the command-line interface or from the OpenShift Container Platform console. The steps are fundamentally the same. 7.1. Deploying Red Hat Quay from the command line Use the following procedure to deploy Red Hat Quay using the command-line interface (CLI). Prerequisites You have logged into OpenShift Container Platform using the CLI. Procedure Create a namespace, for example, quay-enterprise , by entering the following command: $ oc new-project quay-enterprise Optional. If you want to pre-configure any aspects of your Red Hat Quay deployment, create a Secret for the config bundle: $ oc create secret generic quay-enterprise-config-bundle --from-file=config-bundle.tar.gz=/path/to/config-bundle.tar.gz Create a QuayRegistry custom resource in a file called quayregistry.yaml . For a minimal deployment, using all the defaults: quayregistry.yaml: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise Optional. If you want to have some components unmanaged, add this information in the spec field. A minimal deployment might look like the following example: Example quayregistry.yaml with unmanaged components apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: clair managed: false - kind: horizontalpodautoscaler managed: false - kind: mirror managed: false - kind: monitoring managed: false Optional. If you have created a config bundle, for example, init-config-bundle-secret , reference it in the quayregistry.yaml file: Example quayregistry.yaml with a config bundle apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: init-config-bundle-secret Optional.
If you have a proxy configured, you can add the information using overrides for Red Hat Quay, Clair, and mirroring: Example quayregistry.yaml with proxy configured kind: QuayRegistry metadata: name: quay37 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: mirror managed: true overrides: env: - name: DEBUGLOG value: "true" - name: HTTP_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: HTTPS_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: NO_PROXY value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com - kind: tls managed: false - kind: clair managed: true overrides: env: - name: HTTP_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: HTTPS_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: NO_PROXY value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com - kind: quay managed: true overrides: env: - name: DEBUGLOG value: "true" - name: NO_PROXY value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com - name: HTTP_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: HTTPS_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 Create the QuayRegistry in the specified namespace by entering the following command: $ oc create -n quay-enterprise -f quayregistry.yaml Enter the following command to see when the status.registryEndpoint is populated: $ oc get quayregistry -n quay-enterprise example-registry -o jsonpath="{.status.registryEndpoint}" -w Additional resources For more information about how to track the progress of your Red Hat Quay deployment, see Monitoring and debugging the deployment process . 7.1.1. Using the API to create the first user Use the following procedure to create the first user in your Red Hat Quay organization. Prerequisites The config option FEATURE_USER_INITIALIZE must be set to true . No users can already exist in the database. Procedure This procedure requests an OAuth token by specifying "access_token": true . Open your Red Hat Quay configuration file and update the following configuration fields: FEATURE_USER_INITIALIZE: true SUPER_USERS: - quayadmin Stop the Red Hat Quay service by entering the following command: $ sudo podman stop quay Start the Red Hat Quay service by entering the following command: $ sudo podman run -d -p 80:8080 -p 443:8443 --name=quay -v $QUAY/config:/conf/stack:Z -v $QUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv} Run the following curl command to generate a new user with a username, password, email, and access token: $ curl -X POST -k http://quay-server.example.com/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "quayadmin", "password":"quaypass12345", "email": "[email protected]", "access_token": true}' If successful, the command returns an object with the username, email, and encrypted password.
For example: {"access_token":"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED", "email":"[email protected]","encrypted_password":"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW","username":"quayadmin"} # gitleaks:allow If a user already exists in the database, an error is returned: {"message":"Cannot initialize user in a non-empty database"} If your password is not at least eight characters or contains whitespace, an error is returned: {"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."} Log in to your Red Hat Quay deployment by entering the following command: $ sudo podman login -u quayadmin -p quaypass12345 http://quay-server.example.com --tls-verify=false Example output Login Succeeded! 7.1.2. Viewing created components using the command line Use the following procedure to view deployed Red Hat Quay components. Prerequisites You have deployed Red Hat Quay on OpenShift Container Platform. Procedure Enter the following command to view the deployed components: $ oc get pods -n quay-enterprise Example output NAME READY STATUS RESTARTS AGE example-registry-clair-app-5ffc9f77d6-jwr9s 1/1 Running 0 3m42s example-registry-clair-app-5ffc9f77d6-wgp7d 1/1 Running 0 3m41s example-registry-clair-postgres-54956d6d9c-rgs8l 1/1 Running 0 3m5s example-registry-quay-app-79c6b86c7b-8qnr2 1/1 Running 4 3m42s example-registry-quay-app-79c6b86c7b-xk85f 1/1 Running 4 3m41s example-registry-quay-app-upgrade-5kl5r 0/1 Completed 4 3m50s example-registry-quay-database-b466fc4d7-tfrnx 1/1 Running 2 3m42s example-registry-quay-mirror-6d9bd78756-6lj6p 1/1 Running 0 2m58s example-registry-quay-mirror-6d9bd78756-bv6gq 1/1 Running 0 2m58s example-registry-quay-postgres-init-dzbmx 0/1 Completed 0 3m43s example-registry-quay-redis-8bd67b647-skgqx 1/1 Running 0 3m42s 7.1.3. Horizontal Pod Autoscaling A default deployment shows the following running pods: Two pods for the Red Hat Quay application itself ( example-registry-quay-app-* ) One Redis pod for Red Hat Quay logging ( example-registry-quay-redis-* ) One database pod for PostgreSQL used by Red Hat Quay for metadata storage ( example-registry-quay-database-* ) Two Quay mirroring pods ( example-registry-quay-mirror-* ) Two pods for the Clair application ( example-registry-clair-app-* ) One PostgreSQL pod for Clair ( example-registry-clair-postgres-* ) Horizontal Pod Autoscaling is configured by default to be managed , and the number of pods for Quay, Clair, and repository mirroring is set to two. This helps avoid downtime when updating or reconfiguring Red Hat Quay through the Red Hat Quay Operator or during rescheduling events. You can enter the following command to view information about HPA objects: $ oc get hpa -n quay-enterprise Example output NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE example-registry-clair-app Deployment/example-registry-clair-app 16%/90%, 0%/90% 2 10 2 13d example-registry-quay-app Deployment/example-registry-quay-app 31%/90%, 1%/90% 2 20 2 13d example-registry-quay-mirror Deployment/example-registry-quay-mirror 27%/90%, 0%/90% 2 20 2 13d Additional resources For more information on pre-configuring your Red Hat Quay deployment, see the section Pre-configuring Red Hat Quay for automation . 7.1.4. Monitoring and debugging the deployment process You can troubleshoot problems that occur during the deployment phase.
The status in the QuayRegistry object can help you monitor the health of the components during the deployment and help you debug any problems that may arise. Procedure Enter the following command to check the status of your deployment: $ oc get quayregistry -n quay-enterprise -o yaml Example output Immediately after deployment, the QuayRegistry object will show the basic configuration: apiVersion: v1 items: - apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: creationTimestamp: "2021-09-14T10:51:22Z" generation: 3 name: example-registry namespace: quay-enterprise resourceVersion: "50147" selfLink: /apis/quay.redhat.com/v1/namespaces/quay-enterprise/quayregistries/example-registry uid: e3fc82ba-e716-4646-bb0f-63c26d05e00e spec: components: - kind: postgres managed: true - kind: clair managed: true - kind: redis managed: true - kind: horizontalpodautoscaler managed: true - kind: objectstorage managed: true - kind: route managed: true - kind: mirror managed: true - kind: monitoring managed: true - kind: tls managed: true - kind: clairpostgres managed: true configBundleSecret: example-registry-config-bundle-kt55s kind: List metadata: resourceVersion: "" selfLink: "" Use the oc get pods command to view the current state of the deployed components: $ oc get pods -n quay-enterprise Example output NAME READY STATUS RESTARTS AGE example-registry-clair-app-86554c6b49-ds7bl 0/1 ContainerCreating 0 2s example-registry-clair-app-86554c6b49-hxp5s 0/1 Running 1 17s example-registry-clair-postgres-68d8857899-lbc5n 0/1 ContainerCreating 0 17s example-registry-quay-app-upgrade-h2v7h 0/1 ContainerCreating 0 9s example-registry-quay-database-66f495c9bc-wqsjf 0/1 ContainerCreating 0 17s example-registry-quay-mirror-854c88457b-d845g 0/1 Init:0/1 0 2s example-registry-quay-mirror-854c88457b-fghxv 0/1 Init:0/1 0 17s example-registry-quay-postgres-init-bktdt 0/1 Terminating 0 17s example-registry-quay-redis-f9b9d44bf-4htpz 0/1 ContainerCreating 0 17s While the deployment is in progress, the QuayRegistry object will show the current status. In this instance, database migrations are taking place, and other components are waiting until completion: status: conditions: - lastTransitionTime: "2021-09-14T10:52:04Z" lastUpdateTime: "2021-09-14T10:52:04Z" message: all objects created/updated successfully reason: ComponentsCreationSuccess status: "False" type: RolloutBlocked - lastTransitionTime: "2021-09-14T10:52:05Z" lastUpdateTime: "2021-09-14T10:52:05Z" message: running database migrations reason: MigrationsInProgress status: "False" type: Available lastUpdated: 2021-09-14 10:52:05.371425635 +0000 UTC unhealthyComponents: clair: - lastTransitionTime: "2021-09-14T10:51:32Z" lastUpdateTime: "2021-09-14T10:51:32Z" message: 'Deployment example-registry-clair-postgres: Deployment does not have minimum availability.' reason: MinimumReplicasUnavailable status: "False" type: Available - lastTransitionTime: "2021-09-14T10:51:32Z" lastUpdateTime: "2021-09-14T10:51:32Z" message: 'Deployment example-registry-clair-app: Deployment does not have minimum availability.' reason: MinimumReplicasUnavailable status: "False" type: Available mirror: - lastTransitionTime: "2021-09-14T10:51:32Z" lastUpdateTime: "2021-09-14T10:51:32Z" message: 'Deployment example-registry-quay-mirror: Deployment does not have minimum availability.' reason: MinimumReplicasUnavailable status: "False" type: Available
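Optional. To watch only the health summary instead of the full object, you can query the status subfields directly with a jsonpath expression. The following one-liner is a sketch; it assumes the example-registry name and quay-enterprise namespace used throughout this chapter:

$ oc get quayregistry example-registry -n quay-enterprise -o jsonpath='{.status.unhealthyComponents}'  # prints only the unhealthyComponents map

An empty map ( {} ) means that every component is passing its health checks.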
When the deployment process finishes successfully, the status in the QuayRegistry object shows no unhealthy components: status: conditions: - lastTransitionTime: "2021-09-14T10:52:36Z" lastUpdateTime: "2021-09-14T10:52:36Z" message: all registry component healthchecks passing reason: HealthChecksPassing status: "True" type: Available - lastTransitionTime: "2021-09-14T10:52:46Z" lastUpdateTime: "2021-09-14T10:52:46Z" message: all objects created/updated successfully reason: ComponentsCreationSuccess status: "False" type: RolloutBlocked currentVersion: {producty} lastUpdated: 2021-09-14 10:52:46.104181633 +0000 UTC registryEndpoint: https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org unhealthyComponents: {} 7.2. Deploying Red Hat Quay from the OpenShift Container Platform console Create a namespace, for example, quay-enterprise . Select Operators → Installed Operators , then select the Quay Operator to navigate to the Operator detail view. Click 'Create Instance' on the 'Quay Registry' tile under 'Provided APIs'. Optionally change the 'Name' of the QuayRegistry . This will affect the hostname of the registry. All other fields have been populated with defaults. Click 'Create' to submit the QuayRegistry to be deployed by the Quay Operator. You should be redirected to the QuayRegistry list view. Click on the QuayRegistry you just created to see the details view. Once the 'Registry Endpoint' has a value, click it to access your new Quay registry via the UI. You can now select 'Create Account' to create a user and sign in. 7.2.1. Using the Red Hat Quay UI to create the first user Use the following procedure to create the first user through the Red Hat Quay UI. Note This procedure assumes that the FEATURE_USER_CREATION config option has not been set to false. If it is false , the Create Account functionality on the UI will be disabled, and you will have to use the API to create the first user. Procedure In the OpenShift Container Platform console, navigate to Operators → Installed Operators , with the appropriate namespace / project. Click on the newly installed QuayRegistry object to view the details. For example: After the Registry Endpoint has a value, navigate to this URL in your browser. Select Create Account in the Red Hat Quay registry UI to create a user. For example: Enter the details for Username , Password , Email , and then click Create Account . For example: After creating the first user, you are automatically logged in to the Red Hat Quay registry. For example: | [
"oc new-project quay-enterprise",
"oc create secret generic quay-enterprise-config-bundle --from-file=config-bundle.tar.gz=/path/to/config-bundle.tar.gz",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: clair managed: false - kind: horizontalpodautoscaler managed: false - kind: mirror managed: false - kind: monitoring managed: false",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: init-config-bundle-secret",
"kind: QuayRegistry metadata: name: quay37 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: mirror managed: true overrides: env: - name: DEBUGLOG value: \"true\" - name: HTTP_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: HTTPS_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: NO_PROXY value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com - kind: tls managed: false - kind: clair managed: true overrides: env: - name: HTTP_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: HTTPS_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: NO_PROXY value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com - kind: quay managed: true overrides: env: - name: DEBUGLOG value: \"true\" - name: NO_PROXY value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com - name: HTTP_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: HTTPS_PROXY value: quayproxy.qe.devcluster.openshift.com:3128",
"oc create -n quay-enterprise -f quayregistry.yaml",
"oc get quayregistry -n quay-enterprise example-registry -o jsonpath=\"{.status.registryEndpoint}\" -w",
"FEATURE_USER_INITIALIZE: true SUPER_USERS: - quayadmin",
"sudo podman stop quay",
"sudo podman run -d -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}",
"curl -X POST -k http://quay-server.example.com/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ \"username\": \"quayadmin\", \"password\":\"quaypass12345\", \"email\": \"[email protected]\", \"access_token\": true}'",
"{\"access_token\":\"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\", \"email\":\"[email protected]\",\"encrypted_password\":\"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW\",\"username\":\"quayadmin\"} # gitleaks:allow",
"{\"message\":\"Cannot initialize user in a non-empty database\"}",
"{\"message\":\"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace.\"}",
"sudo podman login -u quayadmin -p quaypass12345 http://quay-server.example.com --tls-verify=false",
"Login Succeeded!",
"oc get pods -n quay-enterprise",
"NAME READY STATUS RESTARTS AGE example-registry-clair-app-5ffc9f77d6-jwr9s 1/1 Running 0 3m42s example-registry-clair-app-5ffc9f77d6-wgp7d 1/1 Running 0 3m41s example-registry-clair-postgres-54956d6d9c-rgs8l 1/1 Running 0 3m5s example-registry-quay-app-79c6b86c7b-8qnr2 1/1 Running 4 3m42s example-registry-quay-app-79c6b86c7b-xk85f 1/1 Running 4 3m41s example-registry-quay-app-upgrade-5kl5r 0/1 Completed 4 3m50s example-registry-quay-database-b466fc4d7-tfrnx 1/1 Running 2 3m42s example-registry-quay-mirror-6d9bd78756-6lj6p 1/1 Running 0 2m58s example-registry-quay-mirror-6d9bd78756-bv6gq 1/1 Running 0 2m58s example-registry-quay-postgres-init-dzbmx 0/1 Completed 0 3m43s example-registry-quay-redis-8bd67b647-skgqx 1/1 Running 0 3m42s",
"oc get hpa -n quay-enterprise",
"NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE example-registry-clair-app Deployment/example-registry-clair-app 16%/90%, 0%/90% 2 10 2 13d example-registry-quay-app Deployment/example-registry-quay-app 31%/90%, 1%/90% 2 20 2 13d example-registry-quay-mirror Deployment/example-registry-quay-mirror 27%/90%, 0%/90% 2 20 2 13d",
"oc get quayregistry -n quay-enterprise -o yaml",
"apiVersion: v1 items: - apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: creationTimestamp: \"2021-09-14T10:51:22Z\" generation: 3 name: example-registry namespace: quay-enterprise resourceVersion: \"50147\" selfLink: /apis/quay.redhat.com/v1/namespaces/quay-enterprise/quayregistries/example-registry uid: e3fc82ba-e716-4646-bb0f-63c26d05e00e spec: components: - kind: postgres managed: true - kind: clair managed: true - kind: redis managed: true - kind: horizontalpodautoscaler managed: true - kind: objectstorage managed: true - kind: route managed: true - kind: mirror managed: true - kind: monitoring managed: true - kind: tls managed: true - kind: clairpostgres managed: true configBundleSecret: example-registry-config-bundle-kt55s kind: List metadata: resourceVersion: \"\" selfLink: \"\"",
"oc get pods -n quay-enterprise",
"NAME READY STATUS RESTARTS AGE example-registry-clair-app-86554c6b49-ds7bl 0/1 ContainerCreating 0 2s example-registry-clair-app-86554c6b49-hxp5s 0/1 Running 1 17s example-registry-clair-postgres-68d8857899-lbc5n 0/1 ContainerCreating 0 17s example-registry-quay-app-upgrade-h2v7h 0/1 ContainerCreating 0 9s example-registry-quay-database-66f495c9bc-wqsjf 0/1 ContainerCreating 0 17s example-registry-quay-mirror-854c88457b-d845g 0/1 Init:0/1 0 2s example-registry-quay-mirror-854c88457b-fghxv 0/1 Init:0/1 0 17s example-registry-quay-postgres-init-bktdt 0/1 Terminating 0 17s example-registry-quay-redis-f9b9d44bf-4htpz 0/1 ContainerCreating 0 17s",
"status: conditions: - lastTransitionTime: \"2021-09-14T10:52:04Z\" lastUpdateTime: \"2021-09-14T10:52:04Z\" message: all objects created/updated successfully reason: ComponentsCreationSuccess status: \"False\" type: RolloutBlocked - lastTransitionTime: \"2021-09-14T10:52:05Z\" lastUpdateTime: \"2021-09-14T10:52:05Z\" message: running database migrations reason: MigrationsInProgress status: \"False\" type: Available lastUpdated: 2021-09-14 10:52:05.371425635 +0000 UTC unhealthyComponents: clair: - lastTransitionTime: \"2021-09-14T10:51:32Z\" lastUpdateTime: \"2021-09-14T10:51:32Z\" message: 'Deployment example-registry-clair-postgres: Deployment does not have minimum availability.' reason: MinimumReplicasUnavailable status: \"False\" type: Available - lastTransitionTime: \"2021-09-14T10:51:32Z\" lastUpdateTime: \"2021-09-14T10:51:32Z\" message: 'Deployment example-registry-clair-app: Deployment does not have minimum availability.' reason: MinimumReplicasUnavailable status: \"False\" type: Available mirror: - lastTransitionTime: \"2021-09-14T10:51:32Z\" lastUpdateTime: \"2021-09-14T10:51:32Z\" message: 'Deployment example-registry-quay-mirror: Deployment does not have minimum availability.' reason: MinimumReplicasUnavailable status: \"False\" type: Available",
"status: conditions: - lastTransitionTime: \"2021-09-14T10:52:36Z\" lastUpdateTime: \"2021-09-14T10:52:36Z\" message: all registry component healthchecks passing reason: HealthChecksPassing status: \"True\" type: Available - lastTransitionTime: \"2021-09-14T10:52:46Z\" lastUpdateTime: \"2021-09-14T10:52:46Z\" message: all objects created/updated successfully reason: ComponentsCreationSuccess status: \"False\" type: RolloutBlocked currentVersion: {producty} lastUpdated: 2021-09-14 10:52:46.104181633 +0000 UTC registryEndpoint: https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org unhealthyComponents: {}"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-deploy |
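The console flow in section 7.2 can also be verified from the command line. A minimal sketch, assuming the example-registry name and quay-enterprise namespace used throughout this chapter, and relying on the Available condition shown in the status excerpts above:

$ # Block until the Operator reports the registry Available; the 10m timeout is an arbitrary choice.
$ oc wait quayregistry/example-registry -n quay-enterprise --for=condition=Available --timeout=10m
$ # Print the registry endpoint once the status field is populated.
$ oc get quayregistry/example-registry -n quay-enterprise -o jsonpath='{.status.registryEndpoint}'

Because oc wait polls the same status.conditions list shown in the YAML excerpts, it returns as soon as the registry health checks pass.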
Chapter 108. Verifying permissions of IdM configuration files using Healthcheck | Chapter 108. Verifying permissions of IdM configuration files using Healthcheck Learn more about how to test Identity Management (IdM) configuration files using the Healthcheck tool. For details, see Healthcheck in IdM . Prerequisites The Healthcheck tool is only available on RHEL 8.1 or newer systems. 108.1. File permissions Healthcheck tests The Healthcheck tool tests ownership and permissions of some important files installed or configured by Identity Management (IdM). If you change the ownership or permissions of any tested file, the test returns a warning in the result section. While it does not necessarily mean that the configuration will not work, it means that the file differs from the default configuration. To see all tests, run the ipa-healthcheck tool with the --list-sources option: You can find the file permissions test under the ipahealthcheck.ipa.files source: IPAFileNSSDBCheck This test checks the 389-ds NSS database and the Certificate Authority (CA) database. The 389-ds database is located in /etc/dirsrv/slapd-<dashed-REALM> and the CA database is located in /etc/pki/pki-tomcat/alias/ . IPAFileCheck This test checks the following files: /var/lib/ipa/ra-agent.{key|pem} /var/lib/ipa/certs/httpd.pem /var/lib/ipa/private/httpd.key /etc/httpd/alias/ipasession.key /etc/dirsrv/ds.keytab /etc/ipa/ca.crt /etc/ipa/custodia/server.keys If PKINIT is enabled: /var/lib/ipa/certs/kdc.pem /var/lib/ipa/private/kdc.key If DNS is configured: /etc/named.keytab /etc/ipa/dnssec/ipa-dnskeysyncd.keytab TomcatFileCheck This test checks some tomcat-specific files if a CA is configured: /etc/pki/pki-tomcat/password.conf /var/lib/pki/pki-tomcat/conf/ca/CS.cfg /etc/pki/pki-tomcat/server.xml Note Run these tests on all IdM servers when trying to find issues. 108.2. Screening configuration files using Healthcheck Follow this procedure to run a standalone manual test of an Identity Management (IdM) server's configuration files using the Healthcheck tool. The Healthcheck tool includes many tests. Results can be narrowed down by: Excluding all successful tests: --failures-only Including only ownership and permissions tests: --source=ipahealthcheck.ipa.files Procedure To run Healthcheck tests on IdM configuration file ownership and permissions, while displaying only warnings, errors and critical issues, enter: A successful test displays empty brackets: Failed tests display results similar to the following WARNING : Additional resources See man ipa-healthcheck . | [
"ipa-healthcheck --list-sources",
"ipa-healthcheck --source=ipahealthcheck.ipa.files --failures-only",
"ipa-healthcheck --source=ipahealthcheck.ipa.files --failures-only []",
"{ \"source\": \"ipahealthcheck.ipa.files\", \"check\": \"IPAFileNSSDBCheck\", \"result\": \"WARNING\", \"kw\": { \"key\": \"_etc_dirsrv_slapd-EXAMPLE-TEST_pkcs11.txt_mode\", \"path\": \"/etc/dirsrv/slapd-EXAMPLE-TEST/pkcs11.txt\", \"type\": \"mode\", \"expected\": \"0640\", \"got\": \"0666\", \"msg\": \"Permissions of /etc/dirsrv/slapd-EXAMPLE-TEST/pkcs11.txt are 0666 and should be 0640\" } }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/verifying-permissions-of-idm-configuration-files-using-healthcheck_configuring-and-managing-idm |
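For scripted screening, the JSON that ipa-healthcheck prints by default can be post-processed with jq. A minimal sketch, assuming jq is installed; the check, result, and kw fields match the WARNING example shown above:

$ # Keep only the checks that did not succeed, with a compact summary of each.
$ ipa-healthcheck --source=ipahealthcheck.ipa.files | jq '.[] | select(.result != "SUCCESS") | {check, result, msg: .kw.msg}'

This is similar in effect to --failures-only, but keeps the results as JSON objects, which is convenient when feeding them into monitoring or a ticketing system.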
Chapter 4. Installing a cluster on Nutanix in a restricted network | Chapter 4. Installing a cluster on Nutanix in a restricted network In OpenShift Container Platform 4.16, you can install a cluster on Nutanix infrastructure in a restricted network by creating an internal mirror of the installation release content. 4.1. Prerequisites You have reviewed details about the OpenShift Container Platform installation and update processes. The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. If you use a firewall, you have met these prerequisites: You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed. You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry. If your Nutanix environment is using the default self-signed SSL/TLS certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access to the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide . If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI . Important Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x. You have a container image registry, such as Red Hat Quay. If you do not already have a registry, you can create a mirror registry using mirror registry for Red Hat OpenShift . You have used the oc-mirror OpenShift CLI (oc) plugin to mirror all of the required OpenShift Container Platform content and other images, including the Nutanix CSI Operator, to your mirror registry. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. 4.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 4.3. 
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Adding Nutanix root CA certificates to your system trust Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the Prism Central web console, download the Nutanix root CA certificates.
Extract the compressed file that contains the Nutanix root CA certificates. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 4.5. Downloading the RHCOS cluster image Prism Central requires access to the Red Hat Enterprise Linux CoreOS (RHCOS) image to install the cluster. You can use the installation program to locate and download the RHCOS image and make it available through an internal HTTP server or Nutanix Objects. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Procedure Change to the directory that contains the installation program and run the following command: $ ./openshift-install coreos print-stream-json Use the output of the command to find the location of the Nutanix image, and click the link to download it. Example output "nutanix": { "release": "411.86.202210041459-0", "formats": { "qcow2": { "disk": { "location": "https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "sha256": "42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b" Make the image available through an internal HTTP server or Nutanix Objects. Note the location of the downloaded image. You update the platform section in the installation configuration file ( install-config.yaml ) with the image's location before deploying the cluster. Snippet of an install-config.yaml file that specifies the RHCOS image platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2 4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Nutanix. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSourcePolicy.yaml file that was created when you mirrored your registry. You have the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image you download. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have verified that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix". Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory.
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select nutanix as the platform to target. Enter the Prism Central domain name or IP address. Enter the port that is used to log into Prism Central. Enter the credentials that are used to log into Prism Central. The installation program connects to Prism Central. Select the Prism Element that will manage the OpenShift Container Platform cluster. Select the network subnet to use. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you configured in the DNS records. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. In the install-config.yaml file, set the value of platform.nutanix.clusterOSImage to the image location or name. For example: platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Optional: Update one or more of the default configuration parameters in the install.config.yaml file to customize the installation. For more information about the parameters, see "Installation configuration parameters". Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on {platform}". 
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Nutanix 4.6.1. Sample customized install-config.yaml file for Nutanix You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 10 12 13 16 17 18 19 Required. The installation program prompts you for this value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 5 9 14 Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 15 Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines. 20 Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server or Nutanix Objects and pointing the installation program to the image. 21 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 Optional: You can provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 Provide the contents of the certificate file that you used for your mirror registry. 25 Provide these values from the metadata.name: release-0 section of the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. 4.6.2. Configuring failure domains Failure domains improve the fault tolerance of an OpenShift Container Platform cluster by distributing control plane and compute machines across multiple Nutanix Prism Elements (clusters). Tip It is recommended that you configure three failure domains to ensure high-availability. Prerequisites You have an installation configuration file ( install-config.yaml ). Procedure Edit the install-config.yaml file and add the following stanza to configure the first failure domain: apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid> # ... 
where: <failure_domain_name> Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash ( - ). The dash cannot be in the leading or ending position of the name. <prism_element_name> Optional. Specifies the name of the Prism Element. <prism_element_uuid > Specifies the UUID of the Prism Element. <network_uuid > Specifies the UUID of the Prism Element subnet object. The subnet's IP address prefix (CIDR) should contain the virtual IP addresses that the OpenShift Container Platform cluster uses. Only one subnet per failure domain (Prism Element) in an OpenShift Container Platform cluster is supported. As required, configure additional failure domains. To distribute control plane and compute machines across the failure domains, do one of the following: If compute and control plane machines can share the same set of failure domains, add the failure domain names under the cluster's default machine configuration. Example of control plane and compute machines sharing a set of failure domains apiVersion: v1 baseDomain: example.com compute: # ... platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... If compute and control plane machines must use different failure domains, add the failure domain names under the respective machine pools. Example of control plane and compute machines using different failure domains apiVersion: v1 baseDomain: example.com compute: # ... controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 # ... compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 # ... Save the file. 4.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: $ ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: $ echo $PATH Verification After you install the OpenShift CLI, it is available using the oc command: $ oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH .
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: $ echo $PATH Verification Verify your installation by using an oc command: $ oc <command> 4.8. Configuring IAM for Nutanix Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets. Prerequisites You have configured the ccoctl binary. You have an install-config.yaml file. Procedure Create a YAML file that contains the credentials data in the following format: Credentials data format credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element> 1 Specify the authentication type. Only basic authentication is supported. 2 Specify the Prism Central credentials. 3 Optional: Specify the Prism Element credentials. Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: $ oc adm release extract \ --from=$RELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: "true" labels: controller-tools.k8s.io: "1.0" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api Use the ccoctl tool to process all CredentialsRequest objects by running the following command: $ ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 ... 1 Add this line to set the credentialsMode parameter to Manual . Create the installation manifests by running the following command: $ openshift-install create manifests --dir <installation_directory> 1 1 Specify the path to the directory that contains the install-config.yaml file for your cluster. Copy the generated credential files to the target manifests directory by running the following command: $ cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests Verification Ensure that the appropriate secrets exist in the manifests directory. $ ls ./<installation_directory>/manifests Example output cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml 4.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: $ ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2
2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.10. Post installation Complete the following steps to complete the configuration of your cluster. 4.10.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: $ oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 4.10.2. Installing the policy resources into the cluster Mirroring the OpenShift Container Platform content using the oc-mirror OpenShift CLI (oc) plugin creates resources, which include catalogSource-certified-operator-index.yaml and imageContentSourcePolicy.yaml . The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. After you install the cluster, you must install these resources into the cluster.
Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. Apply the YAML files from the results directory to the cluster: $ oc apply -f ./oc-mirror-workspace/results-<id>/ Verification Verify that the ImageContentSourcePolicy resources were successfully installed: $ oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed: $ oc get catalogsource --all-namespaces 4.10.3. Configuring the default storage container After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster. For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage . 4.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 4.12. Additional resources About remote health monitoring 4.13. Next steps If necessary, see Opt out of remote health reporting If necessary, see Registering your disconnected cluster Customize your cluster | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install coreos print-stream-json",
"\"nutanix\": { \"release\": \"411.86.202210041459-0\", \"formats\": { \"qcow2\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2\", \"sha256\": \"42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b\"",
"platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: example.com compute: platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid>",
"apiVersion: v1 baseDomain: example.com compute: platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3",
"apiVersion: v1 baseDomain: example.com compute: controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1",
"openshift-install create manifests --dir <installation_directory> 1",
"cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests",
"ls ./<installation_directory>/manifests",
"cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc apply -f ./oc-mirror-workspace/results-<id>/",
"oc get imagecontentsourcepolicy",
"oc get catalogsource --all-namespaces"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_nutanix/installing-restricted-networks-nutanix-installer-provisioned |
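Because a restricted network installation depends on a manually staged RHCOS image, it is worth verifying the download against the sha256 value from the stream metadata before uploading it to your internal HTTP server or Nutanix Objects. A minimal sketch; the jq path is an assumption based on the stream snippet shown earlier, and the file name is a placeholder:

$ # Print the expected digest for the Nutanix qcow2 artifact.
$ ./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.nutanix.formats["qcow2"].disk.sha256'
$ # Compare it against the downloaded image; note the two spaces between digest and file name.
$ echo "<expected_sha256>  rhcos-<version>-nutanix.x86_64.qcow2" | sha256sum --check

A mismatch usually indicates a truncated or corrupted download, and the image should be fetched again before you reference it in install-config.yaml.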
Chapter 12. Expanding the cluster | Chapter 12. Expanding the cluster You can expand a cluster installed with the Assisted Installer by adding hosts using the user interface or the API. 12.1. Prerequisites You must have access to an Assisted Installer cluster. You must install the OpenShift CLI ( oc ). Ensure that all the required DNS records exist for the cluster that you are adding the worker node to. If you are adding a worker node to a cluster with multiple CPU architectures, you must ensure that the architecture is set to multi . If you are adding arm64 , IBM Power , or IBM zSystems compute nodes to an existing x86_64 cluster, use a platform that supports a mixed architecture. For details, see Installing a mixed architecture cluster . Additional resources Installing with the Assisted Installer API Installing with the Assisted Installer UI Adding hosts with the Assisted Installer API Adding hosts with the Assisted Installer UI 12.2. Checking for multiple architectures When adding a node to a cluster with multiple architectures, ensure that the architecture setting is set to multi . Procedure Log in to the cluster using the CLI. Check the architecture setting: $ oc adm release info -o json | jq .metadata.metadata Ensure that the architecture setting is set to 'multi'. { "release.openshift.io/architecture": "multi" } 12.3. Adding hosts with the UI You can add hosts to clusters that were created using the Assisted Installer . Important Adding hosts to Assisted Installer clusters is only supported for clusters running OpenShift Container Platform version 4.11 and up. Procedure Log in to OpenShift Cluster Manager and click the cluster that you want to expand. Click Add hosts and download the discovery ISO for the new host, adding an SSH public key and configuring cluster-wide proxy settings as needed. Optional: Modify ignition files as needed. Boot the target host using the discovery ISO, and wait for the host to be discovered in the console. Select the host role. It can be either a worker or a control plane host. Start the installation. As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. When prompted, approve the pending CSRs to complete the installation. When the host is successfully installed, it is listed as a host in the cluster web console. Important New hosts will be encrypted using the same method as the original cluster. 12.4. Adding hosts with the API You can add hosts to clusters using the Assisted Installer REST API. Prerequisites Install the OpenShift Cluster Manager CLI ( ocm ). Log in to OpenShift Cluster Manager as a user with cluster creation privileges. Install jq . Ensure that all the required DNS records exist for the cluster that you want to expand. Procedure Authenticate against the Assisted Installer REST API and generate an API token for your session. The generated token is valid for 15 minutes only. Set the $API_URL variable by running the following command: $ export API_URL=<api_url> 1 1 Replace <api_url> with the Assisted Installer API URL, for example, https://api.openshift.com Import the cluster by running the following commands: Set the $CLUSTER_ID variable.
Log in to the cluster and run the following command: USD export CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}') Set the USDCLUSTER_REQUEST variable that is used to import the cluster: USD export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id "USDCLUSTER_ID" '{ "api_vip_dnsname": "<api_vip>", 1 "openshift_cluster_id": USDCLUSTER_ID, "name": "<openshift_cluster_name>" 2 }') 1 Replace <api_vip> with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node that the host can reach. For example, api.compute-1.example.com . 2 Replace <openshift_cluster_name> with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation. Import the cluster and set the USDCLUSTER_ID variable. Run the following command: USD CLUSTER_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer USD{API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \ -d "USDCLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id') Generate the InfraEnv resource for the cluster and set the USDINFRA_ENV_ID variable by running the following commands: Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com . Set the USDINFRA_ENV_REQUEST variable: USD export INFRA_ENV_REQUEST=USD(jq --null-input \ --slurpfile pull_secret <path_to_pull_secret_file> \ 1 --arg ssh_pub_key "USD(cat <path_to_ssh_pub_key>)" \ 2 --arg cluster_id "USDCLUSTER_ID" '{ "name": "<infraenv_name>", 3 "pull_secret": USDpull_secret[0] | tojson, "cluster_id": USDcluster_id, "ssh_authorized_key": USDssh_pub_key, "image_type": "<iso_image_type>" 4 }') 1 Replace <path_to_pull_secret_file> with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at console.redhat.com . 2 Replace <path_to_ssh_pub_key> with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode. 3 Replace <infraenv_name> with the plain text name for the InfraEnv resource. 4 Replace <iso_image_type> with the ISO image type, either full-iso or minimal-iso . Post the USDINFRA_ENV_REQUEST to the /v2/infra-envs API and set the USDINFRA_ENV_ID variable: USD INFRA_ENV_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer USD{API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' -d "USDINFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id') Get the URL of the discovery ISO for the cluster host by running the following command: USD curl -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -r '.download_url' Example output https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12 Download the ISO: USD curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1 1 Replace <iso_url> with the URL for the ISO from the previous step. Boot the new worker host from the downloaded rhcos-live-minimal.iso . Get the list of hosts in the cluster that are not installed.
Keep running the following command until the new host shows up: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -r '.hosts[] | select(.status != "installed").id' Example output 2294ba03-c264-4f11-ac08-2f1bb2f8c296 Set the USDHOST_ID variable for the new host, for example: USD HOST_ID=<host_id> 1 1 Replace <host_id> with the host ID from the previous step. Check that the host is ready to install by running the following command: Note Ensure that you copy the entire command including the complete jq expression. USD curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H "Authorization: Bearer USD{API_TOKEN}" | jq ' def host_name(USDhost): if (.suggested_hostname // "") == "" then if (.inventory // "") == "" then "Unknown hostname, please wait" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): ["failure", "pending", "error"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // "{}" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { "Hosts validations": { "Hosts": [ .hosts[] | select(.status != "installed") | { "id": .id, "name": host_name(.), "status": .status, "notable_validations": notable_validations(.validations_info) } ] }, "Cluster validations info": { "notable_validations": notable_validations(.validations_info) } } ' -r Example output { "Hosts validations": { "Hosts": [ { "id": "97ec378c-3568-460c-bc22-df54534ff08f", "name": "localhost.localdomain", "status": "insufficient", "notable_validations": [ { "id": "ntp-synced", "status": "failure", "message": "Host couldn't synchronize with any NTP server" }, { "id": "api-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "api-int-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "apps-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" } ] } ] }, "Cluster validations info": { "notable_validations": [] } } When the command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command: USD curl -X POST -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install" -H "Authorization: Bearer USD{API_TOKEN}" As the installation proceeds, it generates pending certificate signing requests (CSRs) for the host. Important You must approve the CSRs to complete the installation. Keep running the following API call to monitor the cluster installation: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq '{ "Cluster day-2 hosts": [ .hosts[] | select(.status != "installed") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }' Example output { "Cluster day-2 hosts": [ { "id": "a1c52dde-3432-4f59-b2ae-0a530c851480", "requested_hostname": "control-plane-1", "status": "added-to-existing-cluster", "status_info": "Host has rebooted and no further updates will be posted.
Please check console for progress and to possibly approve pending CSRs", "progress": { "current_stage": "Done", "installation_percentage": 100, "stage_started_at": "2022-07-08T10:56:20.476Z", "stage_updated_at": "2022-07-08T10:56:20.476Z" }, "status_updated_at": "2022-07-08T10:56:20.476Z", "updated_at": "2022-07-08T10:57:15.306369Z", "infra_env_id": "b74ec0c3-d5b5-4717-a866-5b6854791bd3", "cluster_id": "8f721322-419d-4eed-aa5b-61b50ea586ae", "created_at": "2022-07-06T22:54:57.161614Z" } ] } Optional: Run the following command to see all the events for the cluster: USD curl -s "USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}' Example output {"severity":"info","message":"Host compute-0: updated status from insufficient to known (Host is ready to be installed)","event_time":"2022-07-08T11:21:46.346Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from known to installing (Installation is in progress)","event_time":"2022-07-08T11:28:28.647Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing to installing-in-progress (Starting installation)","event_time":"2022-07-08T11:28:52.068Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae","event_time":"2022-07-08T11:29:47.802Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)","event_time":"2022-07-08T11:29:48.259Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host: compute-0, reached installation stage Rebooting","event_time":"2022-07-08T11:29:48.261Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} Log in to the cluster and approve the pending CSRs to complete the installation. Verification Check that the new host was successfully added to the cluster with a status of Ready : USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.25.0 compute-1.example.com Ready worker 11m v1.25.0 12.5. Installing a mixed-architecture cluster From OpenShift Container Platform version 4.12.0 and later, a cluster with an x86_64 control plane can support mixed-architecture worker nodes of two different CPU architectures. Mixed-architecture clusters combine the strengths of each architecture and support a variety of workloads. From version 4.12.0, you can add arm64 worker nodes to an existing OpenShift cluster with an x86_64 control plane. From version 4.14.0, you can add IBM Power or IBM zSystems worker nodes to an existing x86_64 control plane. The main steps of the installation are as follows: Create and register a multi-architecture cluster. Create an x86_64 infrastructure environment, download the ISO for x86_64 , and add the control plane. The control plane must have the x86_64 architecture. Create an arm64 , IBM Power or IBM zSystems infrastructure environment, download the ISO for arm64 , IBM Power or IBM zSystems , and add the worker nodes. These steps are detailed in the procedure below. 
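At any point during a mixed-architecture deployment, you can confirm the CPU architecture of each node from the standard kubernetes.io/arch node label. This is a quick sketch that relies only on well-known Kubernetes labels, not on anything specific to this procedure: USD oc get nodes -L kubernetes.io/arch The added label column shows amd64 , arm64 , ppc64le , or s390x for each node, which makes it easy to verify that Day 2 workers came up with the expected architecture.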
Supported platforms The list below shows the platforms that support a mixed-architecture cluster for each OpenShift Container Platform version. Use the appropriate platforms for the version you are installing. 4.12.0: Microsoft Azure (TP), with an x86_64 Day 1 control plane and arm64 Day 2 nodes. 4.13.0: Microsoft Azure, Amazon Web Services, and Bare Metal (TP), each with an x86_64 Day 1 control plane and arm64 Day 2 nodes. 4.14.0: Microsoft Azure, Amazon Web Services, Bare Metal, and Google Cloud Platform, each with an x86_64 Day 1 control plane and arm64 Day 2 nodes; IBM(R) Power(R), with an x86_64 Day 1 control plane and ppc64le Day 2 nodes; and IBM Z(R), with an x86_64 Day 1 control plane and s390x Day 2 nodes. Important Technology Preview (TP) features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Main steps Start the procedure for installing OpenShift Container Platform using the API. For details, see Installing with the Assisted Installer API in the Additional Resources section. When you reach the "Registering a new cluster" step of the installation, register the cluster as a multi-architecture cluster: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "<version-number>-multi", 1 "cpu_architecture" : "multi", 2 "high_availability_mode": "full", 3 "base_dns_domain": "example.com", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Note 1 Use the multi- option for the OpenShift version number; for example, "4.12-multi" . 2 Set the cpu_architecture field to "multi" . 3 Use the full value to indicate Multi-Node OpenShift. When you reach the "Registering a new infrastructure environment" step of the installation, set cpu_architecture to x86_64 : USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt \ --arg cluster_id USD{CLUSTER_ID} ' { "name": "testcluster-infra-env", "image_type":"full-iso", "cluster_id": USDcluster_id, "cpu_architecture" : "x86_64", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' When you reach the "Adding hosts" step of the installation, set host_role to master : Note For more information, see Assigning Roles to Hosts in Additional Resources . USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"master" } ' | jq Download the discovery image for the x86_64 architecture. Boot the x86_64 architecture hosts using the generated discovery image. Start the installation and wait for the cluster to be fully installed. Repeat the "Registering a new infrastructure environment" step of the installation. This time, set cpu_architecture to one of the following: ppc64le (for IBM Power), s390x (for IBM Z), or arm64 .
For example: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt \ --arg cluster_id USD{CLUSTER_ID} ' { "name": "testcluster-infra-env-arm64", "image_type":"full-iso", "cluster_id": USDcluster_id, "cpu_architecture" : "arm64", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Repeat the "Adding hosts" step of the installation. This time, set host_role to worker : Note For more details, see Assigning Roles to Hosts in Additional Resources . USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"worker" } ' | jq Download the discovery image for the arm64 , ppc64le , or s390x architecture. Boot the hosts using the generated discovery image. Start the installation and wait for the cluster to be fully installed. Verification View the arm64 , ppc64le , or s390x worker nodes in the cluster by running the following command: USD oc get nodes -o wide 12.6. Installing a primary control plane node on a healthy cluster This procedure describes how to install a primary control plane node on a healthy OpenShift Container Platform cluster. If the cluster is unhealthy, additional operations are required before you can manage it. See Additional Resources for more information. Prerequisites You are using OpenShift Container Platform 4.11 or newer with the correct etcd-operator version. You have installed a healthy cluster with a minimum of three nodes. You have assigned role: master to a single node. Procedure Review and approve CSRs Review the CertificateSigningRequests (CSRs): USD oc get csr | grep Pending Example output csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending Approve all pending CSRs: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Important You must approve the CSRs to complete the installation. Confirm the primary node is in Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 4h42m v1.24.0+3882f8f worker-1 Ready worker 4h29m v1.24.0+3882f8f master-2 Ready master 4h43m v1.24.0+3882f8f master-3 Ready master 4h27m v1.24.0+3882f8f worker-4 Ready worker 4h30m v1.24.0+3882f8f master-5 Ready master 105s v1.24.0+3882f8f Note The etcd-operator requires Machine Custom Resources (CRs) referencing the new node when the cluster runs with a functional Machine API.
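Before creating the new CRs, it can help to confirm that the new node does not already have a Machine resource; the following is a quick sketch using the standard machine API namespace: USD oc get machines -n openshift-machine-api -o wide In the wide output, check the NODE column for an entry matching the new node. If none exists, create the BareMetalHost and Machine CRs as described in the next step.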
Link the Machine CR with BareMetalHost and Node : Create the BareMetalHost CR with a unique .metadata.name value: apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: custom-master3 namespace: openshift-machine-api annotations: spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api USD oc create -f <filename> Apply the BareMetalHost CR: USD oc apply -f <filename> Create the Machine CR using the unique .metadata.name value: apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/custom-master3 finalizers: - machine.machine.openshift.io generation: 3 labels: machine.openshift.io/cluster-api-cluster: test-day2-1-6qv96 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: custom-master3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: "" url: "" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed USD oc create -f <filename> Apply the Machine CR: USD oc apply -f <filename> Link BareMetalHost , Machine , and Node using the link-machine-and-node.sh script: #!/bin/bash # Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. # This script will link Machine object and Node object. This is needed # in order to have IP address of the Node present in the status of the Machine. set -x set -e machine="USD1" node="USD2" if [ -z "USDmachine" -o -z "USDnode" ]; then echo "Usage: USD0 MACHINE NODE" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') oc proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH="http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name="USD1" url="USD2" timeout="USD3" shift 3 curl_opts="USD@" echo -n "Waiting for USDname to respond" start_time=USD(date +%s) until curl -g -X GET "USDurl" "USD{curl_opts[@]}" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n "." curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo "\nTimed out waiting for USDname" return 1 fi sleep 5 done echo " Success!" return 0 } wait_for_json oc_proxy "USD{HOST_PROXY_API_PATH}" 10 -H "Accept: application/json" -H "Content-Type: application/json" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo "USDmachine_data" | jq '.metadata.annotations["metal3.io/BareMetalHost"]' | cut -f2 -d/ | sed 's/"//g') if [ -z "USDhost" ]; then echo "Machine USDmachine is not linked to a host yet." 1>&2 exit 1 fi # The address structure on the host doesn't match the node, so extract # the values we want into separate variables so we can build the patch # we need. hostname=USD(echo "USD{addresses}" | jq '.[] | select(.
| .type == "Hostname") | .address' | sed 's/"//g') ipaddr=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "InternalIP") | .address' | sed 's/"//g') host_patch=' { "status": { "hardware": { "hostname": "'USD{hostname}'", "nics": [ { "ip": "'USD{ipaddr}'", "mac": "00:00:00:00:00:00", "model": "unknown", "speedGbps": 10, "vlanId": 0, "pxe": true, "name": "eth1" } ], "systemVendor": { "manufacturer": "Red Hat", "productName": "product name", "serialNumber": "" }, "firmware": { "bios": { "date": "04/01/2014", "vendor": "SeaBIOS", "version": "1.11.0-2.el7" } }, "ramMebibytes": 0, "storage": [], "cpu": { "arch": "x86_64", "model": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz", "clockMegahertz": 2199.998, "count": 4, "flags": [] } } } } ' echo "PATCHING HOST" echo "USD{host_patch}" | jq . curl -s \ -X PATCH \ USD{HOST_PROXY_API_PATH}/USD{host}/status \ -H "Content-type: application/merge-patch+json" \ -d "USD{host_patch}" oc get baremetalhost -n openshift-machine-api -o yaml "USD{host}" USD bash link-machine-and-node.sh custom-master3 worker-5 Confirm etcd members: USD oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm the etcd-operator configuration applies to all nodes: USD oc get clusteroperator etcd Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE etcd 4.11.5 True False False 5h54m Confirm etcd-operator health: USD oc rsh -n openshift-etcd etcd-worker-0 etcdctl endpoint health Example output 192.168.111.26 is healthy: committed proposal: took = 11.297561ms 192.168.111.25 is healthy: committed proposal: took = 13.892416ms 192.168.111.28 is healthy: committed proposal: took = 11.870755ms Confirm node health: USD oc get Nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 6h20m v1.24.0+3882f8f worker-1 Ready worker 6h7m v1.24.0+3882f8f master-2 Ready master 6h20m v1.24.0+3882f8f master-3 Ready master 6h4m v1.24.0+3882f8f worker-4 Ready worker 6h7m v1.24.0+3882f8f master-5 Ready master 99m v1.24.0+3882f8f Confirm the ClusterOperators health: USD oc get ClusterOperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MSG authentication 4.11.5 True False False 5h57m baremetal 4.11.5 True False False 6h19m cloud-controller-manager 4.11.5 True False False 6h20m cloud-credential 4.11.5 True False False 6h23m cluster-autoscaler 4.11.5 True False False 6h18m config-operator 4.11.5 True False False 6h19m console 4.11.5 True False False 6h4m csi-snapshot-controller 4.11.5 True False False 6h19m dns 4.11.5 True False False 6h18m etcd 4.11.5 True False False 6h17m image-registry 4.11.5 True False False 6h7m ingress 4.11.5 True False False 6h6m insights 4.11.5 True False False 6h12m kube-apiserver 4.11.5 True False False 6h16m kube-controller-manager 4.11.5 True False False 6h16m kube-scheduler 4.11.5 True False False 6h16m kube-storage-version-migrator 4.11.5 True False False 6h19m machine-api 4.11.5 True False False 6h15m machine-approver 4.11.5 True False False 6h19m machine-config 4.11.5 True False False 6h18m marketplace 4.11.5 
True False False 6h18m monitoring 4.11.5 True False False 6h4m network 4.11.5 True False False 6h20m node-tuning 4.11.5 True False False 6h18m openshift-apiserver 4.11.5 True False False 6h8m openshift-controller-manager 4.11.5 True False False 6h7m openshift-samples 4.11.5 True False False 6h12m operator-lifecycle-manager 4.11.5 True False False 6h18m operator-lifecycle-manager-catalog 4.11.5 True False False 6h19m operator-lifecycle-manager-pkgsvr 4.11.5 True False False 6h12m service-ca 4.11.5 True False False 6h19m storage 4.11.5 True False False 6h19m Confirm the ClusterVersion : USD oc get ClusterVersion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 5h57m Cluster version is 4.11.5 Remove the old control plane node: Delete the BareMetalHost CR: USD oc delete bmh -n openshift-machine-api custom-master3 Confirm the Machine is unhealthy: USD oc get machine -A Example output NAMESPACE NAME PHASE AGE openshift-machine-api custom-master3 Running 14h openshift-machine-api test-day2-1-6qv96-master-0 Failed 20h openshift-machine-api test-day2-1-6qv96-master-1 Running 20h openshift-machine-api test-day2-1-6qv96-master-2 Running 20h openshift-machine-api test-day2-1-6qv96-worker-0-8w7vr Running 19h openshift-machine-api test-day2-1-6qv96-worker-0-rxddj Running 19h Delete the Machine CR: USD oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-0 machine.machine.openshift.io "test-day2-1-6qv96-master-0" deleted Confirm removal of the Node CR: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION worker-1 Ready worker 19h v1.24.0+3882f8f master-2 Ready master 20h v1.24.0+3882f8f master-3 Ready master 19h v1.24.0+3882f8f worker-4 Ready worker 19h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f Check etcd-operator logs to confirm status of the etcd cluster: USD oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf Example output E0927 07:53:10.597523 1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource Remove the physical machine to allow etcd-operator to reconcile the cluster members: USD oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table; etcdctl endpoint health Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ 192.168.111.26 is healthy: committed proposal: took = 10.458132ms 192.168.111.25 is healthy: committed proposal: took = 11.047349ms 192.168.111.28 is healthy: committed proposal: took = 11.414402ms Additional resources Installing a primary control plane node on an unhealthy cluster 12.7. Installing a primary control plane node on an unhealthy cluster This procedure describes how to install a primary control plane node on an unhealthy OpenShift Container Platform cluster. Prerequisites You are using OpenShift Container Platform 4.11 or newer with the correct etcd-operator version. You have installed a healthy cluster with a minimum of two nodes. You have created the Day 2 control plane. 
You have assigned role: master to a single node. Procedure Confirm initial state of the cluster: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION worker-1 Ready worker 20h v1.24.0+3882f8f master-2 NotReady master 20h v1.24.0+3882f8f master-3 Ready master 20h v1.24.0+3882f8f worker-4 Ready worker 20h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f Confirm that the etcd-operator detects the cluster as unhealthy: USD oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf Example output E0927 08:24:23.983733 1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, worker-2 is unhealthy Confirm the etcd members: USD oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm that etcdctl reports an unhealthy member of the cluster: USD etcdctl endpoint health Example output {"level":"warn","ts":"2022-09-27T08:25:35.953Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000680380/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster Remove the unhealthy control plane by deleting the Machine Custom Resource: USD oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-2 Note The Machine and Node Custom Resources (CRs) will not be deleted if the unhealthy cluster cannot run successfully.
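Because the deletion is blocked, the Machine CR remains in a terminating state with a deletion timestamp set. A quick way to confirm this, sketched here with the machine name used in this example: USD oc get machine -n openshift-machine-api test-day2-1-6qv96-master-2 -o jsonpath='{.metadata.deletionTimestamp}{"\n"}' A non-empty timestamp on a resource that still exists means a finalizer or deletion hook is holding it, which is expected here until the etcd member is removed in the following steps.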
Confirm that etcd-operator has not removed the unhealthy machine: USD oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf -f Example output I0927 08:58:41.249222 1 machinedeletionhooks.go:135] skip removing the deletion hook from machine test-day2-1-6qv96-master-2 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.26}] Remove the unhealthy etcd member manually. First, list the members to identify the unhealthy one: USD oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm that etcdctl reports an unhealthy member of the cluster: USD etcdctl endpoint health Example output {"level":"warn","ts":"2022-09-27T10:31:07.227Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000d6e00/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster Remove the unhealthy member from the etcd cluster by its member ID: USD etcdctl member remove 61e2a86084aafa62 Example output Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7 Confirm the remaining etcd members by running the following command: USD etcdctl member list -w table Example output +----------+---------+--------+--------------+--------------+-------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +----------+---------+--------+--------------+--------------+-------+ | 2c18942f | started |worker-3|192.168.111.26|192.168.111.26| false | | ead4f280 | started |worker-5|192.168.111.28|192.168.111.28| false | +----------+---------+--------+--------------+--------------+-------+ Review and approve Certificate Signing Requests Review the Certificate Signing Requests (CSRs): USD oc get csr | grep Pending Example output csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending Approve all pending CSRs: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note You must approve the CSRs to complete the installation. Confirm ready status of the control plane node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 17h v1.24.0+3882f8f master-6 Ready master 2m52s v1.24.0+3882f8f Validate the Machine , Node , and BareMetalHost Custom Resources.
The etcd-operator requires Machine CRs to be present if the cluster is running with a functional Machine API. Machine CRs are displayed during the Running phase when present. Create a Machine CR linked with the BareMetalHost and Node CRs. Make sure there is a Machine CR referencing the newly added node. Important Boot-it-yourself will not create BareMetalHost and Machine CRs, so you must create them. Failure to create the BareMetalHost and Machine CRs will generate errors when running etcd-operator . Add the BareMetalHost CR. Create a definition file as in the healthy-cluster procedure, using the name custom-master3 , and run: USD oc create -f <filename> Add the Machine CR. Create a definition file as in the healthy-cluster procedure, using the name custom-master3 , and run: USD oc create -f <filename> Link BareMetalHost , Machine , and Node by running the link-machine-and-node.sh script: #!/bin/bash # Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. # This script will link Machine object and Node object. This is needed # in order to have IP address of the Node present in the status of the Machine. set -x set -e machine="USD1" node="USD2" if [ -z "USDmachine" -o -z "USDnode" ]; then echo "Usage: USD0 MACHINE NODE" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') oc proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH="http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name="USD1" url="USD2" timeout="USD3" shift 3 curl_opts="USD@" echo -n "Waiting for USDname to respond" start_time=USD(date +%s) until curl -g -X GET "USDurl" "USD{curl_opts[@]}" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n "." curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo "\nTimed out waiting for USDname" return 1 fi sleep 5 done echo " Success!" return 0 } wait_for_json oc_proxy "USD{HOST_PROXY_API_PATH}" 10 -H "Accept: application/json" -H "Content-Type: application/json" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo "USDmachine_data" | jq '.metadata.annotations["metal3.io/BareMetalHost"]' | cut -f2 -d/ | sed 's/"//g') if [ -z "USDhost" ]; then echo "Machine USDmachine is not linked to a host yet." 1>&2 exit 1 fi # The address structure on the host doesn't match the node, so extract # the values we want into separate variables so we can build the patch # we need. hostname=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "Hostname") | .address' | sed 's/"//g') ipaddr=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "InternalIP") | .address' | sed 's/"//g') host_patch=' { "status": { "hardware": { "hostname": "'USD{hostname}'", "nics": [ { "ip": "'USD{ipaddr}'", "mac": "00:00:00:00:00:00", "model": "unknown", "speedGbps": 10, "vlanId": 0, "pxe": true, "name": "eth1" } ], "systemVendor": { "manufacturer": "Red Hat", "productName": "product name", "serialNumber": "" }, "firmware": { "bios": { "date": "04/01/2014", "vendor": "SeaBIOS", "version": "1.11.0-2.el7" } }, "ramMebibytes": 0, "storage": [], "cpu": { "arch": "x86_64", "model": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz", "clockMegahertz": 2199.998, "count": 4, "flags": [] } } } } ' echo "PATCHING HOST" echo "USD{host_patch}" | jq .
curl -s \ -X PATCH \ USD{HOST_PROXY_API_PATH}/USD{host}/status \ -H "Content-type: application/merge-patch+json" \ -d "USD{host_patch}" oc get baremetalhost -n openshift-machine-api -o yaml "USD{host}" USD bash link-machine-and-node.sh custom-master3 worker-3 Confirm the etcd members by running the following command: USD oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table Example output +---------+-------+--------+--------------+--------------+-------+ | ID | STATUS| NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +---------+-------+--------+--------------+--------------+-------+ | 2c18942f|started|worker-3|192.168.111.26|192.168.111.26| false | | ead4f280|started|worker-5|192.168.111.28|192.168.111.28| false | | 79153c5a|started|worker-6|192.168.111.29|192.168.111.29| false | +---------+-------+--------+--------------+--------------+-------+ Confirm the etcd operator has configured all nodes: USD oc get clusteroperator etcd Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE etcd 4.11.5 True False False 22h Confirm etcd health: USD oc rsh -n openshift-etcd etcd-worker-3 etcdctl endpoint health Example output 192.168.111.26 is healthy: committed proposal: took = 9.105375ms 192.168.111.28 is healthy: committed proposal: took = 9.15205ms 192.168.111.29 is healthy: committed proposal: took = 10.277577ms Confirm the health of the nodes: USD oc get Nodes Example output NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 18h v1.24.0+3882f8f master-6 Ready master 40m v1.24.0+3882f8f Confirm the health of the ClusterOperators : USD oc get ClusterOperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.5 True False False 150m baremetal 4.11.5 True False False 22h cloud-controller-manager 4.11.5 True False False 22h cloud-credential 4.11.5 True False False 22h cluster-autoscaler 4.11.5 True False False 22h config-operator 4.11.5 True False False 22h console 4.11.5 True False False 145m csi-snapshot-controller 4.11.5 True False False 22h dns 4.11.5 True False False 22h etcd 4.11.5 True False False 22h image-registry 4.11.5 True False False 22h ingress 4.11.5 True False False 22h insights 4.11.5 True False False 22h kube-apiserver 4.11.5 True False False 22h kube-controller-manager 4.11.5 True False False 22h kube-scheduler 4.11.5 True False False 22h kube-storage-version-migrator 4.11.5 True False False 148m machine-api 4.11.5 True False False 22h machine-approver 4.11.5 True False False 22h machine-config 4.11.5 True False False 110m marketplace 4.11.5 True False False 22h monitoring 4.11.5 True False False 22h network 4.11.5 True False False 22h node-tuning 4.11.5 True False False 22h openshift-apiserver 4.11.5 True False False 163m openshift-controller-manager 4.11.5 True False False 22h openshift-samples 4.11.5 True False False 22h operator-lifecycle-manager 4.11.5 True False False 22h operator-lifecycle-manager-catalog 4.11.5 True False False 22h operator-lifecycle-manager-pkgsvr 4.11.5 True False False 22h service-ca 4.11.5 True False False 22h storage 4.11.5 True False False 22h Confirm the ClusterVersion : USD oc get ClusterVersion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 22h Cluster version is 4.11.5 12.8. Additional resources Installing a primary control plane node on a healthy cluster Authenticating with the REST API | [
"oc adm release info -o json | jq .metadata.metadata",
"{ \"release.openshift.io/architecture\": \"multi\" }",
"export API_URL=<api_url> 1",
"export CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')",
"export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDCLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": USDCLUSTER_ID, \"name\": \"<openshift_cluster_name>\" 2 }')",
"CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{API_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')",
"INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{API_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -r '.download_url'",
"https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12",
"curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'",
"2294ba03-c264-4f11-ac08-2f1bb2f8c296",
"HOST_ID=<host_id> 1",
"curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{API_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r",
"{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }",
"curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{API_TOKEN}\"",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'",
"{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }",
"curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'",
"{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.25.0 compute-1.example.com Ready worker 11m v1.25.0",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"<version-number>-multi\", 1 \"cpu_architecture\" : \"multi\" 2 \"high_availability_mode\": \"full\" 3 \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt --arg cluster_id USD{CLUSTER_ID} ' { \"name\": \"testcluster-infra-env\", \"image_type\":\"full-iso\", \"cluster_id\": USDcluster_id, \"cpu_architecture\" : \"x86_64\" \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"master\" } ' | jq",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.12\", \"cpu_architecture\" : \"arm64\" \"high_availability_mode\": \"full\" \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" } ' | jq",
"oc get nodes -o wide",
"oc get csr | grep Pending",
"csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 4h42m v1.24.0+3882f8f worker-1 Ready worker 4h29m v1.24.0+3882f8f master-2 Ready master 4h43m v1.24.0+3882f8f master-3 Ready master 4h27m v1.24.0+3882f8f worker-4 Ready worker 4h30m v1.24.0+3882f8f master-5 Ready master 105s v1.24.0+3882f8f",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: custom-master3 namespace: openshift-machine-api annotations: spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api",
"oc create -f <filename>",
"oc apply -f <filename>",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/custom-master3 finalizers: - machine.machine.openshift.io generation: 3 labels: machine.openshift.io/cluster-api-cluster: test-day2-1-6qv96 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: custom-master3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed",
"oc create -f <filename>",
"oc apply -f <filename>",
"#!/bin/bash Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. This script will link Machine object and Node object. This is needed in order to have IP address of the Node present in the status of the Machine. set -x set -e machine=\"USD1\" node=\"USD2\" if [ -z \"USDmachine\" -o -z \"USDnode\" ]; then echo \"Usage: USD0 MACHINE NODE\" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH=\"http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts\" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name=\"USD1\" url=\"USD2\" timeout=\"USD3\" shift 3 curl_opts=\"USD@\" echo -n \"Waiting for USDname to respond\" start_time=USD(date +%s) until curl -g -X GET \"USDurl\" \"USD{curl_opts[@]}\" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n \".\" curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo \"\\nTimed out waiting for USDname\" return 1 fi sleep 5 done echo \" Success!\" return 0 } wait_for_json oc_proxy \"USD{HOST_PROXY_API_PATH}\" 10 -H \"Accept: application/json\" -H \"Content-Type: application/json\" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo \"USDmachine_data\" | jq '.metadata.annotations[\"metal3.io/BareMetalHost\"]' | cut -f2 -d/ | sed 's/\"//g') if [ -z \"USDhost\" ]; then echo \"Machine USDmachine is not linked to a host yet.\" 1>&2 exit 1 fi The address structure on the host doesn't match the node, so extract the values we want into separate variables so we can build the patch we need. hostname=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"Hostname\") | .address' | sed 's/\"//g') ipaddr=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"InternalIP\") | .address' | sed 's/\"//g') host_patch=' { \"status\": { \"hardware\": { \"hostname\": \"'USD{hostname}'\", \"nics\": [ { \"ip\": \"'USD{ipaddr}'\", \"mac\": \"00:00:00:00:00:00\", \"model\": \"unknown\", \"speedGbps\": 10, \"vlanId\": 0, \"pxe\": true, \"name\": \"eth1\" } ], \"systemVendor\": { \"manufacturer\": \"Red Hat\", \"productName\": \"product name\", \"serialNumber\": \"\" }, \"firmware\": { \"bios\": { \"date\": \"04/01/2014\", \"vendor\": \"SeaBIOS\", \"version\": \"1.11.0-2.el7\" } }, \"ramMebibytes\": 0, \"storage\": [], \"cpu\": { \"arch\": \"x86_64\", \"model\": \"Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz\", \"clockMegahertz\": 2199.998, \"count\": 4, \"flags\": [] } } } } ' echo \"PATCHING HOST\" echo \"USD{host_patch}\" | jq . curl -s -X PATCH USD{HOST_PROXY_API_PATH}/USD{host}/status -H \"Content-type: application/merge-patch+json\" -d \"USD{host_patch}\" get baremetalhost -n openshift-machine-api -o yaml \"USD{host}\"",
"bash link-machine-and-node.sh custom-master3 worker-5",
"oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+",
"oc get clusteroperator etcd",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE etcd 4.11.5 True False False 5h54m",
"oc rsh -n openshift-etcd etcd-worker-0 etcdctl endpoint health",
"192.168.111.26 is healthy: committed proposal: took = 11.297561ms 192.168.111.25 is healthy: committed proposal: took = 13.892416ms 192.168.111.28 is healthy: committed proposal: took = 11.870755ms",
"oc get Nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 6h20m v1.24.0+3882f8f worker-1 Ready worker 6h7m v1.24.0+3882f8f master-2 Ready master 6h20m v1.24.0+3882f8f master-3 Ready master 6h4m v1.24.0+3882f8f worker-4 Ready worker 6h7m v1.24.0+3882f8f master-5 Ready master 99m v1.24.0+3882f8f",
"oc get ClusterOperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MSG authentication 4.11.5 True False False 5h57m baremetal 4.11.5 True False False 6h19m cloud-controller-manager 4.11.5 True False False 6h20m cloud-credential 4.11.5 True False False 6h23m cluster-autoscaler 4.11.5 True False False 6h18m config-operator 4.11.5 True False False 6h19m console 4.11.5 True False False 6h4m csi-snapshot-controller 4.11.5 True False False 6h19m dns 4.11.5 True False False 6h18m etcd 4.11.5 True False False 6h17m image-registry 4.11.5 True False False 6h7m ingress 4.11.5 True False False 6h6m insights 4.11.5 True False False 6h12m kube-apiserver 4.11.5 True False False 6h16m kube-controller-manager 4.11.5 True False False 6h16m kube-scheduler 4.11.5 True False False 6h16m kube-storage-version-migrator 4.11.5 True False False 6h19m machine-api 4.11.5 True False False 6h15m machine-approver 4.11.5 True False False 6h19m machine-config 4.11.5 True False False 6h18m marketplace 4.11.5 True False False 6h18m monitoring 4.11.5 True False False 6h4m network 4.11.5 True False False 6h20m node-tuning 4.11.5 True False False 6h18m openshift-apiserver 4.11.5 True False False 6h8m openshift-controller-manager 4.11.5 True False False 6h7m openshift-samples 4.11.5 True False False 6h12m operator-lifecycle-manager 4.11.5 True False False 6h18m operator-lifecycle-manager-catalog 4.11.5 True False False 6h19m operator-lifecycle-manager-pkgsvr 4.11.5 True False False 6h12m service-ca 4.11.5 True False False 6h19m storage 4.11.5 True False False 6h19m",
"oc get ClusterVersion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 5h57m Cluster version is 4.11.5",
"oc delete bmh -n openshift-machine-api custom-master3",
"oc get machine -A",
"NAMESPACE NAME PHASE AGE openshift-machine-api custom-master3 Running 14h openshift-machine-api test-day2-1-6qv96-master-0 Failed 20h openshift-machine-api test-day2-1-6qv96-master-1 Running 20h openshift-machine-api test-day2-1-6qv96-master-2 Running 20h openshift-machine-api test-day2-1-6qv96-worker-0-8w7vr Running 19h openshift-machine-api test-day2-1-6qv96-worker-0-rxddj Running 19h",
"oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-0 machine.machine.openshift.io \"test-day2-1-6qv96-master-0\" deleted",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION worker-1 Ready worker 19h v1.24.0+3882f8f master-2 Ready master 20h v1.24.0+3882f8f master-3 Ready master 19h v1.24.0+3882f8f worker-4 Ready worker 19h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f",
"oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf",
"E0927 07:53:10.597523 1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource",
"oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table; etcdctl endpoint health",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ 192.168.111.26 is healthy: committed proposal: took = 10.458132ms 192.168.111.25 is healthy: committed proposal: took = 11.047349ms 192.168.111.28 is healthy: committed proposal: took = 11.414402ms",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION worker-1 Ready worker 20h v1.24.0+3882f8f master-2 NotReady master 20h v1.24.0+3882f8f master-3 Ready master 20h v1.24.0+3882f8f worker-4 Ready worker 20h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f",
"oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf",
"E0927 08:24:23.983733 1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, worker-2 is unhealthy",
"oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+",
"etcdctl endpoint health",
"{\"level\":\"warn\",\"ts\":\"2022-09-27T08:25:35.953Z\",\"logger\":\"client\",\"caller\":\"v3/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000680380/192.168.111.25\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\\\"\"} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster",
"oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-2",
"oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf -f",
"I0927 08:58:41.249222 1 machinedeletionhooks.go:135] skip removing the deletion hook from machine test-day2-1-6qv96-master-2 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.26}]",
"oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+",
"etcdctl endpoint health",
"{\"level\":\"warn\",\"ts\":\"2022-09-27T10:31:07.227Z\",\"logger\":\"client\",\"caller\":\"v3/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc0000d6e00/192.168.111.25\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\\\"\"} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster",
"etcdctl member remove 61e2a86084aafa62",
"Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7",
"etcdctl member list -w table",
"+----------+---------+--------+--------------+--------------+-------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +----------+---------+--------+--------------+--------------+-------+ | 2c18942f | started |worker-3|192.168.111.26|192.168.111.26| false | | ead4f280 | started |worker-5|192.168.111.28|192.168.111.28| false | +----------+---------+--------+--------------+--------------+-------+",
"oc get csr | grep Pending",
"csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 17h v1.24.0+3882f8f master-6 Ready master 2m52s v1.24.0+3882f8f",
"oc create bmh -n openshift-machine-api custom-master3",
"oc create machine -n openshift-machine-api custom-master3",
"#!/bin/bash Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. This script will link Machine object and Node object. This is needed in order to have IP address of the Node present in the status of the Machine. set -x set -e machine=\"USD1\" node=\"USD2\" if [ -z \"USDmachine\" -o -z \"USDnode\" ]; then echo \"Usage: USD0 MACHINE NODE\" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH=\"http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts\" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name=\"USD1\" url=\"USD2\" timeout=\"USD3\" shift 3 curl_opts=\"USD@\" echo -n \"Waiting for USDname to respond\" start_time=USD(date +%s) until curl -g -X GET \"USDurl\" \"USD{curl_opts[@]}\" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n \".\" curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo \"\\nTimed out waiting for USDname\" return 1 fi sleep 5 done echo \" Success!\" return 0 } wait_for_json oc_proxy \"USD{HOST_PROXY_API_PATH}\" 10 -H \"Accept: application/json\" -H \"Content-Type: application/json\" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo \"USDmachine_data\" | jq '.metadata.annotations[\"metal3.io/BareMetalHost\"]' | cut -f2 -d/ | sed 's/\"//g') if [ -z \"USDhost\" ]; then echo \"Machine USDmachine is not linked to a host yet.\" 1>&2 exit 1 fi The address structure on the host doesn't match the node, so extract the values we want into separate variables so we can build the patch we need. hostname=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"Hostname\") | .address' | sed 's/\"//g') ipaddr=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"InternalIP\") | .address' | sed 's/\"//g') host_patch=' { \"status\": { \"hardware\": { \"hostname\": \"'USD{hostname}'\", \"nics\": [ { \"ip\": \"'USD{ipaddr}'\", \"mac\": \"00:00:00:00:00:00\", \"model\": \"unknown\", \"speedGbps\": 10, \"vlanId\": 0, \"pxe\": true, \"name\": \"eth1\" } ], \"systemVendor\": { \"manufacturer\": \"Red Hat\", \"productName\": \"product name\", \"serialNumber\": \"\" }, \"firmware\": { \"bios\": { \"date\": \"04/01/2014\", \"vendor\": \"SeaBIOS\", \"version\": \"1.11.0-2.el7\" } }, \"ramMebibytes\": 0, \"storage\": [], \"cpu\": { \"arch\": \"x86_64\", \"model\": \"Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz\", \"clockMegahertz\": 2199.998, \"count\": 4, \"flags\": [] } } } } ' echo \"PATCHING HOST\" echo \"USD{host_patch}\" | jq . curl -s -X PATCH USD{HOST_PROXY_API_PATH}/USD{host}/status -H \"Content-type: application/merge-patch+json\" -d \"USD{host_patch}\" get baremetalhost -n openshift-machine-api -o yaml \"USD{host}\"",
"bash link-machine-and-node.sh custom-master3 worker-3",
"oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table",
"+---------+-------+--------+--------------+--------------+-------+ | ID | STATUS| NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +---------+-------+--------+--------------+--------------+-------+ | 2c18942f|started|worker-3|192.168.111.26|192.168.111.26| false | | ead4f280|started|worker-5|192.168.111.28|192.168.111.28| false | | 79153c5a|started|worker-6|192.168.111.29|192.168.111.29| false | +---------+-------+--------+--------------+--------------+-------+",
"oc get clusteroperator etcd",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE etcd 4.11.5 True False False 22h",
"oc rsh -n openshift-etcd etcd-worker-3 etcdctl endpoint health",
"192.168.111.26 is healthy: committed proposal: took = 9.105375ms 192.168.111.28 is healthy: committed proposal: took = 9.15205ms 192.168.111.29 is healthy: committed proposal: took = 10.277577ms",
"oc get Nodes",
"NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 18h v1.24.0+3882f8f master-6 Ready master 40m v1.24.0+3882f8f",
"oc get ClusterOperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.5 True False False 150m baremetal 4.11.5 True False False 22h cloud-controller-manager 4.11.5 True False False 22h cloud-credential 4.11.5 True False False 22h cluster-autoscaler 4.11.5 True False False 22h config-operator 4.11.5 True False False 22h console 4.11.5 True False False 145m csi-snapshot-controller 4.11.5 True False False 22h dns 4.11.5 True False False 22h etcd 4.11.5 True False False 22h image-registry 4.11.5 True False False 22h ingress 4.11.5 True False False 22h insights 4.11.5 True False False 22h kube-apiserver 4.11.5 True False False 22h kube-controller-manager 4.11.5 True False False 22h kube-scheduler 4.11.5 True False False 22h kube-storage-version-migrator 4.11.5 True False False 148m machine-api 4.11.5 True False False 22h machine-approver 4.11.5 True False False 22h machine-config 4.11.5 True False False 110m marketplace 4.11.5 True False False 22h monitoring 4.11.5 True False False 22h network 4.11.5 True False False 22h node-tuning 4.11.5 True False False 22h openshift-apiserver 4.11.5 True False False 163m openshift-controller-manager 4.11.5 True False False 22h openshift-samples 4.11.5 True False False 22h operator-lifecycle-manager 4.11.5 True False False 22h operator-lifecycle-manager-catalog 4.11.5 True False False 22h operator-lifecycle-manager-pkgsvr 4.11.5 True False False 22h service-ca 4.11.5 True False False 22h storage 4.11.5 True False False 22h",
"oc get ClusterVersion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 22h Cluster version is 4.11.5"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/assisted_installer_for_openshift_container_platform/expanding-the-cluster |
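The etcdctl endpoint health commands above are one-shot checks. When scripting a control plane replacement end to end, it can be convenient to poll until the cluster reports healthy before continuing with the next step. The following sketch is illustrative and is not part of the documented procedure; it assumes the etcd-worker-2 pod name and the openshift-etcd namespace used in the examples above, and it simply retries the same health check shown there.

#!/bin/bash
# Poll etcd health from a running etcd pod until every endpoint reports
# healthy, or give up after roughly five minutes.
set -euo pipefail

pod="etcd-worker-2"          # adjust to an etcd pod that exists in your cluster
for attempt in $(seq 1 30); do
    if oc rsh -n openshift-etcd "${pod}" etcdctl endpoint health; then
        echo "All etcd endpoints are healthy."
        exit 0
    fi
    echo "etcd is not healthy yet (attempt ${attempt}/30); retrying in 10 seconds..."
    sleep 10
done
echo "Timed out waiting for etcd to become healthy." >&2
exit 1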
7.131. lm_sensors | 7.131. lm_sensors 7.131.1. RHBA-2012:1309 - lm_sensors bug fixes Updated lm_sensors packages that fix three bugs are now available for Red Hat Enterprise Linux 6. The lm_sensors packages provide a set of modules for general SMBus access and hardware monitoring. Bug Fixes BZ# 610000 , BZ#623587 Prior to this update, the sensors-detect script did not detect all GenuineIntel CPUs. As a consequence, lm_sensors did not load coretemp module automatically. This update uses a more generic detection for Intel CPUs. Now, the coretemp module is loaded as expected. BZ#768365 Prior to this update, the sensors-detect script reported an error when running without user-defined input. This behavior had no impact on the function but could confuse users. This update modifies the underlying code to allow for the sensors-detect script to run without user. All users of lm_sensors are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/lm_sensors |
Chapter 13. Troubleshooting builds | Chapter 13. Troubleshooting builds Use the following to troubleshoot build issues. 13.1. Resolving denial for access to resources If your request for access to resources is denied: Issue A build fails with: requested access to the resource is denied Resolution You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use: $ oc describe quota 13.2. Service certificate generation failure Issue Service certificate generation fails, and the service's service.beta.openshift.io/serving-cert-generation-error annotation contains the following: Example output secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 Resolution The service that generated the certificate no longer exists, or has a different serviceUID. You must force certificate regeneration by removing the old secret, and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num: $ oc delete secret <secret_name> $ oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- $ oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- | [
"requested access to the resource is denied",
"oc describe quota",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/builds/troubleshooting-builds_build-configuration |
Serverless | Serverless OpenShift Container Platform 4.15 Create and deploy serverless, event-driven applications using OpenShift Serverless Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/serverless/index |
Chapter 83. Master | Chapter 83. Master Only consumer is supported The Camel-Master endpoint provides a way to ensure only a single consumer in a cluster consumes from a given endpoint; with automatic failover if that JVM dies. This can be very useful if you need to consume from some legacy back end which either doesn't support concurrent consumption or due to commercial or stability reasons you can only have a single connection at any point in time. 83.1. Dependencies When using master with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-master-starter</artifactId> </dependency> 83.2. Using the master endpoint Just prefix any camel endpoint with master:someName: where someName is a logical name and is used to acquire the master lock. e.g. from("master:cheese:jms:foo").to("activemq:wine"); In this example, there master component ensures that the route is only active in one node, at any given time, in the cluster. So if there are 8 nodes in the cluster, then the master component will elect one route to be the leader, and only this route will be active, and hence only this route will consume messages from jms:foo . In case this route is stopped or unexpected terminated, then the master component will detect this, and re-elect another node to be active, which will then become active and start consuming messages from jms:foo . Note Apache ActiveMQ 5.x has such feature out of the box called Exclusive Consumers . 83.3. URI format Where endpoint is any Camel endpoint you want to run in master/slave mode. 83.4. Configuring Options Camel components are configured on two levels: Component level Endpoint level 83.4.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 83.4.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 83.5. Component Options The Master component supports 4 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean service (advanced) Inject the service to use. CamelClusterService serviceSelector (advanced) Inject the service selector used to lookup the CamelClusterService to use. Selector 83.6. Endpoint Options The Master endpoint is configured using URI syntax: with the following path and query parameters: 83.6.1. Path Parameters (2 parameters) Name Description Default Type namespace (consumer) Required The name of the cluster namespace to use. String delegateUri (consumer) Required The endpoint uri to use in master/slave mode. String 83.6.2. Query Parameters (3 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern 83.7. Example You can protect a clustered Camel application to only consume files from one active node. // the file endpoint we want to consume from String url = "file:target/inbox?delete=true"; // use the camel master component in the clustered group named myGroup // to run a master/slave mode in the following Camel url from("master:myGroup:" + url) .log(name + " - Received file: USD{file:name}") .delay(delay) .log(name + " - Done file: USD{file:name}") .to("file:target/outbox"); The master component leverages CamelClusterService you can configure using Java ZooKeeperClusterService service = new ZooKeeperClusterService(); service.setId("camel-node-1"); service.setNodes("myzk:2181"); service.setBasePath("/camel/cluster"); context.addService(service) Xml (Spring/Blueprint) <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <bean id="cluster" class="org.apache.camel.component.zookeeper.cluster.ZooKeeperClusterService"> <property name="id" value="camel-node-1"/> <property name="basePath" value="/camel/cluster"/> <property name="nodes" value="myzk:2181"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring" autoStartup="false"> ... 
</camelContext> </beans> Spring boot camel.component.zookeeper.cluster.service.enabled = true camel.component.zookeeper.cluster.service.id = camel-node-1 camel.component.zookeeper.cluster.service.base-path = /camel/cluster camel.component.zookeeper.cluster.service.nodes = myzk:2181 83.8. Implementations Camel provides the following ClusterService implementations: camel-consul camel-file camel-infinispan camel-jgroups-raft camel-jgroups camel-kubernetes camel-zookeeper 83.9. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.component.master.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.master.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.master.enabled Whether to enable auto configuration of the master component. This is enabled by default. Boolean camel.component.master.service Inject the service to use. The option is a org.apache.camel.cluster.CamelClusterService type. CamelClusterService camel.component.master.service-selector Inject the service selector used to lookup the CamelClusterService to use. The option is a org.apache.camel.cluster.CamelClusterService.Selector type. CamelClusterServiceUSDSelector | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-master-starter</artifactId> </dependency>",
"from(\"master:cheese:jms:foo\").to(\"activemq:wine\");",
"master:namespace:endpoint[?options]",
"master:namespace:delegateUri",
"// the file endpoint we want to consume from String url = \"file:target/inbox?delete=true\"; // use the camel master component in the clustered group named myGroup // to run a master/slave mode in the following Camel url from(\"master:myGroup:\" + url) .log(name + \" - Received file: USD{file:name}\") .delay(delay) .log(name + \" - Done file: USD{file:name}\") .to(\"file:target/outbox\");",
"ZooKeeperClusterService service = new ZooKeeperClusterService(); service.setId(\"camel-node-1\"); service.setNodes(\"myzk:2181\"); service.setBasePath(\"/camel/cluster\"); context.addService(service)",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <bean id=\"cluster\" class=\"org.apache.camel.component.zookeeper.cluster.ZooKeeperClusterService\"> <property name=\"id\" value=\"camel-node-1\"/> <property name=\"basePath\" value=\"/camel/cluster\"/> <property name=\"nodes\" value=\"myzk:2181\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\" autoStartup=\"false\"> </camelContext> </beans>",
"camel.component.zookeeper.cluster.service.enabled = true camel.component.zookeeper.cluster.service.id = camel-node-1 camel.component.zookeeper.cluster.service.base-path = /camel/cluster camel.component.zookeeper.cluster.service.nodes = myzk:2181"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-master-component-starter |
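The Java, XML, and Spring Boot fragments in the example above configure the cluster service and the route separately. The following standalone program is an illustrative sketch only: it combines those same pieces (the ZooKeeperClusterService settings and a master:myGroup file route) into a single plain-Java main method. The ZooKeeper address myzk:2181, the base path, and the node id are the placeholder values from the example above and are assumptions to replace with your own.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.component.zookeeper.cluster.ZooKeeperClusterService;

public class MasterExample {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();

        // Cluster service used by the master component for leader election
        ZooKeeperClusterService service = new ZooKeeperClusterService();
        service.setId("camel-node-1");
        service.setNodes("myzk:2181");
        service.setBasePath("/camel/cluster");
        context.addService(service);

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Only the elected leader in "myGroup" consumes from the inbox
                from("master:myGroup:file:target/inbox?delete=true")
                    .log("Received file: ${file:name}")
                    .to("file:target/outbox");
            }
        });

        context.start();
        Thread.sleep(60_000);
        context.stop();
    }
}

On Spring Boot the same result is achieved with the starter dependency and the application properties shown above, so a main method like this is mainly useful for quick local experiments.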
Chapter 4. Verifying the Metrics Store installation | Chapter 4. Verifying the Metrics Store installation Verify the Metrics Store installation using the Kibana console. You can view the collected logs and create data visualizations. Procedure Log in to the Kibana console using the URL ( https://kibana.example.com ) that you recorded in Section 2.3, "Deploying Metrics Store services on Red Hat OpenShift". Use the default admin user, and the password you defined during the Metrics Store installation. Optionally, you can access the Red Hat OpenShift Container Platform portal at https://example.com:8443 (using the same admin user credentials). Select the Discover tab, and check that you can view the project.ovirt-logs-ovirt_env_name-uuid index. See the Discover section in the Kibana User Guide for information about working with logs. Select the Visualize tab, where you can create data visualizations for the project.ovirt-metrics-ovirt_env_name-uuid and the project.ovirt-logs-ovirt_env_name-uuid indexes. The Metrics Store User Guide describes the available parameters. See the Visualize section of the Kibana User Guide for information about visualizing logs and metrics. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/metrics_store_installation_guide/Verifying_the_metrics_store_installation
Chapter 34. Storage | Chapter 34. Storage lvconvert --repair now works properly on cache logical volumes Due to a regression in the lvm2-2.02.166-1.el package, released in Red Hat Enterprise Linux 7.3, the lvconvert --repair command could not be run properly on cache logical volumes. As a consequence, the Cannot convert internal LV error occurred. The underlying source code has been modified to fix this bug, and lvconvert --repair now works as expected. (BZ# 1380532 ) LVM2 library incompatibilities no longer cause device monitoring to fail and be lost during an upgrade Due to a bug in the lvm2-2.02.166-1.el package, released in Red Hat Enterprise Linux 7.3, the library was incompatible with earlier versions of Red Hat Enterprise Linux 7. The incompatibility could cause device monitoring to fail and be lost during an upgrade. As a consequence, device failures could go unnoticed (RAID), or out-of-space conditions were not handled properly (thin-p). This update fixes the incompatibility, and the logical volume monitoring now works as expected. (BZ# 1382688 ) be2iscsi driver errors no longer cause the system to become unresponsive Previously, the operating system sometimes became unresponsive due to be2iscsi driver errors. This update fixes be2iscsi , and the operating system no longer hangs due to be2iscsi errors. (BZ#1324918) Interaction problems no longer occur with the lvmetad daemon when mirror segment type is used Previously, when the legacy mirror segment type was used to create mirrored logical volumes with 3 or more legs, interaction problems could occur with the lvmetad daemon. Problems observed occurred only after a second device failure, when mirror fault policies were set to the non-default allocate option, when lvmetad was used, and there had been no reboot of the machine between device failure events. This bug has been fixed, and the described interaction problems no longer occur. (BZ# 1380521 ) The multipathd daemon no longer shows incorrect error messages for blacklisted devices Previously, the multipathd daemon showed incorrect error messages that it could not find devices that were blacklisted, causing users to see error messages when nothing was wrong. With this fix, multipath checks if a device has been blacklisted before issuing an error message. (BZ#1403552) Multipath now flags device reloads when there are no usable paths Previously, when the last path device to a multipath device was removed, the state of lvmetad became incorrect, and lvm devices on top of multipath could stop working correctly. This was because there was no way for the device-mapper to know how many working paths there were when a multipath device got reloaded. Because of this, the multipath udev rules that dealt with disabling scanning and other dm rules only worked when multipath devices lost their last usable path because it failed, instead of because of a device table reload. With this fix, multipath now flags device reloads when there are no usable paths, and the multipath udev rules now correctly disable scanning and other dm rules whenever a multipath device loses its last usable path. As a result, the state of lvmetad remains correct, and LVM devices on top of multipath continue working correctly. 
(BZ#1239173) Read requests sent after failed writes will always return the same data on multipath devices Previously, if a write request wass hung in the rbd module, and the iSCSI initiator and multipath layer decided to fail the request to the application, read requests sent after the failure may not have reflected the state of the write. This was because When a Ceph rbd image is exported through multiple iSCSI targets, the rbd kernel module will grab the exclusive lock when it receives a write request. With this fix, The rbd module will grab the exclusive lock for both reads and writes. This will cause hung writes to be flushed and or failed before executing reads. As a result, read requests sent after failed writes will always return the same data. (BZ# 1380602 ) When a path device in a multipath device switches to read-only, the multipath device will be reloaded read-only Previously, when reloading a multipath device, the multipath code always tried to reload the device read-write first, and then failed back to read-only. If a path device had already been opened read-write in the kernel, it would continue to be opened read-write even if the device had switched to read-only mode and the read-write reload would succeed. Consequently, when path devices switched to from read-write to read-only, the multipath device would still be read-write (even though all writes to the read-only device would fail). With this fix, multipathd now reloads the multipath device read-only when it gets a uevent showing that a path devices has become read-only. As a result, when a path device in a multipath device switches to read-only, the multipath device will be reloaded read-only. (BZ#1431562) Users no longer get potentially confusing stale data for multipath devices that are not being checked Previously, when a path device is orphaned (not a member of a multipath device), the device state and checker state displayed with the show paths command in the state of the device before it was orphaned. As a result, the show paths command showed out of date information on devices that was no longer checking. With this fix, the show paths command now displays undef as the checker state and unknown as the device state of orphaned paths and users no longer get potentially confusing stale data for devices that are not being checked. (BZ# 1402092 ) The multipathd daemon no longer hangs as a result of running the prioritizer on failed paths Previously, multipathd was running the prioritizer on paths that had failed in some cases. Because of this, if multipathd was configured with a synchronous prioritizer, it could hang trying to run the prioritizer on a failed path. With this fix, multipathd no longer runs the prioritizer when a path has failed and it no longer fails for this reason. (BZ#1362120) New RAID4 volumes, and existing RAID4 or RAID10 logical volumes after a system upgrade are now correctly activated After creating RAID4 logical volumes on Red Hat Enterprise Linux version 7.3, or after upgrading a system that has existing RAID4 or RAID10 logical volumes to version 7.3, the system sometimes failed to activate these volumes. With this update, the system activates these volumes successfully. (BZ# 1386184 ) LVM tools no longer crash due to an incorrect status of PVs When LVM observes particular types of inconsistencies between the metadata on Physical Volumes (PVs) in a Volume Group (VG), LVM can automatically repair them. 
Such inconsistencies happen, for example, if a VG is changed while some of its PVs are temporarily invisible to the system and then the PVs reappear. Prior to this update, when such a repair operation was performed, all the PVs were sometimes temporarily considered to have returned even if this was not the case. As a consequence, LVM tools sometimes terminated unexpectedly with a segmentation fault. With this update, the described problem no longer occurs. (BZ# 1434054 ) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/bug_fixes_storage |
Chapter 2. Memory management on RHEL for Real Time | Chapter 2. Memory management on RHEL for Real Time Real-time systems use a virtual memory system, where an address referenced by a user-space application translates into a physical address. The translation occurs through a combination of page tables and address translation hardware in the underlying computing system. An advantage of having the translation mechanism in between a program and the actual memory is that the operating system can swap pages when required or on request of the CPU. In real-time, to swap pages from the storage to the primary memory, the previously used page table entry is marked as invalid. As a result, even under normal memory pressure, the operating system can retrieve pages from one application and give them to another. This can cause unpredictable system behaviors. Memory allocation implementations include demand paging mechanism and memory lock ( mlock() ) system calls. Note Sharing data information about CPUs in different cache and NUMA domains might cause traffic problems and bottlenecks. When writing a multithreaded application, consider the machine topology before designing the data decomposition. Topology is the memory hierarchy, and includes CPU caches and the Non-Uniform Memory Access (NUMA) node. 2.1. Demand paging Demand paging is similar to a paging system with page swapping. The system loads pages that are stored in the secondary memory when required or on CPU demand. All memory addresses generated by a program pass through an address translation mechanism in the processor. The addresses then convert from a process-specific virtual address to a physical memory address. This is referred to as virtual memory. The two main components in the translation mechanism are page tables and translation lookaside buffers (TLBs) Page tables Page tables are multi-level tables in physical memory that contain mappings for virtual to physical memory. These mappings are readable by the virtual memory translation hardware in the processor. A page table entry with an assigned physical address, is referred to as the resident working set. When the operating system needs to free memory for other processes, it can swap pages from the resident working set. When swapping pages, any reference to a virtual address within that page creates a page fault and causes page reallocation. When the system is extremely low on physical memory, the swap process starts to thrash, which constantly steals pages from processes, and never allows a process to complete. You can monitor the virtual memory statistics by looking for the pgfault value in the /proc/vmstat file. Translation lookaside buffers Translation Lookaside Buffers (TLBs) are hardware caches of virtual memory translations. Any processor core with a TLB checks the TLB in parallel with initiating a memory read of a page table entry. If the TLB entry for a virtual address is valid, the memory read is aborted and the value in the TLB is used for the address translation. A TLB operates on the principle of locality of reference. This means that if code stays in one region of memory for a significant period of time (such as loops or call-related functions) then the TLB references avoid the main memory for address translations. This can significantly speed up processing times. When writing deterministic and fast code, use functions that maintain locality of reference. This can mean using loops rather than recursion. 
If recursion are not avoidable, place the recursion call at the end of the function. This is called tail-recursion, which makes the code work in a relatively small region of memory and avoids calling table translations from the main memory. 2.2. Major and minor page faults RHEL for Real Time allocates memory by breaking physical memory into chunks called pages and then maps them to the virtual memory. Faults in real-time occur when a process needs a specific page that is not mapped or is no longer available in the memory. Hence faults essentially mean unavailability of pages when required by a CPU. When a process encounters a page fault, all threads freeze until the kernel handles this fault. There are several ways to address this problem, but the best solution can be to adjust the source code to avoid page faults. Minor page faults Minor page faults in real-time occur when a process attempts to access a portion of memory before it has been initialized. In such scenarios, the system performs operations to fill the memory maps or other management structures. The severity of a minor page fault can depend on the system load and other factors, but they are usually short and have a negligible impact. Major page faults Real-time major faults occur when the system has to either synchronize memory buffers with the disk, swap memory pages belonging to other processes, or undertake any other input output (I/O) activity to free memory. This occurs when the processor references a virtual memory address that does not have a physical page allocated to it. The reference to an empty page causes the processor to execute a fault, and instructs the kernel code to allocate a page which increases latency dramatically. In real-time, when an application shows a performance drop, it is beneficial to check for the process information related to page faults in the /proc/ directory. For a specific process identifier (PID), using the cat command, you can view the /proc/PID/stat file for following relevant entries: Field 2: the executable file name. Field 10: the number of minor page faults. Field 12: the number of major page faults. The following example demonstrates viewing page faults with the cat command and a pipe function to return only the second, tenth, and twelfth lines of the /proc/PID/stat file: In the example output, the process with PID 3366 is bash and it has 5389 minor page faults and no major page faults. Additional resources Linux System Programming by Robert Love 2.3. mlock() system calls The memory lock ( mlock() ) system calls enable calling processes to lock or unlock a specified range of the address space and prevents Linux from paging the locked memory to swap space. After you allocate the physical page to the page table entry, references to that page are relatively fast. The memory lock system calls fall into mlock() and munlock() categories. The mlock() and munlock() system calls lock and unlock a specified range of process address pages. When successful, the pages in the specified range remain resident in the memory until the munlock() system call unlocks the pages. The mlock() and munlock() system calls take following parameters: addr : specifies the start of an address range. len : specifies the length of the address space in bytes. When successful, mlock() and munlock() system calls return 0. In case of an error, they return -1 and set a errno to indicate the error. The mlockall() and munlockall() system calls locks or unlocks all of the program space. 
Note The mlock() system call does not ensure that the program will not have a page I/O. It ensures that the data stays in the memory but cannot ensure it stays on the same page. Other functions such as move_pages and memory compactors can move data around regardless of the use of mlock() . Memory locks are made on a page basis and do not stack. If two dynamically allocated memory segments share the same page locked twice by mlock() or mlockall() , they unlock by using a single munlock() or munlockall() system call. As such, it is important to be aware of the pages that the application unlocks to avoid double-locking or single-unlocking problems. The following are two most common workarounds to mitigate double-lock or single-unlock problems: Tracking the allocated and locked memory areas and creating a wrapper function that verifies the number of page allocations before unlocking a page. This is the resource counting principle used in device drivers. Making memory allocations based on page size and alignment to avoid double-lock on a page. Additional resources capabilities(7) , mmap(2) , and move_pages(2) man pages on your system mlock(2) , mlock(3) , and mlockall(2) man pages on your system posix_memalign(3) and posix_memalign(3p) man pages on your system 2.4. Shared libraries The RHEL for Real Time shared libraries are called dynamic shared objects (DSO) and are a collection of pre-compiled code blocks called functions. These functions are reusable in multiple programs and they load at run-time or compile time. Linux supports the following two library classes: Dynamic or shared libraries: exists as separate files outside of the executable file. These files load into the memory and get mapped at run-time. Static libraries: are files linked to a program statically at compile time. The ld.so dynamic linker loads the shared libraries required by a program and then executes the code. The DSO functions load the libraries in the memory once and multiple processes can then reference the objects by mapping into the address space of processes. You can configure the dynamic libraries to load at compile time using the LD_BIND_NOW variable. Evaluating symbols before program initialization can improve performance because evaluating at application run-time can cause latency if the memory pages are located on an external disk. Additional resources ld.so(8) man page on your system 2.5. Shared memory In RHEL for Real Time, shared memory is a memory space shared between multiple processes. Using program threads, all threads created in one process context can share the same address space. This makes all data structures accessible to threads. With POSIX shared memory calls, you can configure processes to share a part of the address space. You can use the following supported POSIX shared memory calls: shm_open() : creates and opens a new or opens an existing POSIX shared memory object. shm_unlink() : unlinks POSIX shared memory objects. mmap() : creates a new mapping in the virtual address space of the calling process. Note The mechanism for sharing a memory region between two processes using System V IPC shmem() set of calls is deprecated and is no longer supported on RHEL for Real Time. Additional resources shm_open(3) , shm_overview(7) , and mmap(2) man pages on your system | [
"cat /proc/3366/stat | cut -d\\ -f2,10,12 (bash) 5389 0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/understanding_rhel_for_real_time/assembly_memory-management-on-rhel-for-real-time-_understanding-rhel-for-real-time-core-concepts |
Chapter 7. Fixed issues | Chapter 7. Fixed issues This section provides information about fixed issues in Ansible Automation Platform 2.5. 7.1. Ansible Automation Platform The installer now ensures semanage command is available when SELinux is enabled. (AAP-24396) The installer can now update certificates without attempting to start the nginx service for previously installed environments. (AAP-19948) Event-Driven Ansible installation now fails when the pre-existing automation controller is older than version 4.4.0. (AAP-18572) Event-Driven Ansible can now successfully install on its own with a controller URL when the controller is not in the inventory. (AAP-16483) Postgres tasks that create users in FIPS environments now use scram-sha-256 . (AAP-16456) The installer now successfully generates a new SECRET_KEY for controller. (AAP-15513) Ensure all backup and restore staged files and directories are cleaned up before running a backup or restore. You must also mark the files for deletion after a backup or restore. (AAP-14986) Postgres certificates are now temporarily copied when checking the Postgres version for SSL mode verify-full. (AAP-14732) The setup script now warns if the provided log path does not have write permissions, and fails if default path does not have write permissions. (AAP-14135) The linger configuration is now correctly set by the root user for the Event-Driven Ansible user. (AAP-13744) Subject alternative names for component hosts will now only be checked for signing certificates when HTTPS is enabled. (AAP-7737) The UI for creating and editing an organization now validates the Max hosts value. This value must be an integer and have a value between 0 and 214748364. (AAP-23270) Installations that do not include the automation controller but have an external database will no longer install an unused internal Postgres server. (AAP-29798) Added default port values for all pg_port variables in the installer. (AAP-18484) XDG_RUNTIME_DIR is now defined when applying Event-Driven Ansible linger settings for Podman. (AAP-18341)* Fixed an issue where the restore process failed to stop pulpcore-worker services on RHEL 9. (AAP-12829) Fixed Postgres sslmode for verify-full that affected external Postgres and Postgres signed for 127.0.0.1 for internally managed Postgres. (AAP-7107) Fixed support for automation hub content signing. (AAP-9739) Fixed conditional code statements to align with changes from ansible-core issue #82295. (AAP-19053) Resolved an issue where providing the database installation with a custom port broke the installation of Postgres. (AAP-30636) 7.2. Automation hub Automation hub now uses system crypto-policies in nginx. (AAP-17775) 7.3. Event-Driven Ansible Fixed a bug where the Swagger API docs URL returned 404 error with trailing slash. (AAP-27417) Fixed a bug where logs contained stack trace errors inappropriately. (AAP-23605) Fixed a bug where the API returned error 500 instead of error 400 when a foreign key ID did not exist. (AAP-23105) Fix a bug where the Git hash of a project could be empty. (AAP-21641) Fixed a bug where an activation could fail at the start time due to authentication errors with Podman. (AAP-21067) Fixed a bug where a project could not get imported if it contained a malformed rulebook. (AAP-20868) Added EDA_CSRF_TRUSTED_ORIGINS , which can be set by user input or defined based on the allowed hostnames provided or determined by the installer as a default. 
(AAP-19319) Redirected all Event-Driven Ansible traffic to /eda/ following UI changes that require the redirect. (AAP-18989) Fixed target database for Event-Driven automation restore from backup. (AAP-17918) Fixed the automation controller URL check when installing Event-Driven Ansible without a controller. (AAP-17249) Fixed a bug when the membership operator failed in a condition applied to a previously saved event. (AAP-16663) Fixed Event-Driven Ansible nginx configuration for custom HTTPS port. (AAP-16000) Instead of the target service only, all Event-Driven Ansible services are enabled after installation is completed. The Event-Driven Ansible services will always start after the setup is complete. (AAP-15889) 7.4. Ansible Automation Platform Operator Fixed Django REST Framework (DRF) browsable views. (AAP-25508) | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/release_notes/aap-2.5-fixed-issues |
Chapter 8. Setting up the test environment for non-containerized application testing | Chapter 8. Setting up the test environment for non-containerized application testing The first step towards certifying your product is setting up the environment where you can run the tests. The test environment consists of a system in which all the certification tests are run. 8.1. Setting up a system that acts as a system under test A system on which the product that needs certification is installed or configured is referred to as the system under test (SUT). Prerequisites The SUT has RHEL version 8 or 9 installed. For convenience, Red Hat provides kickstart files to install the SUT's operating system. Follow the instructions in the file that is appropriate for your system before launching the installation process. Procedure Configure the Red Hat Certification repository: Use your RHN credentials to register your system using Red Hat Subscription Management: Display the list of available subscriptions for your system: Search for the subscription which provides the Red Hat Certification (for RHEL Server) repository and make a note of the subscription and its Pool ID. Attach the subscription to your system: Replace the pool_ID with the Pool ID of the subscription. Note You don't have to attach the subscription to your system, if you enable the option Simple content access for Red Hat Subscription Management . For more details, see How do I enable Simple Content Access for Red Hat Subscription Management? Subscribe to the Red Hat Certification channel: On RHEL 8: Replace HOSTTYPE with the system architecture. To find out the system architecture, run Example: On RHEL 9: Replace HOSTTYPE with the system architecture. To find out the system architecture, run Example: Install the software test suite package: | [
"subscription-manager register",
"subscription-manager list --available*",
"subscription-manager attach --pool= <pool_ID >",
"subscription-manager repos --enable=cert-1-for-rhel-8- <HOSTTYPE> -rpms",
"uname -m",
"subscription-manager repos --enable=cert-1-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=cert-1-for-rhel-9- <HOSTTYPE> -rpms",
"uname -m",
"subscription-manager repos --enable=cert-1-for-rhel-9-x86_64-rpms",
"dnf install redhat-certification-software"
] | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/assembly_setting-up-the-test-environment-for-non-containerized-application-testing_openshift-sw-cert-workflow-onboarding-certification-partners |
Chapter 1. Introduction to Automation execution environments | Chapter 1. Introduction to Automation execution environments Using Ansible content that depends on non-default dependencies can be complicated because the packages must be installed on each node, interact with other software installed on the host system, and be kept in sync. Automation execution environments help simplify this process and can easily be created with Ansible Builder. 1.1. About automation execution environments All automation in Red Hat Ansible Automation Platform runs on container images called automation execution environments. Automation execution environments create a common language for communicating automation dependencies, and offer a standard way to build and distribute the automation environment. An automation execution environment should contain the following: Ansible Core 2.16 or later Python 3.10 or later Ansible Runner Ansible content collections and their dependencies System dependencies 1.1.1. Why use automation execution environments? With automation execution environments, Red Hat Ansible Automation Platform has transitioned to a distributed architecture by separating the control plane from the execution plane. Keeping automation execution independent of the control plane results in faster development cycles and improves scalability, reliability, and portability across environments. Red Hat Ansible Automation Platform also includes access to Ansible content tools, making it easy to build and manage automation execution environments. In addition to speed, portability, and flexibility, automation execution environments provide the following benefits: They ensure that automation runs consistently across multiple platforms and make it possible to incorporate system-level dependencies and collection-based content. They give Red Hat Ansible Automation Platform administrators the ability to provide and manage automation environments to meet the needs of different teams. They allow automation to be easily scaled and shared between teams by providing a standard way of building and distributing the automation environment. They enable automation teams to define, build, and update their automation environments themselves. Automation execution environments provide a common language to communicate automation dependencies. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/creating_and_using_execution_environments/assembly-intro-to-builder |
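An automation execution environment with the contents listed above is typically described in an execution-environment.yml file consumed by Ansible Builder. The following definition is an illustrative sketch only: the base image reference and the collection, Python, and system package names are placeholder assumptions rather than required values, and should be replaced with the content your automation actually depends on.

# execution-environment.yml (version 3 format used by ansible-builder)
version: 3

images:
  base_image:
    # placeholder; use the minimal execution environment image available to you
    name: registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel9:latest

dependencies:
  ansible_core:
    package_pip: ansible-core>=2.16
  ansible_runner:
    package_pip: ansible-runner
  galaxy:
    collections:
      - name: ansible.posix
      - name: community.general
  python:
    - requests
  system:
    - git [platform:rpm]

A typical build command for a definition like this would be ansible-builder build --tag my_ee, after which the resulting image can be pushed to automation hub or another container registry for teams to share.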
Appendix B. Kickstart commands and options reference | Appendix B. Kickstart commands and options reference This reference is a complete list of all Kickstart commands supported by the Red Hat Enterprise Linux installation program. The commands are sorted alphabetically in a few broad categories. If a command can fall under multiple categories, it is listed in all of them. B.1. Kickstart changes The following sections describe the changes in Kickstart commands and options in Red Hat Enterprise Linux 8. auth or authconfig is deprecated in RHEL 8 The auth or authconfig Kickstart command is deprecated in Red Hat Enterprise Linux 8 because the authconfig tool and package have been removed. Similarly to authconfig commands issued on command line, authconfig commands in Kickstart scripts now use the authselect-compat tool to run the new authselect tool. For a description of this compatibility layer and its known issues, see the manual page authselect-migration(7) . The installation program will automatically detect use of the deprecated commands and install on the system the authselect-compat package to provide the compatibility layer. Kickstart no longer supports Btrfs The Btrfs file system is not supported from Red Hat Enterprise Linux 8. As a result, the Graphical User Interface (GUI) and the Kickstart commands no longer support Btrfs. Using Kickstart files from RHEL releases If you are using Kickstart files from RHEL releases, see the Repositories section of the Considerations in adopting RHEL 8 document for more information about the Red Hat Enterprise Linux 8 BaseOS and AppStream repositories. B.1.1. Deprecated Kickstart commands and options The following Kickstart commands and options have been deprecated in Red Hat Enterprise Linux 8. Where only specific options are listed, the base command and its other options are still available and not deprecated. auth or authconfig - use authselect instead device deviceprobe dmraid install - use the subcommands or methods directly as commands multipath bootloader --upgrade ignoredisk --interactive partition --active reboot --kexec syspurpose - use subscription-manager syspurpose instead Except the auth or authconfig command, using the commands in Kickstart files prints a warning in the logs. You can turn the deprecated command warnings into errors with the inst.ksstrict boot option, except for the auth or authconfig command. B.1.2. Removed Kickstart commands and options The following Kickstart commands and options have been completely removed in Red Hat Enterprise Linux 8. Using them in Kickstart files will cause an error. device deviceprobe dmraid install - use the subcommands or methods directly as commands multipath bootloader --upgrade ignoredisk --interactive partition --active harddrive --biospart upgrade (This command had already previously been deprecated.) btrfs part/partition btrfs part --fstype btrfs or partition --fstype btrfs logvol --fstype btrfs raid --fstype btrfs unsupported_hardware Where only specific options and values are listed, the base command and its other options are still available and not removed. B.2. Kickstart commands for installation program configuration and flow control The Kickstart commands in this list control the mode and course of installation, and what happens at its end. B.2.1. cdrom The cdrom Kickstart command is optional. It performs the installation from the first optical drive on the system. Use this command only once. 
Notes Previously, the cdrom command had to be used together with the install command. The install command has been deprecated and cdrom can be used on its own, because it implies install . This command has no options. To actually run the installation, you must specify one of cdrom , harddrive , hmc , nfs , liveimg , ostreesetup , rhsm , or url unless the inst.repo option is specified on the kernel command line. B.2.2. cmdline The cmdline Kickstart command is optional. It performs the installation in a completely non-interactive command line mode. Any prompt for interaction halts the installation. Use this command only once. Syntax Notes For a fully automatic installation, you must either specify one of the available modes ( graphical , text , or cmdline ) in the Kickstart file, or you must use the console= boot option. If no mode is specified, the system will use graphical mode if possible, or prompt you to choose from VNC and text mode. This command has no options. This mode is useful on 64-bit IBM Z systems with the x3270 terminal. B.2.3. driverdisk The driverdisk Kickstart command is optional. Use it to provide additional drivers to the installation program. Driver disks can be used during Kickstart installations to provide additional drivers not included by default. You must copy the driver disk's contents to the root directory of a partition on the system's disk. Then, you must use the driverdisk command to specify that the installation program should look for a driver disk and its location. Use this command only once. Syntax Options You must specify the location of the driver disk in one of these ways: partition - Partition containing the driver disk. The partition must be specified as a full path (for example, /dev/sdb1 ), not just the partition name (for example, sdb1 ). --source= - URL for the driver disk. Examples include: --biospart= - BIOS partition containing the driver disk (for example, 82p2 ). Notes Driver disks can also be loaded from a local disk or a similar device instead of being loaded over the network or from initrd . Follow this procedure: Load the driver disk on a disk drive, a USB stick, or any similar device. Set the label, for example, DD , to this device. Add the following line to your Kickstart file: driverdisk LABEL=DD:/e1000.rpm Replace DD with a specific label and replace e1000.rpm with a specific name. Use anything supported by the inst.repo command instead of LABEL to specify your disk drive. B.2.4. eula The eula Kickstart command is optional. Use this option to accept the End User License Agreement (EULA) without user interaction. Specifying this option prevents Initial Setup from prompting you to accept the license agreement after you finish the installation and reboot the system for the first time. Use this command only once. Syntax Options --agreed (required) - Accept the EULA. This option must always be used, otherwise the eula command is meaningless. B.2.5. firstboot The firstboot Kickstart command is optional. It determines whether the Initial Setup application starts the first time the system is booted. If enabled, the initial-setup package must be installed. If not specified, this option is disabled by default. Use this command only once. Syntax Options --enable or --enabled - Initial Setup is started the first time the system boots. --disable or --disabled - Initial Setup is not started the first time the system boots. --reconfig - Enable the Initial Setup to start at boot time in reconfiguration mode. This mode enables the root password, time & date, and networking & host name configuration options in addition to the default ones.
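For instance, a minimal sketch that starts Initial Setup in reconfiguration mode on the first boot:

firstboot --reconfig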
B.2.6. graphical The graphical Kickstart command is optional. It performs the installation in graphical mode. This is the default. Use this command only once. Syntax Options --non-interactive - Performs the installation in a completely non-interactive mode. This mode will terminate the installation when user interaction is required. Note For a fully automatic installation, you must either specify one of the available modes ( graphical , text , or cmdline ) in the Kickstart file, or you must use the console= boot option. If no mode is specified, the system will use graphical mode if possible, or prompt you to choose from VNC and text mode. B.2.7. halt The halt Kickstart command is optional. Halt the system after the installation has successfully completed. This is similar to a manual installation, where Anaconda displays a message and waits for the user to press a key before rebooting. During a Kickstart installation, if no completion method is specified, this option is used as the default. Use this command only once. Syntax Notes The halt command is equivalent to the shutdown -H command. For more details, see the shutdown(8) man page on your system. For other completion methods, see the poweroff , reboot , and shutdown commands. This command has no options. B.2.8. harddrive The harddrive Kickstart command is optional. It performs the installation from a Red Hat installation tree or full installation ISO image on a local drive. The drive must be formatted with a file system the installation program can mount: ext2 , ext3 , ext4 , vfat , or xfs . Use this command only once. Syntax Options --partition= - Partition to install from (such as sdb2 ). --dir= - Directory containing the variant directory of the installation tree, or the ISO image of the full installation DVD. Example Notes Previously, the harddrive command had to be used together with the install command. The install command has been deprecated and harddrive can be used on its own, because it implies install . To actually run the installation, you must specify one of cdrom , harddrive , hmc , nfs , liveimg , ostreesetup , rhsm , or url unless the inst.repo option is specified on the kernel command line. B.2.9. install (deprecated) Important The install Kickstart command is deprecated in Red Hat Enterprise Linux 8. Use its methods as separate commands. The install Kickstart command is optional. It specifies the default installation mode. Syntax Notes The install command must be followed by an installation method command. The installation method command must be on a separate line. The methods include: cdrom harddrive hmc nfs liveimg url For details about the methods, see their separate reference pages. B.2.10. liveimg The liveimg Kickstart command is optional. It performs the installation from a disk image instead of packages. Use this command only once. Syntax Mandatory options --url= - The location to install from. Supported protocols are HTTP , HTTPS , FTP , and file . Optional options --proxy= - Specify an HTTP , HTTPS or FTP proxy to use while performing the installation. --checksum= - An optional argument with the SHA256 checksum of the image file, used for verification. --noverifyssl - Disable SSL verification when connecting to an HTTPS server.
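As an illustrative sketch only (the image URL is a placeholder, not a value from this document):

liveimg --url=http://example.com/images/squashfs.img --noverifyssl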
Example Notes The image can be the squashfs.img file from a live ISO image, a compressed tar file ( .tar , .tbz , .tgz , .txz , .tar.bz2 , .tar.gz , or .tar.xz .), or any file system that the installation media can mount. Supported file systems are ext2 , ext3 , ext4 , vfat , and xfs . When using the liveimg installation mode with a driver disk, drivers on the disk will not automatically be included in the installed system. If necessary, these drivers should be installed manually, or in the %post section of a kickstart script. To actually run the installation, you must specify one of cdrom , harddrive , hmc , nfs , liveimg , ostreesetup , rhsm , or url unless the inst.repo option is specified on the kernel command line. Previously, the liveimg command had to be used together with the install command. The install command has been deprecated and liveimg can be used on its own, because it implies install . B.2.11. logging The logging Kickstart command is optional. It controls the error logging of Anaconda during installation. It has no effect on the installed system. Use this command only once. Logging is supported over TCP only. For remote logging, ensure that the port number that you specify in --port= option is open on the remote server. The default port is 514. Syntax Optional options --host= - Send logging information to the given remote host, which must be running a syslogd process configured to accept remote logging. --port= - If the remote syslogd process uses a port other than the default, set it using this option. --level= - Specify the minimum level of messages that appear on tty3. All messages are still sent to the log file regardless of this level, however. Possible values are debug , info , warning , error , or critical . B.2.12. mediacheck The mediacheck Kickstart command is optional. This command forces the installation program to perform a media check before starting the installation. This command requires that installations be attended, so it is disabled by default. Use this command only once. Syntax Notes This Kickstart command is equivalent to the rd.live.check boot option. This command has no options. B.2.13. nfs The nfs Kickstart command is optional. It performs the installation from a specified NFS server. Use this command only once. Syntax Options --server= - Server from which to install (host name or IP). --dir= - Directory containing the variant directory of the installation tree. --opts= - Mount options to use for mounting the NFS export. (optional) Example Notes Previously, the nfs command had to be used together with the install command. The install command has been deprecated and nfs can be used on its own, because it implies install . To actually run the installation, you must specify one of cdrom , harddrive , hmc , nfs , liveimg , ostreesetup , rhsm , or url unless the inst.repo option is specified on the kernel command line. B.2.14. ostreesetup The ostreesetup Kickstart command is optional. It is used to set up OStree-based installations. Use this command only once. Syntax Mandatory options: --osname= OSNAME - Management root for OS installation. --url= URL - URL of the repository to install from. --ref= REF - Name of the branch from the repository to be used for installation. Optional options: --remote= REMOTE - A remote repository location. --nogpg - Disable GPG key verification. Note For more information about the OStree tools, see the upstream documentation: https://ostreedev.github.io/ostree/ B.2.15. poweroff The poweroff Kickstart command is optional. 
It shuts down and powers off the system after the installation has successfully completed. Normally during a manual installation, Anaconda displays a message and waits for the user to press a key before rebooting. Use this command only once. Syntax Notes The poweroff option is equivalent to the shutdown -P command. For more details, see the shutdown(8) man page on your system. For other completion methods, see the halt , reboot , and shutdown Kickstart commands. The halt option is the default completion method if no other methods are explicitly specified in the Kickstart file. The poweroff command is highly dependent on the system hardware in use. Specifically, certain hardware components such as the BIOS, APM (advanced power management), and ACPI (advanced configuration and power interface) must be able to interact with the system kernel. Consult your hardware documentation for more information about your system's APM/ACPI abilities. This command has no options. B.2.16. reboot The reboot Kickstart command is optional. It instructs the installation program to reboot after the installation is successfully completed (no arguments). Normally, Kickstart displays a message and waits for the user to press a key before rebooting. Use this command only once. Syntax Options --eject - Attempt to eject the bootable media (DVD, USB, or other media) before rebooting. --kexec - Uses the kexec system call instead of performing a full reboot, which immediately loads the installed system into memory, bypassing the hardware initialization normally performed by the BIOS or firmware. Important This option is deprecated and available as a Technology Preview only. For information about Red Hat scope of support for Technology Preview features, see the Technology Preview Features Support Scope document. When kexec is used, device registers (which would normally be cleared during a full system reboot) might stay filled with data, which could potentially create issues for some device drivers. Notes Use of the reboot option might result in an endless installation loop, depending on the installation media and method. The reboot option is equivalent to the shutdown -r command. For more details, see the shutdown(8) man page on your system. Specify reboot to automate installation fully when installing in command line mode on 64-bit IBM Z. For other completion methods, see the halt , poweroff , and shutdown Kickstart options. The halt option is the default completion method if no other methods are explicitly specified in the Kickstart file. B.2.17. rhsm The rhsm Kickstart command is optional. It instructs the installation program to register and install RHEL from the CDN. Use this command only once. The rhsm Kickstart command removes the requirement of using custom %post scripts when registering the system. Options --organization= - Uses the organization id to register and install RHEL from the CDN. --activation-key= - Uses the activation key to register and install RHEL from the CDN. The option can be used multiple times, once per activation key, as long as the activation keys used are registered to your subscription. --connect-to-insights - Connects the target system to Red Hat Insights. --proxy= - Sets the HTTP proxy. To switch the installation source repository to the CDN by using the rhsm Kickstart command, you must meet the following conditions: On the kernel command line, you have used inst.stage2= <URL> to fetch the installation image but have not specified an installation source using inst.repo= . In the Kickstart file, you have not specified an installation source by using the url , cdrom , harddrive , liveimg , nfs and ostreesetup commands. An installation source URL specified using a boot option or included in a Kickstart file takes precedence over the CDN, even if the Kickstart file contains the rhsm command with valid credentials. The system is registered, but it is installed from the URL installation source. This ensures that earlier installation processes operate as normal.
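A hedged sketch combining these options (the organization ID and activation key are placeholders):

rhsm --organization="1234567" --activation-key="my-activation-key" --connect-to-insights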
B.2.18. shutdown The shutdown Kickstart command is optional. It shuts down the system after the installation has successfully completed. Use this command only once. Syntax Notes The shutdown Kickstart option is equivalent to the shutdown command. For more details, see the shutdown(8) man page on your system. For other completion methods, see the halt , poweroff , and reboot Kickstart options. The halt option is the default completion method if no other methods are explicitly specified in the Kickstart file. This command has no options. B.2.19. sshpw The sshpw Kickstart command is optional. During the installation, you can interact with the installation program and monitor its progress over an SSH connection. Use the sshpw command to create temporary accounts through which to log on. Each instance of the command creates a separate account that exists only in the installation environment. These accounts are not transferred to the installed system. Syntax Mandatory options --username= name - Provides the name of the user. This option is required. password - The password to use for the user. This option is required. Optional options --iscrypted - If this option is present, the password argument is assumed to already be encrypted. This option is mutually exclusive with --plaintext . To create an encrypted password, you can use Python: This generates a sha512 crypt-compatible hash of your password using a random salt. --plaintext - If this option is present, the password argument is assumed to be in plain text. This option is mutually exclusive with --iscrypted . --lock - If this option is present, this account is locked by default. This means that the user will not be able to log in from the console. --sshkey - If this option is present, then the <password> string is interpreted as an ssh key value. Notes By default, the ssh server is not started during the installation. To make ssh available during the installation, boot the system with the kernel boot option inst.sshd . If you want to disable root ssh access while allowing another user ssh access, use the following: To simply disable root ssh access, use the following: B.2.20. text The text Kickstart command is optional. It performs the Kickstart installation in text mode. Kickstart installations are performed in graphical mode by default. Use this command only once. Syntax Options --non-interactive - Performs the installation in a completely non-interactive mode. This mode will terminate the installation when user interaction is required. Notes Note that for a fully automatic installation, you must either specify one of the available modes ( graphical , text , or cmdline ) in the Kickstart file, or you must use the console= boot option. If no mode is specified, the system will use graphical mode if possible, or prompt you to choose from VNC and text mode. B.2.21. url The url Kickstart command is optional. It is used to install from an installation tree image on a remote server by using the FTP, HTTP, or HTTPS protocol. You can only specify one URL.
Use this command only once. You must specify one of the --url , --metalink or --mirrorlist options. Syntax Options --url= FROM - Specifies the HTTP , HTTPS , FTP , or file location to install from. --mirrorlist= - Specifies the mirror URL to install from. --proxy= - Specifies an HTTP , HTTPS , or FTP proxy to use during the installation. --noverifyssl - Disables SSL verification when connecting to an HTTPS server. --metalink= URL - Specifies the metalink URL to install from. Variable substitution is done for $releasever and $basearch in the URL . Examples To install from an HTTP server: url --url=http://server/path To install from an FTP server: url --url=ftp://username:password@server/path Notes Previously, the url command had to be used together with the install command. The install command has been deprecated and url can be used on its own, because it implies install . To actually run the installation, you must specify one of cdrom , harddrive , hmc , nfs , liveimg , ostreesetup , rhsm , or url unless the inst.repo option is specified on the kernel command line. B.2.22. vnc The vnc Kickstart command is optional. It allows the graphical installation to be viewed remotely through VNC. This method is usually preferred over text mode, as there are some size and language limitations in text installations. With no additional options, this command starts a VNC server on the installation system with no password and displays the details required to connect to it. Use this command only once. Syntax Options --host= Connect to the VNC viewer process listening on the given host name. --port= Provide a port that the remote VNC viewer process is listening on. If not provided, Anaconda uses the VNC default port of 5900. --password= Set a password which must be provided to connect to the VNC session. This is optional, but recommended. Additional resources Preparing to install from the network using PXE B.2.23. hmc The hmc Kickstart command is optional. Use it to install from an installation medium by using SE/HMC on IBM Z. This command does not have any options. Syntax B.2.24. %include The %include Kickstart command is optional. Use the %include command to include the contents of another file in the Kickstart file as if the contents were at the location of the %include command in the Kickstart file. This inclusion is evaluated only after the %pre script sections and can thus be used to include files generated by scripts in the %pre sections. To include files before evaluation of %pre sections, use the %ksappend command. Syntax B.2.25. %ksappend The %ksappend Kickstart command is optional. Use the %ksappend command to include the contents of another file in the Kickstart file as if the contents were at the location of the %ksappend command in the Kickstart file. This inclusion is evaluated before the %pre script sections, unlike inclusion with the %include command. Syntax B.3. Kickstart commands for system configuration The Kickstart commands in this list configure further details on the resulting system such as users, repositories, or services. B.3.1. auth or authconfig (deprecated) Important Use the new authselect command instead of the deprecated auth or authconfig Kickstart command. auth and authconfig are available only for limited backwards compatibility. The auth or authconfig Kickstart command is optional. It sets up the authentication options for the system using the authconfig tool, which can also be run on the command line after the installation finishes. Use this command only once.
Syntax Notes Previously, the auth or authconfig Kickstart commands called the authconfig tool. This tool has been deprecated in Red Hat Enterprise Linux 8. These Kickstart commands now use the authselect-compat tool to call the new authselect tool. For a description of the compatibility layer and its known issues, see the manual page authselect-migration(7) . The installation program will automatically detect use of the deprecated commands and install on the system the authselect-compat package to provide the compatibility layer. Passwords are shadowed by default. When using OpenLDAP with the SSL protocol for security, ensure that the SSLv2 and SSLv3 protocols are disabled in the server configuration. This is due to the POODLE SSL vulnerability (CVE-2014-3566). For more information, see the Red Hat Knowledgebase solution Resolution for POODLE SSLv3.0 vulnerability . B.3.2. authselect The authselect Kickstart command is optional. It sets up the authentication options for the system using the authselect command, which can also be run on the command line after the installation finishes. Use this command only once. Syntax Notes This command passes all options to the authselect command. Refer to the authselect(8) manual page and the authselect --help command for more details. This command replaces the deprecated auth or authconfig commands deprecated in Red Hat Enterprise Linux 8 together with the authconfig tool. Passwords are shadowed by default. When using OpenLDAP with the SSL protocol for security, ensure that the SSLv2 and SSLv3 protocols are disabled in the server configuration. This is due to the POODLE SSL vulnerability (CVE-2014-3566). For more information, see the Red Hat Knowledgebase solution Resolution for POODLE SSLv3.0 vulnerability . B.3.3. firewall The firewall Kickstart command is optional. It specifies the firewall configuration for the installed system. Syntax Mandatory options --enabled or --enable - Reject incoming connections that are not in response to outbound requests, such as DNS replies or DHCP requests. If access to services running on this machine is needed, you can choose to allow specific services through the firewall. --disabled or --disable - Do not configure any iptables rules. Optional options --trust - Listing a device here, such as em1 , allows all traffic coming to and from that device to go through the firewall. To list more than one device, use the option more times, such as --trust em1 --trust em2 . Do not use a comma-separated format such as --trust em1, em2 . --remove-service - Do not allow services through the firewall. incoming - Replace with one or more of the following to allow the specified services through the firewall. --ssh --smtp --http --ftp --port= - You can specify that ports be allowed through the firewall using the port:protocol format. For example, to allow IMAP access through your firewall, specify imap:tcp . Numeric ports can also be specified explicitly; for example, to allow UDP packets on port 1234 through, specify 1234:udp . To specify multiple ports, separate them by commas. --service= - This option provides a higher-level way to allow services through the firewall. Some services (like cups , avahi , and so on.) require multiple ports to be open or other special configuration in order for the service to work. You can specify each individual port with the --port option, or specify --service= and open them all at once. Valid options are anything recognized by the firewall-offline-cmd program in the firewalld package. 
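For illustration (the service and port selections are placeholders, not recommendations):

firewall --enabled --service=ssh --port=8443:tcp,1234:udp

This rejects all other unsolicited inbound traffic while allowing the ssh service and the two listed ports.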
If the firewalld service is running, firewall-cmd --get-services provides a list of known service names. --use-system-defaults - Do not configure the firewall at all. This option instructs anaconda to do nothing and allows the system to rely on the defaults that were provided with the package or ostree. If this option is used with other options then all other options will be ignored. B.3.4. group The group Kickstart command is optional. It creates a new user group on the system. Mandatory options --name= - Provides the name of the group. Optional options --gid= - The group's GID. If not provided, defaults to the available non-system GID. Notes If a group with the given name or GID already exists, this command fails. The user command can be used to create a new group for the newly created user. B.3.5. keyboard (required) The keyboard Kickstart command is required. It sets one or more available keyboard layouts for the system. Use this command only once. Syntax Options --vckeymap= - Specify a VConsole keymap which should be used. Valid names correspond to the list of files in the /usr/lib/kbd/keymaps/xkb/ directory, without the .map.gz extension. --xlayouts= - Specify a list of X layouts that should be used as a comma-separated list without spaces. Accepts values in the same format as setxkbmap(1) , either in the layout format (such as cz ), or in the layout ( variant ) format (such as cz (qwerty) ). All available layouts can be viewed on the xkeyboard-config(7) man page under Layouts . --switch= - Specify a list of layout-switching options (shortcuts for switching between multiple keyboard layouts). Multiple options must be separated by commas without spaces. Accepts values in the same format as setxkbmap(1) . Available switching options can be viewed on the xkeyboard-config(7) man page under Options . Notes Either the --vckeymap= or the --xlayouts= option must be used. Example The following example sets up two keyboard layouts ( English (US) and Czech (qwerty) ) using the --xlayouts= option, and allows to switch between them using Alt + Shift : B.3.6. lang (required) The lang Kickstart command is required. It sets the language to use during installation and the default language to use on the installed system. Use this command only once. Syntax Mandatory options language - Install support for this language and set it as system default. Optional options --addsupport= - Add support for additional languages. Takes the form of comma-separated list without spaces. For example: Notes The locale -a | grep _ or localectl list-locales | grep _ commands return a list of supported locales. Certain languages (for example, Chinese, Japanese, Korean, and Indic languages) are not supported during text-mode installation. If you specify one of these languages with the lang command, the installation process continues in English, but the installed system uses your selection as its default language. Example To set the language to English, the Kickstart file should contain the following line: B.3.7. module The module Kickstart command is optional. Use this command to enable a package module stream within kickstart script. Syntax Mandatory options --name= Specifies the name of the module to enable. Replace NAME with the actual name. Optional options --stream= Specifies the name of the module stream to enable. Replace STREAM with the actual name. You do not need to specify this option for modules with a default stream defined. 
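For example, a sketch that enables one module with an explicit stream (the module and stream names are illustrative):

module --name=postgresql --stream=10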
For modules without a default stream, this option is mandatory and leaving it out results in an error. Enabling a module multiple times with different streams is not possible. Notes Using a combination of this command and the %packages section allows you to install packages provided by the enabled module and stream combination, without specifying the module and stream explicitly. Modules must be enabled before package installation. After enabling a module with the module command, you can install the packages enabled by this module by listing them in the %packages section. A single module command can enable only a single module and stream combination. To enable multiple modules, use multiple module commands. Enabling a module multiple times with different streams is not possible. In Red Hat Enterprise Linux 8, modules are present only in the AppStream repository. To list available modules, use the yum module list command on an installed Red Hat Enterprise Linux 8 system with a valid subscription. Additional resources Installing, managing, and removing user-space components B.3.8. repo The repo Kickstart command is optional. It configures additional yum repositories that can be used as sources for package installation. You can add multiple repo lines. Syntax Mandatory options --name= - The repository id. This option is required. If a repository has a name which conflicts with another previously added repository, it is ignored. Because the installation program uses a list of preset repositories, this means that you cannot add repositories with the same names as the preset ones. URL options These options are mutually exclusive and optional. The variables that can be used in yum repository configuration files are not supported here. You can use the strings $releasever and $basearch which are replaced by the respective values in the URL. --baseurl= - The URL to the repository. --mirrorlist= - The URL pointing at a list of mirrors for the repository. --metalink= - The URL with metalink for the repository. Optional options --install - Save the provided repository configuration on the installed system in the /etc/yum.repos.d/ directory. Without using this option, a repository configured in a Kickstart file will only be available during the installation process, not on the installed system. --cost= - An integer value to assign a cost to this repository. If multiple repositories provide the same packages, this number is used to prioritize which repository will be used before another. Repositories with a lower cost take priority over repositories with higher cost. --excludepkgs= - A comma-separated list of package names that must not be pulled from this repository. This is useful if multiple repositories provide the same package and you want to make sure it comes from a particular repository. Both full package names (such as publican ) and globs (such as gnome-* ) are accepted. --includepkgs= - A comma-separated list of package names and globs that are allowed to be pulled from this repository. Any other packages provided by the repository will be ignored. This is useful if you want to install just a single package or set of packages from a repository while excluding all other packages the repository provides. --proxy=[ protocol ://][ username [: password ]@] host [: port ] - Specify an HTTP/HTTPS/FTP proxy to use just for this repository. This setting does not affect any other repositories, nor how the install.img is fetched on HTTP installations.
--noverifyssl - Disable SSL verification when connecting to an HTTPS server. Note Repositories used for installation must be stable. The installation can fail if a repository is modified before the installation concludes. B.3.9. rootpw (required) The rootpw Kickstart command is required. It sets the system's root password to the password argument. Use this command only once. Syntax Mandatory options password - Password specification. Either plain text or encrypted string. See --iscrypted and --plaintext below. Options --iscrypted - If this option is present, the password argument is assumed to already be encrypted. This option is mutually exclusive with --plaintext . To create an encrypted password, you can use python: This generates a sha512 crypt-compatible hash of your password using a random salt. --plaintext - If this option is present, the password argument is assumed to be in plain text. This option is mutually exclusive with --iscrypted . --lock - If this option is present, the root account is locked by default. This means that the root user will not be able to log in from the console. This option will also disable the Root Password screens in both the graphical and text-based manual installation. B.3.10. selinux The selinux Kickstart command is optional. It sets the state of SELinux on the installed system. The default SELinux policy is enforcing . Use this command only once. Syntax Options --enforcing Enables SELinux with the default targeted policy being enforcing . --permissive Outputs warnings based on the SELinux policy, but does not actually enforce the policy. --disabled Disables SELinux completely on the system. Additional resources Using SElinux B.3.11. services The services Kickstart command is optional. It modifies the default set of services that will run under the default systemd target. The list of disabled services is processed before the list of enabled services. Therefore, if a service appears on both lists, it will be enabled. Syntax Options --disabled= - Disable the services given in the comma separated list. --enabled= - Enable the services given in the comma separated list. Notes When using the services element to enable systemd services, ensure you include packages containing the specified service file in the %packages section. Multiple services should be included separated by comma, without any spaces. For example, to disable four services, enter: If you include any spaces, Kickstart enables or disables only the services up to the first space. For example: That disables only the auditd service. To disable all four services, this entry must include no spaces. B.3.12. skipx The skipx Kickstart command is optional. If present, X is not configured on the installed system. If you install a display manager among your package selection options, this package creates an X configuration, and the installed system defaults to graphical.target . That overrides the effect of the skipx option. Use this command only once. Syntax Notes This command has no options. B.3.13. sshkey The sshkey Kickstart command is optional. It adds a SSH key to the authorized_keys file of the specified user on the installed system. Syntax Mandatory options --username= - The user for which the key will be installed. ssh_key - The complete SSH key fingerprint. It must be wrapped with quotes. B.3.14. syspurpose The syspurpose Kickstart command is optional. Use it to set the system purpose which describes how the system will be used after installation. 
This information helps apply the correct subscription entitlement to the system. Use this command only once. Note Red Hat Enterprise Linux 8.6 and later enables you to manage and display system purpose attributes with a single module by making the role , service-level , usage , and addons subcommands available under one subscription-manager syspurpose module. Previously, system administrators used one of four standalone syspurpose commands to manage each attribute. This standalone syspurpose command is deprecated starting with RHEL 8.6 and is planned to be removed in RHEL 9. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements. Starting with RHEL 9, the single subscription-manager syspurpose command and its associated subcommands is the only way to use system purpose. Syntax Options --role= - Set the intended system role. Available values are: Red Hat Enterprise Linux Server Red Hat Enterprise Linux Workstation Red Hat Enterprise Linux Compute Node --sla= - Set the Service Level Agreement. Available values are: Premium Standard Self-Support --usage= - The intended usage of the system. Available values are: Production Disaster Recovery Development/Test --addon= - Specifies additional layered products or features. You can use this option multiple times. Notes Enter the values with spaces and enclose them in double quotes, for example: syspurpose --role="Red Hat Enterprise Linux Server" While it is strongly recommended that you configure System Purpose, it is an optional feature of the Red Hat Enterprise Linux installation program. If you want to enable System Purpose after the installation completes, you can do so using the syspurpose command-line tool. B.3.15. timezone (required) The timezone Kickstart command is required. It sets the system time zone. Use this command only once. Syntax Mandatory options timezone - the time zone to set for the system. Optional options --utc - If present, the system assumes the hardware clock is set to UTC (Greenwich Mean) time. --nontp - Disable the NTP service automatic starting. --ntpservers= - Specify a list of NTP servers to be used as a comma-separated list without spaces. Note In Red Hat Enterprise Linux 8, time zone names are validated using the pytz.all_timezones list, provided by the pytz package. In previous releases, the names were validated against pytz.common_timezones , which is a subset of the currently used list. Note that the graphical and text mode interfaces still use the more restricted pytz.common_timezones list; you must use a Kickstart file to use additional time zone definitions. B.3.16. user The user Kickstart command is optional. It creates a new user on the system. Syntax Mandatory options --name= - Provides the name of the user. This option is required.
Optional options --gecos= - Provides the GECOS information for the user. This is a string of various system-specific fields separated by a comma. It is frequently used to specify the user's full name, office number, and so on. See the passwd(5) man page for more details. --groups= - In addition to the default group, a comma separated list of group names the user should belong to. The groups must exist before the user account is created. See the group command. --homedir= - The home directory for the user. If not provided, this defaults to /home/ username . --lock - If this option is present, this account is locked by default. This means that the user will not be able to log in from the console. This option will also disable the Create User screens in both the graphical and text-based manual installation. --password= - The new user's password. If not provided, the account will be locked by default. --iscrypted - If this option is present, the password argument is assumed to already be encrypted. This option is mutually exclusive with --plaintext . To create an encrypted password, you can use python: This generates a sha512 crypt-compatible hash of your password using a random salt. --plaintext - If this option is present, the password argument is assumed to be in plain text. This option is mutually exclusive with --iscrypted --shell= - The user's login shell. If not provided, the system default is used. --uid= - The user's UID (User ID). If not provided, this defaults to the available non-system UID. --gid= - The GID (Group ID) to be used for the user's group. If not provided, this defaults to the available non-system group ID. Notes Consider using the --uid and --gid options to set IDs of regular users and their default groups at range starting at 5000 instead of 1000 . That is because the range reserved for system users and groups, 0 - 999 , might increase in the future and thus overlap with IDs of regular users. For changing the minimum UID and GID limits after the installation, which ensures that your chosen UID and GID ranges are applied automatically on user creation, see the Setting default permissions for new files using umask section of the Configuring basic system settings document. Files and directories are created with various permissions, dictated by the application used to create the file or directory. For example, the mkdir command creates directories with all permissions enabled. However, applications are prevented from granting certain permissions to newly created files, as specified by the user file-creation mask setting. The user file-creation mask can be controlled with the umask command. The default setting of the user file-creation mask for new users is defined by the UMASK variable in the /etc/login.defs configuration file on the installed system. If unset, it defaults to 022 . This means that by default when an application creates a file, it is prevented from granting write permission to users other than the owner of the file. However, this can be overridden by other settings or scripts. More information can be found in the Setting default permissions for new files using umask section of the Configuring basic system settings document. B.3.17. xconfig The xconfig Kickstart command is optional. It configures the X Window System. Use this command only once. Syntax Options --startxonboot - Use a graphical login on the installed system. 
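A complete usage is simply the command with that option:

xconfig --startxonboot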
Notes Because Red Hat Enterprise Linux 8 does not include the KDE Desktop Environment, do not use the --defaultdesktop= documented in upstream. B.4. Kickstart commands for network configuration The Kickstart commands in this list let you configure networking on the system. B.4.1. network (optional) Use the optional network Kickstart command to configure network information for the target system and activate the network devices in the installation environment. The device specified in the first network command is activated automatically. You can also explicitly require a device to be activated using the --activate option. Warning Re-configuration of already active network devices that are in use by the running installer may lead to an installation failure or freeze. In such a case, avoid re-configuration of network devices used to access the installer runtime image ( stage2 ) over NFS. Syntax Options --activate - activate this device in the installation environment. If you use the --activate option on a device that has already been activated (for example, an interface you configured with boot options so that the system could retrieve the Kickstart file) the device is reactivated to use the details specified in the Kickstart file. Use the --nodefroute option to prevent the device from using the default route. --no-activate - do not activate this device in the installation environment. By default, Anaconda activates the first network device in the Kickstart file regardless of the --activate option. You can disable the default setting by using the --no-activate option. --bootproto= - One of dhcp , bootp , ibft , or static . The default option is dhcp ; the dhcp and bootp options are treated the same. To disable ipv4 configuration of the device, use --noipv4 option. Note This option configures ipv4 configuration of the device. For ipv6 configuration use --ipv6 and --ipv6gateway options. The DHCP method uses a DHCP server system to obtain its networking configuration. The BOOTP method is similar, requiring a BOOTP server to supply the networking configuration. To direct a system to use DHCP: To direct a machine to use BOOTP to obtain its networking configuration, use the following line in the Kickstart file: To direct a machine to use the configuration specified in iBFT, use: The static method requires that you specify at least the IP address and netmask in the Kickstart file. This information is static and is used during and after the installation. All static networking configuration information must be specified on one line; you cannot wrap lines using a backslash ( \ ) as you can on a command line. You can also configure multiple nameservers at the same time. To do so, use the --nameserver= option once, and specify each of their IP addresses, separated by commas: --device= - specifies the device to be configured (and eventually activated in Anaconda) with the network command. If the --device= option is missing on the first use of the network command, the value of the inst.ks.device= Anaconda boot option is used, if available. This is considered deprecated behavior; in most cases, you should always specify a --device= for every network command. The behavior of any subsequent network command in the same Kickstart file is unspecified if its --device= option is missing. Verify you specify this option for any network command beyond the first. 
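For example, a sketch that activates one named device with DHCP (the device name is illustrative):

network --bootproto=dhcp --device=em1 --activate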
You can specify a device to be activated in any of the following ways: the device name of the interface, for example, em1 the MAC address of the interface, for example, 01:23:45:67:89:ab the keyword link , which specifies the first interface with its link in the up state the keyword bootif , which uses the MAC address that pxelinux set in the BOOTIF variable. Set IPAPPEND 2 in your pxelinux.cfg file to have pxelinux set the BOOTIF variable. For example: --ipv4-dns-search / --ipv6-dns-search - Set the DNS search domains manually. You must use these options together with --device options and mirror their respective NetworkManager properties, for example: --ipv4-ignore-auto-dns / --ipv6-ignore-auto-dns - Set to ignore the DNS settings from DHCP. You must use these options together with --device options and these options do not require any arguments. --ip= - IP address of the device. --ipv6= - IPv6 address of the device, in the form of address [/ prefix length ] - for example, 3ffe:ffff:0:1::1/128 . If prefix is omitted, 64 is used. You can also use auto for automatic configuration, or dhcp for DHCPv6-only configuration (no router advertisements). --gateway= - Default gateway as a single IPv4 address. --ipv6gateway= - Default gateway as a single IPv6 address. --nodefroute - Prevents the interface being set as the default route. Use this option when you activate additional devices with the --activate= option, for example, a NIC on a separate subnet for an iSCSI target. --nameserver= - DNS name server, as an IP address. To specify more than one name server, use this option once, and separate each IP address with a comma. --netmask= - Network mask for the installed system. --hostname= - Used to configure the target system's host name. The host name can either be a fully qualified domain name (FQDN) in the format hostname.domainname , or a short host name without the domain. Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this machine, specify only the short host name. When using static IP and host name configuration, it depends on the planned system use case whether to use a short name or FQDN. Red Hat Identity Management configures FQDN during provisioning but some 3rd party software products may require short name. In either case, to ensure availability of both forms in all situations, add an entry for the host in /etc/hosts in the format IP FQDN short-alias . The value localhost means that no specific static host name for the target system is configured, and the actual host name of the installed system is configured during the processing of the network configuration, for example, by NetworkManager using DHCP or DNS. Host names can only contain alphanumeric characters and - or . . Host name should be equal to or less than 64 characters. Host names cannot start or end with - and . . To be compliant with DNS, each part of a FQDN should be equal to or less than 63 characters and the FQDN total length, including dots, should not exceed 255 characters. If you only want to configure the target system's host name, use the --hostname option in the network command and do not include any other option. If you provide additional options when configuring the host name, the network command configures a device using the options specified. If you do not specify which device to configure using the --device option, the default --device link value is used. 
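Putting the static options together on the single line the format requires, a hedged sketch (all addresses and names are placeholders):

network --device=em1 --bootproto=static --ip=192.168.1.15 --netmask=255.255.255.0 --gateway=192.168.1.254 --nameserver=192.168.1.1,192.168.1.2 --hostname=host1.example.com --activate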
Additionally, if you do not specify the protocol using the --bootproto option, the device is configured to use DHCP by default. --ethtool= - Specifies additional low-level settings for the network device which will be passed to the ethtool program. --onboot= - Whether or not to enable the device at boot time. --dhcpclass= - The DHCP class. --mtu= - The MTU of the device. --noipv4 - Disable IPv4 on this device. --noipv6 - Disable IPv6 on this device. --bondslaves= - When this option is used, the bond device specified by the --device= option is created using secondary devices defined in the --bondslaves= option. For example: The above command creates a bond device named bond0 using the em1 and em2 interfaces as its secondary devices. --bondopts= - a list of optional parameters for a bonded interface, which is specified using the --bondslaves= and --device= options. Options in this list must be separated by commas (",") or semicolons (";"). If an option itself contains a comma, use a semicolon to separate the options. For example: Important The --bondopts=mode= parameter only supports full mode names such as balance-rr or broadcast , not their numerical representations such as 0 or 3 . For the list of available and supported modes, see the Configuring and Managing Networking Guide . --vlanid= - Specifies virtual LAN (VLAN) ID number (802.1q tag) for the device created using the device specified in --device= as a parent. For example, network --device=em1 --vlanid=171 creates a virtual LAN device em1.171 . --interfacename= - Specify a custom interface name for a virtual LAN device. This option should be used when the default name generated by the --vlanid= option is not desirable. This option must be used along with --vlanid= . For example: The above command creates a virtual LAN interface named vlan171 on the em1 device with an ID of 171 . The interface name can be arbitrary (for example, my-vlan ), but in specific cases, the following conventions must be followed: If the name contains a dot ( . ), it must take the form of NAME . ID . The NAME is arbitrary, but the ID must be the VLAN ID. For example: em1.171 or my-vlan.171 . Names starting with vlan must take the form of vlan ID - for example, vlan171 . --teamslaves= - Team device specified by the --device= option will be created using secondary devices specified in this option. Secondary devices are separated by commas. A secondary device can be followed by its configuration, which is a single-quoted JSON string with double quotes escaped by the \ character. For example: See also the --teamconfig= option. --teamconfig= - Double-quoted team device configuration which is a JSON string with double quotes escaped by the \ character. The device name is specified by --device= option and its secondary devices and their configuration by --teamslaves= option. For example: --bridgeslaves= - When this option is used, the network bridge with device name specified using the --device= option will be created and devices defined in the --bridgeslaves= option will be added to the bridge. For example: --bridgeopts= - An optional comma-separated list of parameters for the bridged interface. Available values are stp , priority , forward-delay , hello-time , max-age , and ageing-time . For information about these parameters, see the bridge setting table in the nm-settings(5) man page or at Network Configuration Setting Specification . Also see the Configuring and managing networking document for general information about network bridging. 
--bindto=mac - Bind the device configuration file on the installed system to the device MAC address ( HWADDR ) instead of the default binding to the interface name ( DEVICE ). This option is independent of the --device= option - --bindto=mac will be applied even if the same network command also specifies a device name, link , or bootif . Notes The ethN device names such as eth0 are no longer available in Red Hat Enterprise Linux due to changes in the naming scheme. For more information about the device naming scheme, see the upstream document Predictable Network Interface Names . If you used a Kickstart option or a boot option to specify an installation repository on a network, but no network is available at the start of the installation, the installation program displays the Network Configuration window to set up a network connection prior to displaying the Installation Summary window. For more details, see Configuring network and host name options . B.4.2. realm The realm Kickstart command is optional. Use it to join an Active Directory or IPA domain. For more information about this command, see the join section of the realm(8) man page on your system. Syntax Mandatory options domain - The domain to join. Options --computer-ou=OU= - Provide the distinguished name of an organizational unit in order to create the computer account. The exact format of the distinguished name depends on the client software and membership software. The root DSE portion of the distinguished name can usually be left out. --no-password - Join automatically without a password. --one-time-password= - Join using a one-time password. This is not possible with all types of realm. --client-software= - Only join realms which can run this client software. Valid values include sssd and winbind . Not all realms support all values. By default, the client software is chosen automatically. --server-software= - Only join realms which can run this server software. Possible values include active-directory or freeipa . --membership-software= - Use this software when joining the realm. Valid values include samba and adcli . Not all realms support all values. By default, the membership software is chosen automatically. B.5. Kickstart commands for handling storage The Kickstart commands in this section configure aspects of storage such as devices, disks, partitions, LVM, and filesystems. Important The sdX (or /dev/sdX ) format does not guarantee consistent device names across reboots, which can complicate the usage of some Kickstart commands. When a command requires a device node name, you can use any item from /dev/disk as an alternative. For example, instead of using the following device name: part / --fstype=xfs --onpart=sda1 You can use an entry similar to one of the following: part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1 By using this approach, the command always targets the same storage device. This is especially useful in large storage environments. To explore the available device names on the system, you can use the ls -lR /dev/disk command during the interactive installation. For more information about different ways to consistently refer to storage devices, see Overview of persistent naming attributes . B.5.1. device (deprecated) The device Kickstart command is optional. Use it to load additional kernel modules. On most PCI systems, the installation program automatically detects Ethernet and SCSI cards. 
However, on older systems and some PCI systems, Kickstart requires a hint to find the proper devices. The device command, which tells the installation program to install extra modules, uses the following format: Syntax Options moduleName - Replace with the name of the kernel module which should be installed. --opts= - Options to pass to the kernel module. For example: B.5.2. ignoredisk The ignoredisk Kickstart command is optional. It causes the installation program to ignore the specified disks. This is useful if you use automatic partitioning and want to be sure that some disks are ignored. For example, without ignoredisk , attempting to deploy on a SAN-cluster the Kickstart would fail, as the installation program detects passive paths to the SAN that return no partition table. Use this command only once. Syntax Options --drives= driveN ,... - Replace driveN with one of sda , sdb ,... , hda ,... and so on. --only-use= driveN ,... - Specifies a list of disks for the installation program to use. All other disks are ignored. For example, to use disk sda during installation and ignore all other disks: To include a multipath device that does not use LVM: To include a multipath device that uses LVM: You must specify only one of the --drives or --only-use . Notes The --interactive option is deprecated in Red Hat Enterprise Linux 8. This option allowed users to manually navigate the advanced storage screen. To ignore a multipath device that does not use logical volume manager (LVM), use the format disk/by-id/dm-uuid-mpath- WWID , where WWID is the world-wide identifier for the device. For example, to ignore a disk with WWID 2416CD96995134CA5D787F00A5AA11017 , use: Never specify multipath devices by device names like mpatha . Device names such as this are not specific to a particular disk. The disk named /dev/mpatha during installation might not be the one that you expect it to be. Therefore, the clearpart command could target the wrong disk. The sdX (or /dev/sdX ) format does not guarantee consistent device names across reboots, which can complicate the usage of some Kickstart commands. When a command requires a device node name, you can use any item from /dev/disk as an alternative. For example, instead of using the following device name: You can use an entry similar to one of the following: By using this approach, the command always targets the same storage device. This is especially useful in large storage environments. To explore the available device names on the system, you can use the ls -lR /dev/disk command during the interactive installation. For more information about different ways to consistently refer to storage devices, see Overview of persistent naming attributes . B.5.3. clearpart The clearpart Kickstart command is optional. It removes partitions from the system, prior to creation of new partitions. By default, no partitions are removed. Use this command only once. Syntax Options --all - Erases all partitions from the system. This option will erase all disks which can be reached by the installation program, including any attached network storage. Use this option with caution. You can prevent clearpart from wiping storage you want to preserve by using the --drives= option and specifying only the drives you want to clear, by attaching network storage later (for example, in the %post section of the Kickstart file), or by blocklisting the kernel modules used to access network storage. --drives= - Specifies which drives to clear partitions from. 
For example, the following clears all the partitions on the first two drives on the primary IDE controller:
clearpart --drives=hda,hdb --all
To clear a multipath device, use the format disk/by-id/scsi- WWID , where WWID is the world-wide identifier for the device. For example, to clear a disk with WWID 58095BEC5510947BE8C0360F604351918 , use:
clearpart --drives=disk/by-id/scsi-58095BEC5510947BE8C0360F604351918
This format is preferable for all multipath devices, but if errors arise, multipath devices that do not use logical volume manager (LVM) can also be cleared using the format disk/by-id/dm-uuid-mpath- WWID , where WWID is the world-wide identifier for the device. For example, to clear a disk with WWID 2416CD96995134CA5D787F00A5AA11017 , use:
clearpart --drives=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017
Never specify multipath devices by device names like mpatha . Device names such as this are not specific to a particular disk. The disk named /dev/mpatha during installation might not be the one that you expect it to be. Therefore, the clearpart command could target the wrong disk. --initlabel - Initializes a disk (or disks) by creating a default disk label for all disks in their respective architecture that have been designated for formatting (for example, msdos for x86). Because --initlabel can see all disks, it is important to ensure that only those drives that are to be formatted are connected. Disks cleared by clearpart will have the label created even if --initlabel is not used. For example (the drive names are illustrative):
clearpart --all --initlabel --drives=sda,sdb
--list= - Specifies which partitions to clear. This option overrides the --all and --linux options if used. Can be used across different drives. For example:
clearpart --list=sda2,sda3,sdb1
--disklabel= LABEL - Set the default disklabel to use. Only disklabels supported for the platform will be accepted. For example, on the 64-bit Intel and AMD architectures, the msdos and gpt disklabels are accepted, but dasd is not accepted. --linux - Erases all Linux partitions. --none (default) - Do not remove any partitions. --cdl - Reformat any LDL DASDs to CDL format. Notes The sdX (or /dev/sdX ) format does not guarantee consistent device names across reboots, which can complicate the usage of some Kickstart commands. When a command requires a device node name, you can use any item from /dev/disk as an alternative. For example, instead of using the following device name:
part / --fstype=xfs --onpart=sda1
You can use an entry similar to one of the following:
part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1
part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1
By using this approach, the command always targets the same storage device. This is especially useful in large storage environments. To explore the available device names on the system, you can use the ls -lR /dev/disk command during the interactive installation. For more information about different ways to consistently refer to storage devices, see Overview of persistent naming attributes . If the clearpart command is used, then the part --onpart command cannot be used on a logical partition. B.5.4. zerombr The zerombr Kickstart command is optional. zerombr initializes any invalid partition tables that are found on disks and destroys all of the contents of disks with invalid partition tables. This command is required when performing an installation on a 64-bit IBM Z system with unformatted Direct Access Storage Device (DASD) disks; otherwise, the unformatted disks are not formatted and cannot be used during the installation. Use this command only once. Syntax
zerombr
Notes On 64-bit IBM Z, if zerombr is specified, any Direct Access Storage Device (DASD) visible to the installation program which is not already low-level formatted is automatically low-level formatted with dasdfmt. The command also prevents user choice during interactive installations.
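For example, a minimal storage stanza for such a system might read as follows (a sketch; the clearpart and autopart lines are illustrative companions, not requirements of zerombr ):
zerombr
clearpart --all --initlabel
autopart --type=lvm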
If zerombr is not specified and there is at least one unformatted DASD visible to the installation program, a non-interactive Kickstart installation exits unsuccessfully. If zerombr is not specified and there is at least one unformatted DASD visible to the installation program, an interactive installation exits if the user does not agree to format all visible and unformatted DASDs. To circumvent this, only activate those DASDs that you will use during installation. You can always add more DASDs after installation is complete. This command has no options. B.5.5. bootloader The bootloader Kickstart command is required. It specifies how the boot loader should be installed. Use this command only once. Syntax Options --append= - Specifies additional kernel parameters. To specify multiple parameters, separate them with spaces. For example: The rhgb and quiet parameters are automatically added when the plymouth package is installed, even if you do not specify them here or do not use the --append= command at all. To disable this behavior, explicitly disallow installation of plymouth : This option is useful for disabling mechanisms which were implemented to mitigate the Meltdown and Spectre speculative execution vulnerabilities found in most modern processors (CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715). In some cases, these mechanisms may be unnecessary, and keeping them enabled causes decreased performance with no improvement in security. To disable these mechanisms, add the options to do so into your Kickstart file - for example, bootloader --append="nopti noibrs noibpb" on AMD64/Intel 64 systems. Warning Ensure your system is not at risk of attack before disabling any of the vulnerability mitigation mechanisms. See the Red Hat vulnerability response article for information about the Meltdown and Spectre vulnerabilities. --boot-drive= - Specifies which drive the boot loader should be written to, and therefore which drive the computer will boot from. If you use a multipath device as the boot drive, specify the device using its disk/by-id/dm-uuid-mpath-WWID name. Important The --boot-drive= option is currently being ignored in Red Hat Enterprise Linux installations on 64-bit IBM Z systems using the zipl boot loader. When zipl is installed, it determines the boot drive on its own. --leavebootorder - The installation program will add Red Hat Enterprise Linux 8 to the top of the list of installed systems in the boot loader, and preserve all existing entries as well as their order. Important This option is applicable for Power systems only and UEFI systems should not use this option. --driveorder= - Specifies which drive is first in the BIOS boot order. For example: --location= - Specifies where the boot record is written. Valid values are the following: mbr - The default option. Depends on whether the drive uses the Master Boot Record (MBR) or GUID Partition Table (GPT) scheme: On a GPT-formatted disk, this option installs stage 1.5 of the boot loader into the BIOS boot partition. On an MBR-formatted disk, stage 1.5 is installed into the empty space between the MBR and the first partition. partition - Install the boot loader on the first sector of the partition containing the kernel. none - Do not install the boot loader. In most cases, this option does not need to be specified. --nombr - Do not install the boot loader to the MBR. --password= - If using GRUB, sets the boot loader password to the one specified with this option. 
This should be used to restrict access to the GRUB shell, where arbitrary kernel options can be passed. If a password is specified, GRUB also asks for a user name. The user name is always root . --iscrypted - Normally, when you specify a boot loader password using the --password= option, it is stored in the Kickstart file in plain text. If you want to encrypt the password, use this option and an encrypted password. To generate an encrypted password, use the grub2-mkpasswd-pbkdf2 command, enter the password you want to use, and copy the command's output (the hash starting with grub.pbkdf2 ) into the Kickstart file. An example bootloader Kickstart entry with an encrypted password looks similar to the following: --timeout= - Specifies the amount of time the boot loader waits before booting the default option (in seconds). --default= - Sets the default boot image in the boot loader configuration. --extlinux - Use the extlinux boot loader instead of GRUB. This option only works on systems supported by extlinux. --disabled - This option is a stronger version of --location=none . While --location=none simply disables boot loader installation, --disabled disables boot loader installation and also disables installation of the package containing the boot loader, thus saving space. Notes Red Hat recommends setting up a boot loader password on every system. An unprotected boot loader can allow a potential attacker to modify the system's boot options and gain unauthorized access to the system. In some cases, a special partition is required to install the boot loader on AMD64, Intel 64, and 64-bit ARM systems. The type and size of this partition depends on whether the disk you are installing the boot loader to uses the Master Boot Record (MBR) or a GUID Partition Table (GPT) schema. For more information, see the Configuring boot loader section. The sdX (or /dev/sdX ) format does not guarantee consistent device names across reboots, which can complicate the usage of some Kickstart commands. When a command requires a device node name, you can use any item from /dev/disk as an alternative. For example, instead of using the following device name: You can use an entry similar to one of the following: By using this approach, the command always targets the same storage device. This is especially useful in large storage environments. To explore the available device names on the system, you can use the ls -lR /dev/disk command during the interactive installation. For more information about different ways to consistently refer to storage devices, see Overview of persistent naming attributes . The --upgrade option is deprecated in Red Hat Enterprise Linux 8. B.5.6. autopart The autopart Kickstart command is optional. It automatically creates partitions. The automatically created partitions are: a root ( / ) partition (1 GiB or larger), a swap partition, and an appropriate /boot partition for the architecture. On large enough drives (50 GiB and larger), this also creates a /home partition. Use this command only once. Syntax Options --type= - Selects one of the predefined automatic partitioning schemes you want to use. Accepts the following values: lvm : The LVM partitioning scheme. plain : Regular partitions with no LVM. thinp : The LVM Thin Provisioning partitioning scheme. --fstype= - Selects one of the available file system types. The available values are ext2 , ext3 , ext4 , xfs , and vfat . The default file system is xfs . --nohome - Disables automatic creation of the /home partition. 
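For example, the following one-line sketch combines the LVM partitioning scheme with this option:
autopart --type=lvm --nohome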
--nolvm - Do not use LVM for automatic partitioning. This option is equal to --type=plain . --noboot - Do not create a /boot partition. --noswap - Do not create a swap partition. --encrypted - Encrypts all partitions with Linux Unified Key Setup (LUKS). This is equivalent to checking the Encrypt partitions check box on the initial partitioning screen during a manual graphical installation. Note When encrypting one or more partitions, Anaconda attempts to gather 256 bits of entropy to ensure the partitions are encrypted securely. Gathering entropy can take some time - the process will stop after a maximum of 10 minutes, regardless of whether sufficient entropy has been gathered. The process can be sped up by interacting with the installation system (typing on the keyboard or moving the mouse). If you are installing in a virtual machine, you can also attach a virtio-rng device (a virtual random number generator) to the guest. --luks-version= LUKS_VERSION - Specifies which version of LUKS format should be used to encrypt the filesystem. This option is only meaningful if --encrypted is specified. --passphrase= - Provides a default system-wide passphrase for all encrypted devices. --escrowcert= URL_of_X.509_certificate - Stores data encryption keys of all encrypted volumes as files in /root , encrypted using the X.509 certificate from the URL specified with URL_of_X.509_certificate . The keys are stored as a separate file for each encrypted volume. This option is only meaningful if --encrypted is specified. --backuppassphrase - Adds a randomly-generated passphrase to each encrypted volume. Store these passphrases in separate files in /root , encrypted using the X.509 certificate specified with --escrowcert . This option is only meaningful if --escrowcert is specified. --cipher= - Specifies the type of encryption to use if the Anaconda default aes-xts-plain64 is not satisfactory. You must use this option together with the --encrypted option; by itself it has no effect. Available types of encryption are listed in the Security hardening document, but Red Hat strongly recommends using either aes-xts-plain64 or aes-cbc-essiv:sha256 . --pbkdf= PBKDF - Sets Password-Based Key Derivation Function (PBKDF) algorithm for LUKS keyslot. See also the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified. --pbkdf-memory= PBKDF_MEMORY - Sets the memory cost for PBKDF. See also the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified. --pbkdf-time= PBKDF_TIME - Sets the number of milliseconds to spend with PBKDF passphrase processing. See also --iter-time in the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified, and is mutually exclusive with --pbkdf-iterations . --pbkdf-iterations= PBKDF_ITERATIONS - Sets the number of iterations directly and avoids PBKDF benchmark. See also --pbkdf-force-iterations in the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified, and is mutually exclusive with --pbkdf-time . Notes The autopart option cannot be used together with the part/partition , raid , logvol , or volgroup options in the same Kickstart file. The autopart command is not mandatory, but you must include it if there are no part or mount commands in your Kickstart script. It is recommended to use the autopart --nohome Kickstart option when installing on a single FBA DASD of the CMS type. This ensures that the installation program does not create a separate /home partition. 
The installation then proceeds successfully. If you lose the LUKS passphrase, any encrypted partitions and their data are completely inaccessible. There is no way to recover a lost passphrase. However, you can save encryption passphrases with the --escrowcert option and create backup encryption passphrases with the --backuppassphrase option. Ensure that the disk sector sizes are consistent when using autopart , autopart --type=lvm , or autopart --type=thinp . B.5.7. reqpart The reqpart Kickstart command is optional. It automatically creates partitions required by your hardware platform. These include a /boot/efi partition for systems with UEFI firmware, a biosboot partition for systems with BIOS firmware and GPT, and a PRePBoot partition for IBM Power Systems. Use this command only once. Syntax
reqpart [--add-boot]
Options --add-boot - Creates a separate /boot partition in addition to the platform-specific partition created by the base command. Note This command cannot be used together with autopart , because autopart does everything the reqpart command does and, in addition, creates other partitions or logical volumes such as / and swap . In contrast with autopart , this command only creates platform-specific partitions and leaves the rest of the drive empty, allowing you to create a custom layout. B.5.8. part or partition The part or partition Kickstart command is required. It creates a partition on the system. Syntax
part|partition mntpoint [ OPTIONS ]
Options mntpoint - Where the partition is mounted. The value must take one of the following forms: / path For example, / , /usr , /home swap The partition is used as swap space. To determine the size of the swap partition automatically, use the --recommended option:
swap --recommended
The size assigned will be effective but not precisely calibrated for your system. To determine the size of the swap partition automatically but also allow extra space for your system to hibernate, use the --hibernation option:
swap --hibernation
The size assigned will be equivalent to the swap space assigned by --recommended plus the amount of RAM on your system. For the swap sizes assigned by these commands, see Recommended Partitioning Scheme for AMD64, Intel 64, and 64-bit ARM systems. raid. id The partition is used for software RAID (see raid ). pv. id The partition is used for LVM (see logvol ). biosboot The partition will be used for a BIOS Boot partition. A 1 MiB BIOS boot partition is necessary on BIOS-based AMD64 and Intel 64 systems using a GUID Partition Table (GPT); the boot loader will be installed into it. It is not necessary on UEFI systems. See also the bootloader command. /boot/efi An EFI System Partition. A 50 MiB EFI partition is necessary on UEFI-based AMD64, Intel 64, and 64-bit ARM; the recommended size is 200 MiB. It is not necessary on BIOS systems. See also the bootloader command. --size= - The minimum partition size in MiB. Specify an integer value here such as 500 (do not include the unit). Installation fails if the specified size is too small. Set the --size value as the minimum amount of space you require. For size recommendations, see Recommended Partitioning Scheme . --grow - Specifies the partition to grow to fill available space (if any), or up to the maximum size setting, if one is specified. If you use --grow= without setting --maxsize= on a swap partition, Anaconda limits the maximum size of the swap partition. For systems that have less than 2 GiB of physical memory, the imposed limit is twice the amount of physical memory. For systems with more than 2 GiB, the imposed limit is the size of physical memory plus 2 GiB.
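For example, the following sketch (the disk name sda and all sizes are illustrative assumptions) creates a fixed /boot partition and a root partition that starts at 8192 MiB and grows into the remaining free space, capped by the --maxsize= option described next:
part /boot --fstype=xfs --size=1024 --ondisk=sda
part / --fstype=xfs --size=8192 --grow --maxsize=20480 --ondisk=sda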
--maxsize= - The maximum partition size in MiB when the partition is set to grow. Specify an integer value here such as 500 (do not include the unit). --noformat - Specifies that the partition should not be formatted, for use with the --onpart command. --onpart= or --usepart= - Specifies the device on which to place the partition. Uses an existing blank device and formats it to the new specified type. For example:
part /home --onpart=hda1
puts /home on /dev/hda1 . These options can also add a partition to a logical volume. For example:
part pv.1 --onpart=hda2
The device must already exist on the system; the --onpart option will not create it. It is also possible to specify an entire drive, rather than a partition, in which case Anaconda will format and use the drive without creating a partition table. However, installation of GRUB is not supported on a device formatted in this way, and it must be placed on a drive with a partition table. --ondisk= or --ondrive= - Creates a partition (specified by the part command) on an existing disk. This command always creates a partition. Forces the partition to be created on a particular disk. For example, --ondisk=sdb puts the partition on the second SCSI disk on the system. To specify a multipath device that does not use logical volume manager (LVM), use the format disk/by-id/dm-uuid-mpath- WWID , where WWID is the world-wide identifier for the device. For example, to specify a disk with WWID 2416CD96995134CA5D787F00A5AA11017 , use:
part / --fstype=xfs --grow --asprimary --size=8192 --ondisk=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017
Warning Never specify multipath devices by device names like mpatha . Device names such as this are not specific to a particular disk. The disk named /dev/mpatha during installation might not be the one that you expect it to be. Therefore, the part command could target the wrong disk. --asprimary - Forces the partition to be allocated as a primary partition. If the partition cannot be allocated as primary (usually due to too many primary partitions being already allocated), the partitioning process fails. This option only makes sense when the disk uses a Master Boot Record (MBR); for GUID Partition Table (GPT)-labeled disks this option has no meaning. --fsprofile= - Specifies a usage type to be passed to the program that makes a filesystem on this partition. A usage type defines a variety of tuning parameters to be used when making a filesystem. For this option to work, the filesystem must support the concept of usage types and there must be a configuration file that lists valid types. For ext2 , ext3 , ext4 , this configuration file is /etc/mke2fs.conf . --mkfsoptions= - Specifies additional parameters to be passed to the program that makes a filesystem on this partition. This is similar to --fsprofile but works for all filesystems, not just the ones that support the profile concept. No processing is done on the list of arguments, so they must be supplied in a format that can be passed directly to the mkfs program. This means multiple options should be comma-separated or surrounded by double quotes, depending on the filesystem. For details, see the man pages of the filesystems you are creating, for example mkfs.ext4 or mkfs.xfs . --fstype= - Sets the file system type for the partition. Valid values are xfs , ext2 , ext3 , ext4 , swap , vfat , efi , and biosboot . --fsoptions - Specifies a free form string of options to be used when mounting the filesystem. This string will be copied into the /etc/fstab file of the installed system and should be enclosed in quotes.
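For example, a sketch that mounts /home with illustrative mount options:
part /home --fstype=ext4 --size=4096 --fsoptions="nodev,nosuid"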
Note In the EFI system partition ( /boot/efi ), anaconda hard codes the value and ignores the users specified --fsoptions values. --label= - assign a label to an individual partition. --recommended - Determine the size of the partition automatically. For details about the recommended scheme, see Recommended Partitioning Scheme for AMD64, Intel 64, and 64-bit ARM. This option can only be used for partitions which result in a file system such as the /boot partition and swap space. It cannot be used to create LVM physical volumes or RAID members. --onbiosdisk - Forces the partition to be created on a particular disk as discovered by the BIOS. --encrypted - Specifies that this partition should be encrypted with Linux Unified Key Setup (LUKS), using the passphrase provided in the --passphrase option. If you do not specify a passphrase, Anaconda uses the default, system-wide passphrase set with the autopart --passphrase command, or stops the installation and prompts you to provide a passphrase if no default is set. Note When encrypting one or more partitions, Anaconda attempts to gather 256 bits of entropy to ensure the partitions are encrypted securely. Gathering entropy can take some time - the process will stop after a maximum of 10 minutes, regardless of whether sufficient entropy has been gathered. The process can be sped up by interacting with the installation system (typing on the keyboard or moving the mouse). If you are installing in a virtual machine, you can also attach a virtio-rng device (a virtual random number generator) to the guest. --luks-version= LUKS_VERSION - Specifies which version of LUKS format should be used to encrypt the filesystem. This option is only meaningful if --encrypted is specified. --passphrase= - Specifies the passphrase to use when encrypting this partition. You must use this option together with the --encrypted option; by itself it has no effect. --cipher= - Specifies the type of encryption to use if the Anaconda default aes-xts-plain64 is not satisfactory. You must use this option together with the --encrypted option; by itself it has no effect. Available types of encryption are listed in the Security hardening document, but Red Hat strongly recommends using either aes-xts-plain64 or aes-cbc-essiv:sha256 . --escrowcert= URL_of_X.509_certificate - Store data encryption keys of all encrypted partitions as files in /root , encrypted using the X.509 certificate from the URL specified with URL_of_X.509_certificate . The keys are stored as a separate file for each encrypted partition. This option is only meaningful if --encrypted is specified. --backuppassphrase - Add a randomly-generated passphrase to each encrypted partition. Store these passphrases in separate files in /root , encrypted using the X.509 certificate specified with --escrowcert . This option is only meaningful if --escrowcert is specified. --pbkdf= PBKDF - Sets Password-Based Key Derivation Function (PBKDF) algorithm for LUKS keyslot. See also the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified. --pbkdf-memory= PBKDF_MEMORY - Sets the memory cost for PBKDF. See also the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified. --pbkdf-time= PBKDF_TIME - Sets the number of milliseconds to spend with PBKDF passphrase processing. See also --iter-time in the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified, and is mutually exclusive with --pbkdf-iterations . 
--pbkdf-iterations= PBKDF_ITERATIONS - Sets the number of iterations directly and avoids PBKDF benchmark. See also --pbkdf-force-iterations in the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified, and is mutually exclusive with --pbkdf-time . --resize= - Resize an existing partition. When using this option, specify the target size (in MiB) using the --size= option and the target partition using the --onpart= option. Notes The part command is not mandatory, but you must include either part , autopart or mount in your Kickstart script. The --active option is deprecated in Red Hat Enterprise Linux 8. If partitioning fails for any reason, diagnostic messages appear on virtual console 3. All partitions created are formatted as part of the installation process unless --noformat and --onpart are used. The sdX (or /dev/sdX ) format does not guarantee consistent device names across reboots, which can complicate the usage of some Kickstart commands. When a command requires a device node name, you can use any item from /dev/disk as an alternative. For example, instead of using the following device name: You can use an entry similar to one of the following: By using this approach, the command always targets the same storage device. This is especially useful in large storage environments. To explore the available device names on the system, you can use the ls -lR /dev/disk command during the interactive installation. For more information about different ways to consistently refer to storage devices, see Overview of persistent naming attributes . If you lose the LUKS passphrase, any encrypted partitions and their data is completely inaccessible. There is no way to recover a lost passphrase. However, you can save encryption passphrases with the --escrowcert and create backup encryption passphrases with the --backuppassphrase options. B.5.9. raid The raid Kickstart command is optional. It assembles a software RAID device. Syntax Options mntpoint - Location where the RAID file system is mounted. If it is / , the RAID level must be 1 unless a boot partition ( /boot ) is present. If a boot partition is present, the /boot partition must be level 1 and the root ( / ) partition can be any of the available types. The partitions* (which denotes that multiple partitions can be listed) lists the RAID identifiers to add to the RAID array. Important On IBM Power Systems, if a RAID device has been prepared and has not been reformatted during the installation, ensure that the RAID metadata version is 0.90 or 1.0 if you intend to put the /boot and PReP partitions on the RAID device. The mdadm metadata versions 1.1 and 1.2 are not supported for the /boot and PReP partitions. The PReP Boot partitions are not required on PowerNV systems. --level= - RAID level to use (0, 1, 4, 5, 6, or 10). --device= - Name of the RAID device to use - for example, --device=root . Important Do not use mdraid names in the form of md0 - these names are not guaranteed to be persistent. Instead, use meaningful names such as root or swap . Using meaningful names creates a symbolic link from /dev/md/ name to whichever /dev/md X node is assigned to the array. If you have an old (v0.90 metadata) array that you cannot assign a name to, you can specify the array by a filesystem label or UUID. For example, --device=LABEL=root or --device=UUID=93348e56-4631-d0f0-6f5b-45c47f570b88 . You can use the UUID of the file system on the RAID device or UUID of the RAID device itself. 
The UUID of the RAID device should be in the 8-4-4-4-12 format. UUID reported by mdadm is in the 8:8:8:8 format which needs to be changed. For example 93348e56:4631d0f0:6f5b45c4:7f570b88 should be changed to 93348e56-4631-d0f0-6f5b-45c47f570b88 . --chunksize= - Sets the chunk size of a RAID storage in KiB. In certain situations, using a different chunk size than the default ( 512 Kib ) can improve the performance of the RAID. --spares= - Specifies the number of spare drives allocated for the RAID array. Spare drives are used to rebuild the array in case of drive failure. --fsprofile= - Specifies a usage type to be passed to the program that makes a filesystem on this partition. A usage type defines a variety of tuning parameters to be used when making a filesystem. For this option to work, the filesystem must support the concept of usage types and there must be a configuration file that lists valid types. For ext2, ext3, and ext4, this configuration file is /etc/mke2fs.conf . --fstype= - Sets the file system type for the RAID array. Valid values are xfs , ext2 , ext3 , ext4 , swap , and vfat . --fsoptions= - Specifies a free form string of options to be used when mounting the filesystem. This string will be copied into the /etc/fstab file of the installed system and should be enclosed in quotes. In the EFI system partition ( /boot/efi ), anaconda hard codes the value and ignores the users specified --fsoptions values. --mkfsoptions= - Specifies additional parameters to be passed to the program that makes a filesystem on this partition. No processing is done on the list of arguments, so they must be supplied in a format that can be passed directly to the mkfs program. This means multiple options should be comma-separated or surrounded by double quotes, depending on the filesystem. For example, For details, see the man pages of the filesystems you are creating. For example, mkfs.ext4 or mkfs.xfs . --label= - Specify the label to give to the filesystem to be made. If the given label is already in use by another filesystem, a new label will be created. --noformat - Use an existing RAID device and do not format the RAID array. --useexisting - Use an existing RAID device and reformat it. --encrypted - Specifies that this RAID device should be encrypted with Linux Unified Key Setup (LUKS), using the passphrase provided in the --passphrase option. If you do not specify a passphrase, Anaconda uses the default, system-wide passphrase set with the autopart --passphrase command, or stops the installation and prompts you to provide a passphrase if no default is set. Note When encrypting one or more partitions, Anaconda attempts to gather 256 bits of entropy to ensure the partitions are encrypted securely. Gathering entropy can take some time - the process will stop after a maximum of 10 minutes, regardless of whether sufficient entropy has been gathered. The process can be sped up by interacting with the installation system (typing on the keyboard or moving the mouse). If you are installing in a virtual machine, you can also attach a virtio-rng device (a virtual random number generator) to the guest. --luks-version= LUKS_VERSION - Specifies which version of LUKS format should be used to encrypt the filesystem. This option is only meaningful if --encrypted is specified. --cipher= - Specifies the type of encryption to use if the Anaconda default aes-xts-plain64 is not satisfactory. You must use this option together with the --encrypted option; by itself it has no effect. 
Available types of encryption are listed in the Security hardening document, but Red Hat strongly recommends using either aes-xts-plain64 or aes-cbc-essiv:sha256 . --passphrase= - Specifies the passphrase to use when encrypting this RAID device. You must use this option together with the --encrypted option; by itself it has no effect. --escrowcert= URL_of_X.509_certificate - Store the data encryption key for this device in a file in /root , encrypted using the X.509 certificate from the URL specified with URL_of_X.509_certificate . This option is only meaningful if --encrypted is specified. --backuppassphrase - Add a randomly-generated passphrase to this device. Store the passphrase in a file in /root , encrypted using the X.509 certificate specified with --escrowcert . This option is only meaningful if --escrowcert is specified. --pbkdf= PBKDF - Sets Password-Based Key Derivation Function (PBKDF) algorithm for LUKS keyslot. See also the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified. --pbkdf-memory= PBKDF_MEMORY - Sets the memory cost for PBKDF. See also the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified. --pbkdf-time= PBKDF_TIME - Sets the number of milliseconds to spend with PBKDF passphrase processing. See also --iter-time in the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified, and is mutually exclusive with --pbkdf-iterations . --pbkdf-iterations= PBKDF_ITERATIONS - Sets the number of iterations directly and avoids PBKDF benchmark. See also --pbkdf-force-iterations in the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified, and is mutually exclusive with --pbkdf-time . Example The following example shows how to create a RAID level 1 partition for / , and a RAID level 5 for /home , assuming there are three SCSI disks on the system. It also creates three swap partitions, one on each drive. A reconstructed sketch matching that description (the sizes and device names are illustrative):
part raid.01 --size=6000 --ondisk=sda
part raid.02 --size=6000 --ondisk=sdb
part raid.03 --size=6000 --ondisk=sdc
part swap --size=512 --ondisk=sda
part swap --size=512 --ondisk=sdb
part swap --size=512 --ondisk=sdc
part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
part raid.13 --size=1 --grow --ondisk=sdc
raid / --level=1 --device=root raid.01 raid.02 raid.03
raid /home --level=5 --device=home raid.11 raid.12 raid.13
Note If you lose the LUKS passphrase, any encrypted partitions and their data are completely inaccessible. There is no way to recover a lost passphrase. However, you can save encryption passphrases with the --escrowcert option and create backup encryption passphrases with the --backuppassphrase option. B.5.10. volgroup The volgroup Kickstart command is optional. It creates a Logical Volume Manager (LVM) group. Syntax
volgroup name [ partition *] [ options ]
Mandatory options name - Name of the new volume group. Options partition - Physical volume partitions to use as backing storage for the volume group. --noformat - Use an existing volume group and do not format it. --useexisting - Use an existing volume group and reformat it. If you use this option, do not specify a partition . For example:
volgroup rhel00 --useexisting --noformat
--pesize= - Set the size of the volume group's physical extents in KiB. The default value is 4096 (4 MiB), and the minimum value is 1024 (1 MiB). --reserved-space= - Specify an amount of space to leave unused in a volume group in MiB. Applicable only to newly created volume groups. --reserved-percent= - Specify a percentage of total volume group space to leave unused. Applicable only to newly created volume groups. Notes Create the partition first, then create the logical volume group, and then create the logical volume. For example:
part pv.01 --size 3000
volgroup myvg pv.01
logvol / --vgname=myvg --size=2000 --name=rootvol
Do not use the dash ( - ) character in logical volume and volume group names when installing Red Hat Enterprise Linux using Kickstart.
If this character is used, the installation finishes normally, but the /dev/mapper/ directory will list these volumes and volume groups with every dash doubled. For example, a volume group named volgrp-01 containing a logical volume named logvol-01 will be listed as /dev/mapper/volgrp--01-logvol--01 . This limitation only applies to newly created logical volume and volume group names. If you are reusing existing ones using the --noformat option, their names will not be changed. B.5.11. logvol The logvol Kickstart command is optional. It creates a logical volume for Logical Volume Manager (LVM). Syntax
logvol mntpoint --vgname= name --name= name [ OPTIONS ]
Mandatory options mntpoint The mount point where the partition is mounted. Must take one of the following forms: / path For example, / or /home swap The partition is used as swap space. To determine the size of the swap partition automatically, use the --recommended option:
swap --recommended
To determine the size of the swap partition automatically and also allow extra space for your system to hibernate, use the --hibernation option:
swap --hibernation
The size assigned will be equivalent to the swap space assigned by --recommended plus the amount of RAM on your system. For the swap sizes assigned by these commands, see Recommended Partitioning Scheme for AMD64, Intel 64, and 64-bit ARM systems. --vgname= name Name of the volume group. --name= name Name of the logical volume. Optional options --noformat Use an existing logical volume and do not format it. --useexisting Use an existing logical volume and reformat it. --fstype= Sets the file system type for the logical volume. Valid values are xfs , ext2 , ext3 , ext4 , swap , and vfat . --fsoptions= Specifies a free form string of options to be used when mounting the filesystem. This string will be copied into the /etc/fstab file of the installed system and should be enclosed in quotes. Note In the EFI system partition ( /boot/efi ), anaconda hard codes the value and ignores any user-specified --fsoptions values. --mkfsoptions= Specifies additional parameters to be passed to the program that makes a filesystem on this partition. No processing is done on the list of arguments, so they must be supplied in a format that can be passed directly to the mkfs program. This means multiple options should be comma-separated or surrounded by double quotes, depending on the filesystem. For details, see the man pages of the filesystems you are creating, for example mkfs.ext4 or mkfs.xfs . --fsprofile= Specifies a usage type to be passed to the program that makes a filesystem on this partition. A usage type defines a variety of tuning parameters to be used when making a filesystem. For this option to work, the filesystem must support the concept of usage types and there must be a configuration file that lists valid types. For ext2 , ext3 , and ext4 , this configuration file is /etc/mke2fs.conf . --label= Sets a label for the logical volume. --grow Extends the logical volume to occupy the available space (if any), or up to the maximum size specified, if any. Use the option only if you have pre-allocated a minimum storage space in the disk image and want the volume to grow and occupy the available space. In a physical environment, this is a one-time action. However, in a virtual environment, the volume size increases as the virtual machine writes any data to the virtual disk. --size= The size of the logical volume in MiB. This option cannot be used together with the --percent= option.
--percent= The size of the logical volume, as a percentage of the free space in the volume group after any statically-sized logical volumes are taken into account. This option cannot be used together with the --size= option. Important When creating a new logical volume, you must either specify its size statically using the --size= option, or as a percentage of remaining free space using the --percent= option. You cannot use both of these options on the same logical volume. --maxsize= The maximum size in MiB when the logical volume is set to grow. Specify an integer value here such as 500 (do not include the unit). --recommended Use this option when creating a logical volume to determine the size of this volume automatically, based on your system's hardware. For details about the recommended scheme, see Recommended Partitioning Scheme for AMD64, Intel 64, and 64-bit ARM systems. --resize Resize a logical volume. If you use this option, you must also specify --useexisting and --size . --encrypted Specifies that this logical volume should be encrypted with Linux Unified Key Setup (LUKS), using the passphrase provided in the --passphrase= option. If you do not specify a passphrase, the installation program uses the default, system-wide passphrase set with the autopart --passphrase command, or stops the installation and prompts you to provide a passphrase if no default is set. Note When encrypting one or more partitions, Anaconda attempts to gather 256 bits of entropy to ensure the partitions are encrypted securely. Gathering entropy can take some time - the process will stop after a maximum of 10 minutes, regardless of whether sufficient entropy has been gathered. The process can be sped up by interacting with the installation system (typing on the keyboard or moving the mouse). If you are installing in a virtual machine, you can also attach a virtio-rng device (a virtual random number generator) to the guest. --passphrase= Specifies the passphrase to use when encrypting this logical volume. You must use this option together with the --encrypted option; it has no effect by itself. --cipher= Specifies the type of encryption to use if the Anaconda default aes-xts-plain64 is not satisfactory. You must use this option together with the --encrypted option; by itself it has no effect. Available types of encryption are listed in the Security hardening document, but Red Hat strongly recommends using either aes-xts-plain64 or aes-cbc-essiv:sha256 . --escrowcert= URL_of_X.509_certificate Store data encryption keys of all encrypted volumes as files in /root , encrypted using the X.509 certificate from the URL specified with URL_of_X.509_certificate . The keys are stored as a separate file for each encrypted volume. This option is only meaningful if --encrypted is specified. --luks-version= LUKS_VERSION Specifies which version of LUKS format should be used to encrypt the filesystem. This option is only meaningful if --encrypted is specified. --backuppassphrase Add a randomly-generated passphrase to each encrypted volume. Store these passphrases in separate files in /root , encrypted using the X.509 certificate specified with --escrowcert . This option is only meaningful if --escrowcert is specified. --pbkdf= PBKDF Sets Password-Based Key Derivation Function (PBKDF) algorithm for LUKS keyslot. See also the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified. --pbkdf-memory= PBKDF_MEMORY Sets the memory cost for PBKDF. See also the man page cryptsetup(8) . 
This option is only meaningful if --encrypted is specified. --pbkdf-time= PBKDF_TIME Sets the number of milliseconds to spend with PBKDF passphrase processing. See also --iter-time in the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified, and is mutually exclusive with --pbkdf-iterations . --pbkdf-iterations= PBKDF_ITERATIONS Sets the number of iterations directly and avoids PBKDF benchmark. See also --pbkdf-force-iterations in the man page cryptsetup(8) . This option is only meaningful if --encrypted is specified, and is mutually exclusive with --pbkdf-time . --thinpool Creates a thin pool logical volume. (Use a mount point of none .) --metadatasize= size Specify the metadata area size (in MiB) for a new thin pool device. --chunksize= size Specify the chunk size (in KiB) for a new thin pool device. --thin Create a thin logical volume. (Requires use of --poolname .) --poolname= name Specify the name of the thin pool in which to create a thin logical volume. Requires the --thin option. --profile= name Specify the configuration profile name to use with thin logical volumes. If used, the name will also be included in the metadata for the given logical volume. By default, the available profiles are default and thin-performance and are defined in the /etc/lvm/profile/ directory. See the lvm(8) man page for additional information. --cachepvs= A comma-separated list of physical volumes which should be used as a cache for this volume. --cachemode= Specify which mode should be used to cache this logical volume - either writeback or writethrough . Note For more information about cached logical volumes and their modes, see the lvmcache(7) man page on your system. --cachesize= Size of cache attached to the logical volume, specified in MiB. This option requires the --cachepvs= option. Notes Do not use the dash ( - ) character in logical volume and volume group names when installing Red Hat Enterprise Linux using Kickstart. If this character is used, the installation finishes normally, but the /dev/mapper/ directory will list these volumes and volume groups with every dash doubled. For example, a volume group named volgrp-01 containing a logical volume named logvol-01 will be listed as /dev/mapper/volgrp--01-logvol--01 . This limitation only applies to newly created logical volume and volume group names. If you are reusing existing ones using the --noformat option, their names will not be changed. If you lose the LUKS passphrase, any encrypted partitions and their data are completely inaccessible. There is no way to recover a lost passphrase. However, you can save encryption passphrases with the --escrowcert option and create backup encryption passphrases with the --backuppassphrase option. Examples Create the partition first, create the logical volume group, and then create the logical volume:
part pv.01 --size 3000
volgroup myvg pv.01
logvol / --vgname=myvg --size=2000 --name=rootvol
Create the partition first, create the logical volume group, and then create the logical volume to occupy 90% of the remaining space in the volume group:
part pv.01 --size 1 --grow
volgroup myvg pv.01
logvol / --vgname=myvg --name=rootvol --percent=90
Additional resources Configuring and managing logical volumes B.5.12. snapshot The snapshot Kickstart command is optional. Use it to create LVM thin volume snapshots during the installation process. This enables you to back up a logical volume before or after the installation. To create multiple snapshots, add the snapshot Kickstart command multiple times. Syntax
snapshot vg_name/lv_name --name= snapshot_name --when= pre-install|post-install
Options vg_name / lv_name - Sets the name of the volume group and logical volume to create the snapshot from. --name= snapshot_name - Sets the name of the snapshot.
This name must be unique within the volume group. --when= pre-install|post-install - Sets whether the snapshot is created before the installation begins or after the installation is completed. B.5.13. mount The mount Kickstart command is optional. It assigns a mount point to an existing block device, and optionally reformats it to a given format. Syntax
mount [ OPTIONS ] device mountpoint
Mandatory options: device - The block device to mount. mountpoint - Where to mount the device . It must be a valid mount point, such as / or /usr , or none if the device is unmountable (for example swap ). Optional options: --reformat= - Specifies a new format (such as ext4 ) to which the device should be reformatted. --mkfsoptions= - Specifies additional options to be passed to the command which creates the new file system specified in --reformat= . The list of options provided here is not processed, so they must be specified in a format that can be passed directly to the mkfs program. The list of options should be either comma-separated or surrounded by double quotes, depending on the file system. See the mkfs man page for the file system you want to create (for example mkfs.ext4(8) or mkfs.xfs(8) ) for specific details. --mountoptions= - Specifies a free form string that contains options to be used when mounting the file system. The string will be copied to the /etc/fstab file on the installed system and should be enclosed in double quotes. See the mount(8) man page for a full list of mount options, and fstab(5) for basics. Notes Unlike most other storage configuration commands in Kickstart, mount does not require you to describe the entire storage configuration in the Kickstart file. You only need to ensure that the described block device exists on the system. However, if you want to create the storage stack with all the devices mounted, you must use other commands such as part to do so. You cannot use mount together with other storage-related commands such as part , logvol , or autopart in the same Kickstart file. B.5.14. zipl The zipl Kickstart command is optional. It specifies the ZIPL configuration for 64-bit IBM Z. Use this command only once. Options --secure-boot - Enables secure boot if it is supported by the installing system. Note When installed on a system that is later than IBM z14, the installed system cannot be booted from an IBM z14 or earlier model. --force-secure-boot - Enables secure boot unconditionally. Note Installation is not supported on IBM z14 and earlier models. --no-secure-boot - Disables secure boot. Note Secure Boot is not supported on IBM z14 and earlier models. Use --no-secure-boot if you intend to boot the installed system on IBM z14 and earlier models. B.5.15. fcoe The fcoe Kickstart command is optional. It specifies which FCoE devices should be activated automatically in addition to those discovered by Enhanced Disk Drive Services (EDD). Syntax
fcoe --nic= name [ OPTIONS ]
Options --nic= (required) - The name of the device to be activated. --dcb= - Establish Data Center Bridging (DCB) settings. --autovlan - Discover VLANs automatically. This option is enabled by default. B.5.16. iscsi The iscsi Kickstart command is optional. It specifies additional iSCSI storage to be attached during installation. Syntax
iscsi --ipaddr= address [ OPTIONS ]
Mandatory options --ipaddr= (required) - the IP address of the target to connect to. Optional options --port= - the port number. If not specified, --port=3260 is used by default. --target= - the target IQN (iSCSI Qualified Name).
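For example, the following sketch attaches a hypothetical target (the address and IQN are illustrative):
iscsi --ipaddr=192.168.250.109 --port=3260 --target=iqn.2009-02.com.example:for.all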
--iface= - bind the connection to a specific network interface instead of using the default one determined by the network layer. Once used, it must be specified in all instances of the iscsi command in the entire Kickstart file. --user= - the user name required to authenticate with the target --password= - the password that corresponds with the user name specified for the target --reverse-user= - the user name required to authenticate with the initiator from a target that uses reverse CHAP authentication --reverse-password= - the password that corresponds with the user name specified for the initiator Notes If you use the iscsi command, you must also assign a name to the iSCSI node, using the iscsiname command. The iscsiname command must appear before the iscsi command in the Kickstart file. Wherever possible, configure iSCSI storage in the system BIOS or firmware (iBFT for Intel systems) rather than use the iscsi command. Anaconda automatically detects and uses disks configured in BIOS or firmware and no special configuration is necessary in the Kickstart file. If you must use the iscsi command, ensure that networking is activated at the beginning of the installation, and that the iscsi command appears in the Kickstart file before you refer to iSCSI disks with commands such as clearpart or ignoredisk . B.5.17. iscsiname The iscsiname Kickstart command is optional. It assigns a name to an iSCSI node specified by the iscsi command. Use this command only once. Syntax Options iqname - Name to assign to the iSCSI node. Note If you use the iscsi command in your Kickstart file, you must specify iscsiname earlier in the Kickstart file. B.5.18. nvdimm The nvdimm Kickstart command is optional. It performs an action on Non-Volatile Dual In-line Memory Module (NVDIMM) devices. By default, NVDIMM devices are ignored by the installation program. You must use the nvdimm command to enable installation on these devices. Syntax Actions reconfigure - Reconfigure a specific NVDIMM device into a given mode. Additionally, the specified device is implicitly marked as to be used, so a subsequent nvdimm use command for the same device is redundant. This action uses the following format: --namespace= - The device specification by namespace. For example: --mode= - The mode specification. Currently, only the value sector is available. --sectorsize= - Size of a sector for sector mode. For example: The supported sector sizes are 512 and 4096 bytes. use - Specify a NVDIMM device as a target for installation. The device must be already configured to the sector mode by the nvdimm reconfigure command. This action uses the following format: --namespace= - Specifies the device by namespace. For example: --blockdevs= - Specifies a comma-separated list of block devices corresponding to the NVDIMM devices to be used. The asterisk * wildcard is supported. For example: B.5.19. zfcp The zfcp Kickstart command is optional. It defines a Fibre channel device. This option only applies on 64-bit IBM Z. All of the options described below must be specified. Syntax Options --devnum= - The device number (zFCP adapter device bus ID). --wwpn= - The device's World Wide Port Name (WWPN). Takes the form of a 16-digit number, preceded by 0x . --fcplun= - The device's Logical Unit Number (LUN). Takes the form of a 16-digit number, preceded by 0x . Note It is sufficient to specify an FCP device bus ID if automatic LUN scanning is available and when installing 8 or later releases. Otherwise all three parameters are required. 
Automatic LUN scanning is available for FCP devices operating in NPIV mode if it is not disabled through the zfcp.allow_lun_scan module parameter (enabled by default). It provides access to all SCSI devices found in the storage area network attached to the FCP device with the specified bus ID. Example B.6. Kickstart commands for addons supplied with the RHEL installation program The Kickstart commands in this section are related to add-ons supplied by default with the Red Hat Enterprise Linux installation program: Kdump and OpenSCAP. B.6.1. %addon com_redhat_kdump The %addon com_redhat_kdump Kickstart command is optional. This command configures the kdump kernel crash dumping mechanism. Syntax Note The syntax for this command is unusual because it is an add-on rather than a built-in Kickstart command. Notes Kdump is a kernel crash dumping mechanism that allows you to save the contents of the system's memory for later analysis. It relies on kexec , which can be used to boot a Linux kernel from the context of another kernel without rebooting the system, and preserve the contents of the first kernel's memory that would otherwise be lost. In case of a system crash, kexec boots into a second kernel (a capture kernel). This capture kernel resides in a reserved part of the system memory. Kdump then captures the contents of the crashed kernel's memory (a crash dump) and saves it to a specified location. The location cannot be configured using this Kickstart command; it must be configured after the installation by editing the /etc/kdump.conf configuration file. For more information about Kdump, see the Installing kdump . Options --enable - Enable kdump on the installed system. --disable - Disable kdump on the installed system. --reserve-mb= - The amount of memory you want to reserve for kdump, in MiB. For example: You can also specify auto instead of a numeric value. In that case, the installation program will determine the amount of memory automatically based on the criteria described in the Memory requirements for kdump section of the Managing, monitoring and updating the kernel document. If you enable kdump and do not specify a --reserve-mb= option, the value auto will be used. --enablefadump - Enable firmware-assisted dumping on systems which allow it (notably, IBM Power Systems servers). B.6.2. %addon org_fedora_oscap The %addon org_fedora_oscap Kickstart command is optional. The OpenSCAP installation program add-on is used to apply SCAP (Security Content Automation Protocol) content - security policies - on the installed system. This add-on has been enabled by default since Red Hat Enterprise Linux 7.2. When enabled, the packages necessary to provide this functionality will automatically be installed. However, by default, no policies are enforced, meaning that no checks are performed during or after installation unless specifically configured. Important Applying a security policy is not necessary on all systems. This command should only be used when a specific policy is mandated by your organization rules or government regulations. Unlike most other commands, this add-on does not accept regular options, but uses key-value pairs in the body of the %addon definition instead. These pairs are whitespace-agnostic. Values can be optionally enclosed in single quotes ( ' ) or double quotes ( " ). Syntax Keys The following keys are recognized by the add-on: content-type Type of the security content. Possible values are datastream , archive , rpm , and scap-security-guide . 
If the content-type is scap-security-guide , the add-on will use content provided by the scap-security-guide package, which is present on the boot media. This means that all other keys except profile will have no effect. content-url Location of the security content. The content must be accessible using HTTP, HTTPS, or FTP; local storage is currently not supported. A network connection must be available to reach content definitions in a remote location. datastream-id ID of the data stream referenced in the content-url value. Used only if content-type is datastream . xccdf-id ID of the benchmark you want to use. content-path Path to the datastream or the XCCDF file which should be used, given as a relative path in the archive. profile ID of the profile to be applied. Use default to apply the default profile. fingerprint An MD5, SHA1, or SHA2 checksum of the content referenced by content-url . tailoring-path Path to a tailoring file which should be used, given as a relative path in the archive. Examples The following is an example %addon org_fedora_oscap section which uses content from the scap-security-guide on the installation media: Example B.1. Sample OpenSCAP Add-on Definition Using SCAP Security Guide The following is a more complex example which loads a custom profile from a web server: Example B.2. Sample OpenSCAP Add-on Definition Using a Datastream Additional resources Security Hardening OpenSCAP installation program add-on OpenSCAP Portal B.7. Commands used in Anaconda The pwpolicy command is an Anaconda UI-specific command that can be used only in the %anaconda section of the kickstart file. B.7.1. pwpolicy The pwpolicy Kickstart command is optional. Use this command to enforce a custom password policy during installation. The policy applies to the root password, user passwords, or the LUKS passphrase. Factors such as password length and quality decide the validity of a password. Syntax
pwpolicy name [--minlen= length ] [--minquality= quality ] [--strict|--notstrict] [--emptyok|--notempty] [--changesok|--nochanges]
Mandatory options name - Replace with either root , user or luks to enforce the policy for the root password, user passwords, or LUKS passphrase, respectively. Optional options --minlen= - Sets the minimum allowed password length, in characters. The default is 6 . --minquality= - Sets the minimum allowed password quality as defined by the libpwquality library. The default value is 1 . --strict - Enables strict password enforcement. Passwords which do not meet the requirements specified in --minquality= and --minlen= will not be accepted. This option is disabled by default. --notstrict - Passwords which do not meet the minimum quality requirements specified by the --minquality= and --minlen= options will be allowed, after Done is clicked twice in the GUI. A similar mechanism is used in the text mode interface. --emptyok - Allows the use of empty passwords. Enabled by default for user passwords. --notempty - Disallows the use of empty passwords. Enabled by default for the root password and the LUKS passphrase. --changesok - Allows changing the password in the user interface, even if the Kickstart file already specifies a password. Disabled by default. --nochanges - Disallows changing passwords which are already set in the Kickstart file. Enabled by default. Notes The pwpolicy command is an Anaconda UI-specific command that can be used only in the %anaconda section of the kickstart file. The libpwquality library is used to check minimum password requirements (length and quality).
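For example, the following sketch (the numeric values are illustrative) enforces a ten-character minimum and strict checking for the root password:
%anaconda
pwpolicy root --minlen=10 --minquality=50 --strict --notempty
%end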
You can use the pwscore and pwmake commands provided by the libpwquality package to check the quality score of a password, or to create a random password with a given score. See the pwscore(1) and pwmake(1) man pages for details about these commands. B.8. Kickstart commands for system recovery The Kickstart command in this section repairs an installed system. B.8.1. rescue The rescue Kickstart command is optional. It provides a shell environment with root privileges and a set of system management tools to repair the installation and to troubleshoot issues; for example, you can: Mount file systems as read-only Blocklist or add a driver provided on a driver disc Install or upgrade system packages Manage partitions Note The Kickstart rescue mode is different from the rescue mode and emergency mode, which are provided as part of the systemd system and service manager. The rescue command does not modify the system on its own. It only sets up the rescue environment by mounting the system under /mnt/sysimage in read-write mode. You can choose not to mount the system, or to mount it in read-only mode. Use this command only once. Syntax Options --nomount or --romount - Controls how the installed system is mounted in the rescue environment. By default, the installation program finds your system and mounts it in read-write mode, telling you where it has performed this mount. You can optionally choose not to mount anything (the --nomount option) or to mount in read-only mode (the --romount option). Only one of these two options can be used. Notes To run rescue mode, make a copy of the Kickstart file, and include the rescue command in it. Using the rescue command causes the installer to perform the following steps: Run the %pre script. Set up the environment for rescue mode. The following kickstart commands take effect: updates sshpw logging lang network Set up the advanced storage environment. The following kickstart commands take effect: fcoe iscsi iscsiname nvdimm zfcp Mount the system. Run the %post script. This step is run only if the installed system is mounted in read-write mode. Start the shell. Reboot the system. | [
"cdrom",
"cmdline",
"driverdisk [ partition |--source= url |--biospart= biospart ]",
"driverdisk --source=ftp://path/to/dd.img driverdisk --source=http://path/to/dd.img driverdisk --source=nfs:host:/path/to/dd.img",
"driverdisk LABEL = DD :/e1000.rpm",
"eula [--agreed]",
"firstboot OPTIONS",
"graphical [--non-interactive]",
"halt",
"harddrive OPTIONS",
"harddrive --partition=hdb2 --dir=/tmp/install-tree",
"install installation_method",
"liveimg --url= SOURCE [ OPTIONS ]",
"liveimg --url=file:///images/install/squashfs.img --checksum=03825f567f17705100de3308a20354b4d81ac9d8bed4bb4692b2381045e56197 --noverifyssl",
"logging OPTIONS",
"mediacheck",
"nfs OPTIONS",
"nfs --server=nfsserver.example.com --dir=/tmp/install-tree",
"ostreesetup --osname= OSNAME [--remote= REMOTE ] --url= URL --ref= REF [--nogpg]",
"poweroff",
"reboot OPTIONS",
"shutdown",
"sshpw --username= name [ OPTIONS ] password",
"python3 -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'",
"sshpw --username= example_username example_password --plaintext sshpw --username=root example_password --lock",
"sshpw --username=root example_password --lock",
"text [--non-interactive]",
"url --url= FROM [ OPTIONS ]",
"url --url=http:// server / path",
"url --url=ftp:// username : password @ server / path",
"vnc [--host= host_name ] [--port= port ] [--password= password ]",
"hmc",
"%include path/to/file",
"%ksappend path/to/file",
"authconfig [ OPTIONS ]",
"authselect [ OPTIONS ]",
"firewall --enabled|--disabled [ incoming ] [ OPTIONS ]",
"group --name= name [--gid= gid ]",
"keyboard --vckeymap|--xlayouts OPTIONS",
"keyboard --xlayouts=us,'cz (qwerty)' --switch=grp:alt_shift_toggle",
"lang language [--addsupport= language,... ]",
"lang en_US --addsupport=cs_CZ,de_DE,en_UK",
"lang en_US",
"module --name= NAME [--stream= STREAM ]",
"repo --name= repoid [--baseurl= url |--mirrorlist= url |--metalink= url ] [ OPTIONS ]",
"rootpw [--iscrypted|--plaintext] [--lock] password",
"python -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'",
"selinux [--disabled|--enforcing|--permissive]",
"services [--disabled= list ] [--enabled= list ]",
"services --disabled=auditd,cups,smartd,nfslock",
"services --disabled=auditd, cups, smartd, nfslock",
"skipx",
"sshkey --username= user \"ssh_key\"",
"syspurpose [ OPTIONS ]",
"syspurpose --role=\"Red Hat Enterprise Linux Server\"",
"timezone timezone [ OPTIONS ]",
"user --name= username [ OPTIONS ]",
"python -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'",
"xconfig [--startxonboot]",
"network OPTIONS",
"network --bootproto=dhcp",
"network --bootproto=bootp",
"network --bootproto=ibft",
"network --bootproto=static --ip=10.0.2.15 --netmask=255.255.255.0 --gateway=10.0.2.254 --nameserver=10.0.2.1",
"network --bootproto=static --ip=10.0.2.15 --netmask=255.255.255.0 --gateway=10.0.2.254 --nameserver=192.168.2.1,192.168.3.1",
"network --bootproto=dhcp --device=em1",
"network --device ens3 --ipv4-dns-search domain1.example.com,domain2.example.com",
"network --device=bond0 --bondslaves=em1,em2",
"network --bondopts=mode=active-backup,balance-rr;primary=eth1",
"network --device=em1 --vlanid=171 --interfacename=vlan171",
"network --teamslaves=\"p3p1'{\\\"prio\\\": -10, \\\"sticky\\\": true}',p3p2'{\\\"prio\\\": 100}'\"",
"network --device team0 --activate --bootproto static --ip=10.34.102.222 --netmask=255.255.255.0 --gateway=10.34.102.254 --nameserver=10.34.39.2 --teamslaves=\"p3p1'{\\\"prio\\\": -10, \\\"sticky\\\": true}',p3p2'{\\\"prio\\\": 100}'\" --teamconfig=\"{\\\"runner\\\": {\\\"name\\\": \\\"activebackup\\\"}}\"",
"network --device=bridge0 --bridgeslaves=em1",
"realm join [ OPTIONS ] domain",
"device moduleName --opts= options",
"device --opts=\"aic152x=0x340 io=11\"",
"ignoredisk --drives= drive1,drive2 ,... | --only-use= drive",
"ignoredisk --only-use=sda",
"ignoredisk --only-use=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017",
"ignoredisk --only-use==/dev/disk/by-id/dm-uuid-mpath-",
"bootloader --location=mbr",
"ignoredisk --drives=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017",
"part / --fstype=xfs --onpart=sda1",
"part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1",
"clearpart OPTIONS",
"clearpart --drives=hda,hdb --all",
"clearpart --drives=disk/by-id/scsi-58095BEC5510947BE8C0360F604351918",
"clearpart --drives=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017",
"clearpart --initlabel --drives=names_of_disks",
"clearpart --initlabel --drives=dasda,dasdb,dasdc",
"clearpart --list=sda2,sda3,sdb1",
"part / --fstype=xfs --onpart=sda1",
"part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1",
"zerombr",
"bootloader [ OPTIONS ]",
"bootloader --location=mbr --append=\"hdd=ide-scsi ide=nodma\"",
"%packages -plymouth %end",
"bootloader --driveorder=sda,hda",
"bootloader --iscrypted --password=grub.pbkdf2.sha512.10000.5520C6C9832F3AC3D149AC0B24BE69E2D4FB0DBEEDBD29CA1D30A044DE2645C4C7A291E585D4DC43F8A4D82479F8B95CA4BA4381F8550510B75E8E0BB2938990.C688B6F0EF935701FF9BD1A8EC7FE5BD2333799C98F28420C5CC8F1A2A233DE22C83705BB614EA17F3FDFDF4AC2161CEA3384E56EB38A2E39102F5334C47405E",
"part / --fstype=xfs --onpart=sda1",
"part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1",
"autopart OPTIONS",
"reqpart [--add-boot]",
"part|partition mntpoint [ OPTIONS ]",
"swap --recommended",
"swap --hibernation",
"partition /home --onpart=hda1",
"partition pv.1 --onpart=hda2",
"partition pv.1 --onpart=hdb",
"part / --fstype=xfs --grow --asprimary --size=8192 --ondisk=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017",
"part /opt/foo1 --size=512 --fstype=ext4 --mkfsoptions=\"-O ^has_journal,^flex_bg,^metadata_csum\" part /opt/foo2 --size=512 --fstype=xfs --mkfsoptions=\"-m bigtime=0,finobt=0\"",
"part / --fstype=xfs --onpart=sda1",
"part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1",
"raid mntpoint --level= level --device= device-name partitions*",
"part /opt/foo1 --size=512 --fstype=ext4 --mkfsoptions=\"-O ^has_journal,^flex_bg,^metadata_csum\" part /opt/foo2 --size=512 --fstype=xfs --mkfsoptions=\"-m bigtime=0,finobt=0\"",
"part raid.01 --size=6000 --ondisk=sda part raid.02 --size=6000 --ondisk=sdb part raid.03 --size=6000 --ondisk=sdc part swap --size=512 --ondisk=sda part swap --size=512 --ondisk=sdb part swap --size=512 --ondisk=sdc part raid.11 --size=1 --grow --ondisk=sda part raid.12 --size=1 --grow --ondisk=sdb part raid.13 --size=1 --grow --ondisk=sdc raid / --level=1 --device=rhel8-root --label=rhel8-root raid.01 raid.02 raid.03 raid /home --level=5 --device=rhel8-home --label=rhel8-home raid.11 raid.12 raid.13",
"volgroup name [ OPTIONS ] [ partition *]",
"volgroup rhel00 --useexisting --noformat",
"part pv.01 --size 10000 volgroup my_volgrp pv.01 logvol / --vgname=my_volgrp --size=2000 --name=root",
"logvol mntpoint --vgname= name --name= name [ OPTIONS ]",
"swap --recommended",
"swap --hibernation",
"part /opt/foo1 --size=512 --fstype=ext4 --mkfsoptions=\"-O ^has_journal,^flex_bg,^metadata_csum\" part /opt/foo2 --size=512 --fstype=xfs --mkfsoptions=\"-m bigtime=0,finobt=0\"",
"part pv.01 --size 3000 volgroup myvg pv.01 logvol / --vgname=myvg --size=2000 --name=rootvol",
"part pv.01 --size 1 --grow volgroup myvg pv.01 logvol / --vgname=myvg --name=rootvol --percent=90",
"snapshot vg_name/lv_name --name= snapshot_name --when= pre-install|post-install",
"mount [ OPTIONS ] device mountpoint",
"fcoe --nic= name [ OPTIONS ]",
"iscsi --ipaddr= address [ OPTIONS ]",
"iscsiname iqname",
"nvdimm action [ OPTIONS ]",
"nvdimm reconfigure [--namespace= NAMESPACE ] [--mode= MODE ] [--sectorsize= SECTORSIZE ]",
"nvdimm reconfigure --namespace=namespace0.0 --mode=sector --sectorsize=512",
"nvdimm reconfigure --namespace=namespace0.0 --mode=sector --sectorsize=512",
"nvdimm use [--namespace= NAMESPACE |--blockdevs= DEVICES ]",
"nvdimm use --namespace=namespace0.0",
"nvdimm use --blockdevs=pmem0s,pmem1s nvdimm use --blockdevs=pmem*",
"zfcp --devnum= devnum [--wwpn= wwpn --fcplun= lun ]",
"zfcp --devnum=0.0.4000 --wwpn=0x5005076300C213e9 --fcplun=0x5022000000000000 zfcp --devnum=0.0.4000",
"%addon com_redhat_kdump [ OPTIONS ] %end",
"%addon com_redhat_kdump --enable --reserve-mb=128 %end",
"%addon org_fedora_oscap key = value %end",
"%addon org_fedora_oscap content-type = scap-security-guide profile = xccdf_org.ssgproject.content_profile_pci-dss %end",
"%addon org_fedora_oscap content-type = datastream content-url = http://www.example.com/scap/testing_ds.xml datastream-id = scap_example.com_datastream_testing xccdf-id = scap_example.com_cref_xccdf.xml profile = xccdf_example.com_profile_my_profile fingerprint = 240f2f18222faa98856c3b4fc50c4195 %end",
"pwpolicy name [--minlen= length ] [--minquality= quality ] [--strict|--notstrict] [--emptyok|--notempty] [--changesok|--nochanges]",
"rescue [--nomount|--romount]",
"rescue [--nomount|--romount]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automatically_installing_rhel/kickstart-commands-and-options-reference_rhel-installer |
Chapter 2. Red Hat Process Automation Manager components | Chapter 2. Red Hat Process Automation Manager components The product is made up of Business Central and KIE Server. Business Central is the graphical user interface where you create and manage business rules. You can install Business Central in a Red Hat JBoss EAP instance or on the Red Hat OpenShift Container Platform (OpenShift). Business Central is also available as a standalone JAR file. You can use the Business Central standalone JAR file to run Business Central without deploying it to an application server. KIE Server is the server where rules and other artifacts are executed. It is used to instantiate and execute rules and solve planning problems. You can install KIE Server in a Red Hat JBoss EAP instance, in a Red Hat JBoss EAP cluster, on OpenShift, in an Oracle WebLogic server instance, in an IBM WebSphere Application Server instance, or as a part of Spring Boot application. You can configure KIE Server to run in managed or unmanaged mode. If KIE Server is unmanaged, you must manually create and maintain KIE containers (deployment units). A KIE container is a specific version of a project. If KIE Server is managed, the Process Automation Manager controller manages the KIE Server configuration and you interact with the Process Automation Manager controller to create and maintain KIE containers. The Process Automation Manager controller is integrated with Business Central. If you install Business Central on Red Hat JBoss EAP, use the Execution Server page to create and maintain KIE containers. However, if you do not install Business Central, you can install the headless Process Automation Manager controller and use the REST API or the KIE Server Java Client API to interact with it. Red Hat build of OptaPlanner is integrated in Business Central and KIE Server. It is a lightweight, embeddable planning engine that optimizes planning problems. Red Hat build of OptaPlanner helps Java programmers solve planning problems efficiently, and it combines optimization heuristics and metaheuristics with efficient score calculations. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/components-con_planning |
Chapter 4. DataImage [metal3.io/v1alpha1] | Chapter 4. DataImage [metal3.io/v1alpha1] Description DataImage is the Schema for the dataimages API. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object DataImageSpec defines the desired state of DataImage. status object DataImageStatus defines the observed state of DataImage. 4.1.1. .spec Description DataImageSpec defines the desired state of DataImage. Type object Required url Property Type Description url string Url is the address of the dataImage that we want to attach to a BareMetalHost 4.1.2. .status Description DataImageStatus defines the observed state of DataImage. Type object Property Type Description attachedImage object Currently attached DataImage error object Error count and message when attaching/detaching lastReconciled string Time of last reconciliation 4.1.3. .status.attachedImage Description Currently attached DataImage Type object Required url Property Type Description url string 4.1.4. .status.error Description Error count and message when attaching/detaching Type object Required count message Property Type Description count integer message string 4.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/dataimages GET : list objects of kind DataImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/dataimages DELETE : delete collection of DataImage GET : list objects of kind DataImage POST : create a DataImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/dataimages/{name} DELETE : delete a DataImage GET : read the specified DataImage PATCH : partially update the specified DataImage PUT : replace the specified DataImage /apis/metal3.io/v1alpha1/namespaces/{namespace}/dataimages/{name}/status GET : read status of the specified DataImage PATCH : partially update status of the specified DataImage PUT : replace status of the specified DataImage 4.2.1. /apis/metal3.io/v1alpha1/dataimages HTTP method GET Description list objects of kind DataImage Table 4.1. HTTP responses HTTP code Response body 200 - OK DataImageList schema 401 - Unauthorized Empty 4.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/dataimages HTTP method DELETE Description delete collection of DataImage Table 4.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind DataImage Table 4.3. HTTP responses HTTP code Response body 200 - OK DataImageList schema 401 - Unauthorized Empty HTTP method POST Description create a DataImage Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body DataImage schema Table 4.6. HTTP responses HTTP code Response body 200 - OK DataImage schema 201 - Created DataImage schema 202 - Accepted DataImage schema 401 - Unauthorized Empty 4.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/dataimages/{name} Table 4.7. Global path parameters Parameter Type Description name string name of the DataImage HTTP method DELETE Description delete a DataImage Table 4.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DataImage Table 4.10. HTTP responses HTTP code Response body 200 - OK DataImage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DataImage Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.12.
HTTP responses HTTP code Response body 200 - OK DataImage schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DataImage Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.14. Body parameters Parameter Type Description body DataImage schema Table 4.15. HTTP responses HTTP code Response body 200 - OK DataImage schema 201 - Created DataImage schema 401 - Unauthorized Empty 4.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/dataimages/{name}/status Table 4.16. Global path parameters Parameter Type Description name string name of the DataImage HTTP method GET Description read status of the specified DataImage Table 4.17. HTTP responses HTTP code Response body 200 - OK DataImage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DataImage Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.19. HTTP responses HTTP code Response body 200 - OK DataImage schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DataImage Table 4.20.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.21. Body parameters Parameter Type Description body DataImage schema Table 4.22. HTTP responses HTTP code Response body 200 - OK DataImage schema 201 - Created DataImage schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/provisioning_apis/dataimage-metal3-io-v1alpha1 |
Chapter 19. console | Chapter 19. console This chapter describes the commands under the console command. 19.1. console log show Show server's console output Usage: Table 19.1. Positional arguments Value Summary <server> Server to show console log (name or id) Table 19.2. Command arguments Value Summary -h, --help Show this help message and exit --lines <num-lines> Number of lines to display from the end of the log (default=all) 19.2. console url show Show server's remote console URL Usage: Table 19.3. Positional arguments Value Summary <server> Server to show url (name or id) Table 19.4. Command arguments Value Summary -h, --help Show this help message and exit --novnc Show novnc console url (default) --xvpvnc Show xvpvnc console url --spice Show spice console url --rdp Show rdp console url --serial Show serial console url --mks Show webmks console url Table 19.5. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 19.6. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 19.7. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 19.8. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack console log show [-h] [--lines <num-lines>] <server>",
"openstack console url show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--novnc | --xvpvnc | --spice | --rdp | --serial | --mks] <server>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/console |
Chapter 15. Azure Storage Blob Sink | Chapter 15. Azure Storage Blob Sink Upload data to Azure Storage Blob. Important The Azure Storage Blob Sink Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . The Kamelet expects the following headers to be set: file / ce-file : the file name to upload. If the header is not set, the exchange ID is used as the file name. 15.1. Configuration Options The following table summarizes the configuration options available for the azure-storage-blob-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The Azure Storage Blob access key. string accountName * Account Name The Azure Storage Blob account name. string containerName * Container Name The Azure Storage Blob container name. string credentialType Credential Type Determines the credential strategy to adopt. Possible values are SHARED_ACCOUNT_KEY, SHARED_KEY_CREDENTIAL and AZURE_IDENTITY string "SHARED_ACCOUNT_KEY" operation Operation Name The operation to perform. string "uploadBlockBlob" Note Fields marked with an asterisk (*) are mandatory. 15.2. Dependencies At runtime, the azure-storage-blob-sink Kamelet relies upon the presence of the following dependencies: camel:azure-storage-blob camel:kamelet 15.3. Usage This section describes how you can use the azure-storage-blob-sink . 15.3.1. Knative Sink You can use the azure-storage-blob-sink Kamelet as a Knative sink by binding it to a Knative object. azure-storage-blob-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-sink properties: accessKey: "The Access Key" accountName: "The Account Name" containerName: "The Container Name" 15.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you're connected to. 15.3.1.2. Procedure for using the cluster CLI Save the azure-storage-blob-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f azure-storage-blob-sink-binding.yaml 15.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel azure-storage-blob-sink -p "sink.accessKey=The Access Key" -p "sink.accountName=The Account Name" -p "sink.containerName=The Container Name" This command creates the KameletBinding in the current namespace on the cluster. 15.3.2. Kafka Sink You can use the azure-storage-blob-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
azure-storage-blob-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-sink properties: accessKey: "The Access Key" accountName: "The Account Name" containerName: "The Container Name" 15.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you're connected to. 15.3.2.2. Procedure for using the cluster CLI Save the azure-storage-blob-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f azure-storage-blob-sink-binding.yaml 15.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic azure-storage-blob-sink -p "sink.accessKey=The Access Key" -p "sink.accountName=The Account Name" -p "sink.containerName=The Container Name" This command creates the KameletBinding in the current namespace on the cluster. 15.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/azure-storage-blob-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-sink properties: accessKey: \"The Access Key\" accountName: \"The Account Name\" containerName: \"The Container Name\"",
"apply -f azure-storage-blob-sink-binding.yaml",
"kamel bind channel:mychannel azure-storage-blob-sink -p \"sink.accessKey=The Access Key\" -p \"sink.accountName=The Account Name\" -p \"sink.containerName=The Container Name\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-sink properties: accessKey: \"The Access Key\" accountName: \"The Account Name\" containerName: \"The Container Name\"",
"apply -f azure-storage-blob-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic azure-storage-blob-sink -p \"sink.accessKey=The Access Key\" -p \"sink.accountName=The Account Name\" -p \"sink.containerName=The Container Name\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/azure-storage-blob-sink |
5.3.10. Removing Volume Groups | 5.3.10. Removing Volume Groups To remove a volume group that contains no logical volumes, use the vgremove command. | [
"vgremove officevg Volume group \"officevg\" successfully removed"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/VG_remove |
Chapter 9. Ceph File System snapshot scheduling | Chapter 9. Ceph File System snapshot scheduling As a storage administrator, you can take a point-in-time snapshot of a Ceph File System (CephFS) directory. CephFS snapshots are asynchronous, and you can choose the directory in which snapshots are created. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. 9.1. Ceph File System snapshot schedules A Ceph File System (CephFS) can schedule snapshots of a file system directory. The scheduling of snapshots is managed by the Ceph Manager, and relies on Python Timers. The snapshot schedule data is stored as an object in the CephFS metadata pool, and at runtime, all the schedule data lives in a serialized SQLite database. Important When a storage cluster is under normal load, the scheduler keeps snapshots precisely the specified time apart. When the Ceph Manager is under a heavy load, it is possible that a snapshot might not be scheduled right away, resulting in a slightly delayed snapshot. If this happens, the scheduled snapshot acts as if there was no delay. Scheduled snapshots that are delayed do not cause drift in the overall schedule. Usage Scheduling snapshots for a Ceph File System (CephFS) is managed by the snap_schedule Ceph Manager module. This module provides an interface to add, query, and delete snapshot schedules, and to manage the retention policies. This module also implements the ceph fs snap-schedule command, with several subcommands to manage schedules and retention policies. All of the subcommands take the CephFS volume path and subvolume path arguments to specify the file system path when using multiple Ceph File Systems. If you do not specify the CephFS volume path, the argument defaults to the first file system listed in the fs_map , and if you do not specify the subvolume path, the argument defaults to nothing. Snapshot schedules are identified by the file system path, the repeat interval, and the start time. The repeat interval defines the time between two subsequent snapshots. The interval format is a number plus a time designator: h (our), d (ay), or w (eek). For example, an interval of 4h means one snapshot every four hours. The start time is a string value in the ISO format, %Y-%m-%dT%H:%M:%S , and if not specified, the start time uses a default value of last midnight. For example, if you schedule a snapshot at 14:45 , using the default start time value and a repeat interval of 1h , the first snapshot is taken at 15:00. Retention policies are identified by the file system path and the retention policy specifications. A retention policy definition consists of either a number plus a time designator, or concatenated pairs in the format COUNT TIME_PERIOD . The policy ensures that a number of snapshots are kept, and that the snapshots are at least the specified time period apart. The time period designators are: h (our), d (ay), w (eek), M (onth), Y (ear), and n . The n time period designator is a special modifier, which means keep the last number of snapshots regardless of timing. For example, 4d means keeping four snapshots that are at least one day apart from each other. Additional Resources See the Creating a snapshot for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details. See the Creating a snapshot schedule for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details. 9.2.
Adding a snapshot schedule for a Ceph File System Add a snapshot schedule for a CephFS path that does not exist yet. You can create one or more schedules for a single path. Schedules are considered different if their repeat intervals and start times are different. A CephFS path can only have one retention policy, but a retention policy can have multiple count-time period pairs. Note Once the scheduler module is enabled, running the ceph fs snap-schedule command displays the available subcommands and their usage format. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. Root-level access to the Ceph Manager and Metadata Server (MDS) nodes. Enable CephFS snapshots on the file system. Procedure Log into the Cephadm shell on a Ceph Manager node: Example Enable the snap_schedule module: Example Log into the client node: Example Add a new schedule for a Ceph File System: Syntax Example Note START_TIME is represented in ISO 8601 format. This example creates a snapshot schedule for the path /cephfs within the filesystem mycephfs , snapshotting every hour, starting on 27 June 2022 at 9:50 PM. Add a new retention policy for snapshots of a CephFS volume path: Syntax Example 1 This example keeps 14 snapshots at least one hour apart. 2 This example keeps 4 snapshots at least one day apart. 3 This example keeps 14 hourly and 4 weekly snapshots. List the snapshot schedules to verify that the new schedule is created. Syntax Example This example lists all schedules in the directory tree. Check the status of a snapshot schedule: Syntax Example This example displays the status of the snapshot schedule for the CephFS /cephfs path in JSON format. The default format is plain text, if not specified. Additional Resources See the Ceph File System snapshot schedules section in the Red Hat Ceph Storage File System Guide for more details. See the Ceph File System snapshots section in the Red Hat Ceph Storage File System Guide for more details. 9.3. Adding a snapshot schedule for a Ceph File System subvolume To manage the retention policies for Ceph File System (CephFS) subvolume snapshots, you can have different schedules for a single path. Schedules are considered different if their repeat intervals and start times are different. Add a snapshot schedule for a CephFS file path that does not exist yet. A CephFS path can only have one retention policy, but a retention policy can have multiple count-time period pairs. Note Once the scheduler module is enabled, running the ceph fs snap-schedule command displays the available subcommands and their usage format. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume and subvolume group created. You can create a snapshot schedule for: A directory in a subvolume. A subvolume in the default group. A subvolume in a non-default group. However, the commands are different. Procedure To create a snapshot schedule for a directory in the subvolume: Get the absolute path of the subvolume where the directory exists: Syntax Example Add a snapshot schedule to a directory in the subvolume: Syntax Note The path in the snap-schedule command is <absolute_path_of_subvolume>/<relative_path_of_test_dir>; see step 1 for the absolute path of the subvolume. Example Note START_TIME is represented in ISO 8601 format.
This example creates a snapshot schedule for the subvolume path, snapshotting every hour, starting on 27 June 2022 at 9:50 PM. To create a snapshot schedule for a subvolume in the default group, run the following command: Syntax Example Note The path must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - or /.. . To create a snapshot schedule for a subvolume in a non-default group, run the following command: Syntax Example Note The path must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - or /.. . 9.3.1. Adding a retention policy for snapshot schedules of a CephFS volume path To define how many snapshots to retain in the volume path at any time, you must add a retention policy after you have created a snapshot schedule. You can create retention policies for directories within a subvolume group, for a subvolume within the default group, and for a subvolume within a non-default group. Prerequisites A running and healthy Red Hat Ceph Storage cluster with Ceph File System (CephFS) deployed. A minimum of read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume and subvolume group created. A snapshot schedule. Procedure Add a new retention policy for snapshot schedules in a directory of a CephFS subvolume: Syntax Example 1 This example keeps 14 snapshots at least one hour apart. 2 This example keeps 4 snapshots at least one day apart. 3 This example keeps 14 hourly and 4 weekly snapshots. Add a retention policy to a snapshot schedule created for a subvolume in the default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as /, - or /... Add a retention policy to the snapshot schedule created for a subvolume in a non-default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as /, - or /... 9.3.2. Listing CephFS snapshot schedules By listing and adhering to snapshot schedules, you can ensure robust data protection and efficient management. Prerequisites A running and healthy Red Hat Ceph Storage cluster with Ceph File System (CephFS) deployed. A minimum of read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume and subvolume group created. A snapshot schedule. Procedure List the snapshot schedules: Syntax Example This example lists all schedules in the directory tree. 9.3.3. Checking status of CephFS snapshot schedules You can check the status of snapshot schedules by using the commands in this procedure for snapshots created in a directory of a subvolume, for a subvolume in the default subvolume group, and for a subvolume created in a non-default group. Prerequisites A running and healthy Red Hat Ceph Storage cluster with Ceph File System (CephFS) deployed. A minimum of read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume and subvolume group created. A snapshot schedule. Procedure Check the status of a snapshot schedule created for a directory in the subvolume: Syntax Example This example displays the status of the snapshot schedule for the /volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. path in JSON format. The default format is plain text, if not specified.
Check the status of the snapshot schedule created for a subvolume in the default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as /, - or /... Check the status of the snapshot schedule created for a subvolume in the non-default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as /, - or /... 9.4. Activating snapshot schedule for a Ceph File System This section provides the steps to manually set the snapshot schedule to active for a Ceph File System (CephFS). Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Activate the snapshot schedule: Syntax Example This example activates all schedules for the CephFS /cephfs path. 9.5. Activating snapshot schedule for a Ceph File System sub volume This section provides the steps to manually set the snapshot schedule to active for a Ceph File System (CephFS) sub volume. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Activate the snapshot schedule created for a directory in a subvolume: Syntax Example This example activates all schedules for the CephFS /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. path. Activate the snapshot schedule created for a subvolume in the default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as /, - or /... Activate the snapshot schedule created for a subvolume in the non-default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as /, - or /... 9.6. Deactivating snapshot schedule for a Ceph File System This section provides the steps to manually set the snapshot schedule to inactive for a Ceph File System (CephFS). This action excludes the snapshot from scheduling until it is activated again. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A snapshot schedule is created and is in the active state. Procedure Deactivate a snapshot schedule for a CephFS path: Syntax Example This example deactivates the daily snapshots for the /cephfs path, thereby pausing any further snapshot creation. 9.7. Deactivating snapshot schedule for a Ceph File System sub volume This section provides the steps to manually set the snapshot schedule to inactive for a Ceph File System (CephFS) sub volume. This action excludes the snapshot from scheduling until it is activated again. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A snapshot schedule is created and is in the active state. Procedure Deactivate a snapshot schedule for a directory in a CephFS subvolume: Syntax Example This example deactivates the daily snapshots for the /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. path, thereby pausing any further snapshot creation.
Deactivate the snapshot schedule created for a subvolume in the default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as /, - or /... Deactivate the snapshot schedule created for a subvolume in the non-default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as /, - or /... 9.8. Removing a snapshot schedule for a Ceph File System This section provides the steps to remove a snapshot schedule for a Ceph File System (CephFS). Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A snapshot schedule is created. Procedure Remove a specific snapshot schedule: Syntax Example This example removes the specific snapshot schedule for the /cephfs volume that snapshots every four hours and started on 16 May 2022 at 2:00 PM. Remove all snapshot schedules for a specific CephFS volume path: Syntax Example This example removes all the snapshot schedules for the /cephfs volume path. 9.9. Removing a snapshot schedule for a Ceph File System sub volume This section provides the steps to remove a snapshot schedule for a Ceph File System (CephFS) sub volume. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A snapshot schedule is created. Procedure Remove a specific snapshot schedule created for a directory in a CephFS subvolume: Syntax Example This example removes the specific snapshot schedule for the /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. volume that snapshots every four hours and started on 16 May 2022 at 2:00 PM. Remove a specific snapshot schedule created for a subvolume in the default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as /, - or /... Remove a specific snapshot schedule created for a subvolume in the non-default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as /, - or /... 9.10. Removing snapshot schedule retention policy for a Ceph File System This section provides the steps to remove the retention policy of the scheduled snapshots for a Ceph File System (CephFS). Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A snapshot schedule created for a CephFS volume path. Procedure Remove a retention policy on a CephFS path: Syntax Example 1 This example removes 4 hourly snapshots. 2 This example removes 14 daily and 4 weekly snapshots. 9.11. Removing snapshot schedule retention policy for a Ceph File System sub volume This section provides the steps to remove the retention policy of the scheduled snapshots for a Ceph File System (CephFS) sub volume. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A snapshot schedule created for a CephFS sub volume path.
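Before removing a retention policy, you can confirm which retention specification is currently in effect by checking the schedule status, as shown in the status examples earlier in this chapter. A minimal sketch, assuming a subvolume named sv_sched in the default group (the file system and subvolume names here are placeholders): ceph fs snap-schedule status --fs cephfs --subvol sv_sched The retention field of the status output lists the count-time period pairs that the following procedure removes.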
Procedure Remove a retention policy for a directory in a CephFS subvolume: Syntax Example 1 This example removes 4 hourly snapshots. 2 This example removes 14 daily and 4 weekly snapshots. Remove a retention policy created on the snapshot schedule for a subvolume in the default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as /, - or /... Remove a retention policy created on the snapshot schedule for a subvolume in the non-default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as /, - or /... Additional Resources See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide . | [
"cephadm shell",
"ceph mgr module enable snap_schedule",
"cephadm shell",
"ceph fs snap-schedule add FILE_SYSTEM_VOLUME_PATH REPEAT_INTERVAL [ START_TIME ] --fs CEPH_FILE_SYSTEM_NAME",
"ceph fs snap-schedule add /cephfs_kernelf739cwtus2/pmo9axbwsi 1h 2022-06-27T21:50:00 --fs mycephfs",
"ceph fs snap-schedule retention add FILE_SYSTEM_VOLUME_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT",
"ceph fs snap-schedule retention add /cephfs h 14 1 ceph fs snap-schedule retention add /cephfs d 4 2 ceph fs snap-schedule retention add /cephfs 14h4w 3",
"ceph fs snap-schedule list FILE_SYSTEM_VOLUME_PATH [--format=plain|json] [--recursive=true]",
"ceph fs snap-schedule list /cephfs --recursive=true",
"ceph fs snap-schedule status FILE_SYSTEM_VOLUME_PATH [--format=plain|json]",
"ceph fs snap-schedule status /cephfs --format=json",
"ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME SUBVOLUME_GROUP_NAME",
"ceph fs subvolume getpath cephfs subvol_1 subvolgroup_1",
"ceph fs snap-schedule add SUBVOLUME_DIR_PATH SNAP_SCHEDULE [ START_TIME ] --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule add /cephfs_kernelf739cwtus2/pmo9axbwsi 1h 2022-06-27T21:50:00 --fs cephfs --subvol subvol_1 Schedule set for path /..",
"ceph fs snap-schedule add /.. SNAP_SCHEDULE [ START_TIME] --fs CEPH_FILE_SYSTEM_NAME --subvol _SUBVOLUME_NAME",
"ceph fs snap-schedule add - 2M --subvol sv_non_def_1",
"ceph fs snap-schedule add /.. SNAP_SCHEDULE [ START_TIME] --fs CEPH_FILE_SYSTEM_NAME --subvol _SUBVOLUME_NAME --group NON_DEFAULT_SUBVOLGROUP_NAME",
"ceph fs snap-schedule add - 2M --fs cephfs --subvol sv_non_def_1 --group svg1",
"ceph fs snap-schedule retention add SUBVOLUME_DIR_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT",
"ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. h 14 1 ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. d 4 2 ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 14h4w 3 Retention added to path /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..",
"ceph fs snap-schedule retention add / [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD_COUNT --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule retention add / 5h --fs cephfs --subvol sv_sched Retention added to path /volumes/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule retention add / [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD_COUNT --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON_DEFAULT_SUBVOLGROUP_NAME",
"ceph fs snap-schedule retention add / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention added to path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a54j0dda7f16/..",
"ceph fs snap-schedule list SUBVOLUME_VOLUME_PATH [--format=plain|json] [--recursive=true]",
"ceph fs snap-schedule list / --recursive=true /volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h",
"ceph fs snap-schedule status SUBVOLUME_DIR_PATH [--format=plain|json]",
"ceph fs snap-schedule status /volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. --format=json {\"fs\": \"cephfs\", \"subvol\": \"subvol_1\", \"path\": \"/volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..\", \"rel_path\": \"/..\", \"schedule\": \"4h\", \"retention\": {\"h\": 14}, \"start\": \"2022-05-16T14:00:00\", \"created\": \"2023-03-20T08:47:18\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}",
"ceph fs snap-schedule status --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule status --fs cephfs --subvol sv_sched {\"fs\": \"cephfs\", \"subvol\": \"sv_sched\", \"group\": \"subvolgroup_cg\", \"path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"rel_path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"schedule\": \"1h\", \"retention\": {\"h\": 5}, \"start\": \"2024-05-21T00:00:00\", \"created\": \"2024-05-21T09:18:58\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}",
"ceph fs snap-schedule status --fs _CEPH_FILE_SYSTEM_NAME_ --subvol _SUBVOLUME_NAME_ --group _NON-DEFAULT_SUBVOLGROUP_NAME_",
"ceph fs snap-schedule status --fs cephfs --subvol sv_sched --group subvolgroup_cg {\"fs\": \"cephfs\", \"subvol\": \"sv_sched\", \"group\": \"subvolgroup_cg\", \"path\": \"/volumes/subvolgroup_cg/sv_sched/e564329a-kj87-4763-gh0y-b56c8sev7t23/..\", \"rel_path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"schedule\": \"1h\", \"retention\": {\"h\": 5}, \"start\": \"2024-05-21T00:00:00\", \"created\": \"2024-05-21T09:18:58\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}",
"ceph fs snap-schedule activate FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ]",
"ceph fs snap-schedule activate /cephfs",
"ceph fs snap-schedule activate SUBVOL_DIR_PATH [ REPEAT_INTERVAL ]",
"ceph fs snap-schedule activate /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..",
"ceph fs snap-schedule activate /.. REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule activate / --fs cephfs --subvol sv_sched Schedule activated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule activate /.. [ REPEAT_INTERVAL ] --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME",
"ceph fs snap-schedule activate / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule activated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule deactivate FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ]",
"ceph fs snap-schedule deactivate /cephfs 1d",
"ceph fs snap-schedule deactivate SUBVOL_DIR_PATH [ REPEAT_INTERVAL ]",
"ceph fs snap-schedule deactivate /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 1d",
"ceph fs snap-schedule deactivate / REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule deactivate / --fs cephfs --subvol sv_sched Schedule deactivated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule deactivate / REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME",
"ceph fs snap-schedule deactivate / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule deactivated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule remove FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ] [ START_TIME ]",
"ceph fs snap-schedule remove /cephfs 4h 2022-05-16T14:00:00",
"ceph fs snap-schedule remove FILE_SYSTEM_VOLUME_PATH",
"ceph fs snap-schedule remove /cephfs",
"ceph fs snap-schedule remove SUBVOL_DIR_PATH [ REPEAT_INTERVAL ] [ START_TIME ]",
"ceph fs snap-schedule remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h 2022-05-16T14:00:00",
"ceph fs snap-schedule remove / --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule remove / --fs cephfs --subvol sv_sched Schedule removed for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule remove / --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME",
"ceph fs snap-schedule remove / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule removed for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule retention remove FILE_SYSTEM_VOLUME_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT",
"ceph fs snap-schedule retention remove /cephfs h 4 1 ceph fs snap-schedule retention remove /cephfs 14d4w 2",
"ceph fs snap-schedule retention remove SUBVOL_DIR_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT",
"ceph fs snap-schedule retention remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. h 4 1 ceph fs snap-schedule retention remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 14d4w 2",
"ceph fs snap-schedule retention remove / TIME_PERIOD_PAIR TIME_PERIOD COUNT --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule retention remove / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention removed from path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule retention remove / TIME_PERIOD_PAIR TIME_PERIOD COUNT --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME",
"ceph fs snap-schedule retention remove / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention removed from path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/.."
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/file_system_guide/ceph-file-system-snapshot-scheduling |
E.2.2. /proc/cmdline | E.2.2. /proc/cmdline This file shows the parameters passed to the kernel at the time it is started. A sample /proc/cmdline file looks like the following: This tells us that the kernel is mounted read-only (signified by ro ), located on the first logical volume ( LogVol00 ) of the first volume group ( /dev/VolGroup00 ). LogVol00 is the equivalent of a disk partition in a non-LVM (Logical Volume Management) system, just as /dev/VolGroup00 is similar in concept to /dev/hda1 , but much more extensible. For more information on LVM used in Red Hat Enterprise Linux, see http://www.tldp.org/HOWTO/LVM-HOWTO/index.html . Next, rhgb signals that the rhgb package has been installed, and graphical booting is supported, assuming /etc/inittab shows a default runlevel set to id:5:initdefault: . Finally, quiet indicates all verbose kernel messages are suppressed at boot time. | [
"ro root=/dev/VolGroup00/LogVol00 rhgb quiet 3"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-cmdline |
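To check this on a live system, read the file directly; the output varies per host, and the sample above reflects an LVM-based installation:
cat /proc/cmdline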
Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation | Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to the OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads → Deployments . In the Deployments page, you can do one of the following: Select any existing deployment and click the Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirements to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads → Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click the Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirements to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of the Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads → Deployments . Click Workloads → Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_and_allocating_storage_resources/backing-openshift-container-platform-applications-with-openshift-data-foundation_rhodf
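For reference, the Create new claim option in the console corresponds to a PersistentVolumeClaim manifest such as the following sketch; the claim name, size, and the ocs-storagecluster-ceph-rbd storage class name are illustrative and should match the classes available in your cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-claim
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi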
1.4. Differences between GFS and GFS2 | 1.4. Differences between GFS and GFS2 This section lists the improvements and changes that GFS2 offers over GFS. Migrating from GFS to GFS2 requires that you convert your GFS file systems to GFS2 with the gfs2_convert utility. For information on the gfs2_convert utility, see Appendix B, Converting a File System from GFS to GFS2 . 1.4.1. GFS2 Command Names In general, the functionality of GFS2 is identical to GFS. The names of the file system commands, however, specify GFS2 instead of GFS. Table 1.1, "GFS and GFS2 Commands" shows the equivalent GFS and GFS2 commands and functionality.
Table 1.1. GFS and GFS2 Commands
GFS Command | GFS2 Command | Description
mount | mount | Mount a file system. The system can determine whether the file system is a GFS or GFS2 file system type. For information on the GFS2 mount options see the gfs2_mount(8) man page.
umount | umount | Unmount a file system.
fsck, gfs_fsck | fsck, fsck.gfs2 | Check and repair an unmounted file system.
gfs_grow | gfs2_grow | Grow a mounted file system.
gfs_jadd | gfs2_jadd | Add a journal to a mounted file system.
gfs_mkfs, mkfs -t gfs | mkfs.gfs2, mkfs -t gfs2 | Create a file system on a storage device.
gfs_quota | gfs2_quota | Manage quotas on a mounted file system. As of the Red Hat Enterprise Linux 6.1 release, GFS2 supports the standard Linux quota facilities. For further information on quota management in GFS2, see Section 4.5, "GFS2 Quota Management" .
gfs_tool | tunegfs2, mount parameters, dmsetup suspend | Configure, tune, or gather information about a file system. The tunegfs2 command is supported as of the Red Hat Enterprise Linux 6.2 release. There is also a gfs2_tool command.
gfs_edit | gfs2_edit | Display, print, or edit file system internal structures. The gfs2_edit command can be used for GFS file systems as well as GFS2 file systems.
gfs_tool setflag jdata/inherit_jdata | chattr +j (preferred) | Enable journaling on a file or directory.
setfacl/getfacl | setfacl/getfacl | Set or get the file access control list for a file or directory.
setfattr/getfattr | setfattr/getfattr | Set or get the extended attributes of a file.
For a full listing of the supported options for the GFS2 file system commands, see the man pages for those commands. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-ov-GFS-GFS2-diffs
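As a minimal sketch of the conversion workflow mentioned above (the device name is illustrative; back up the file system and run both commands against an unmounted file system):
gfs_fsck -y /dev/VolGroup00/LogVol00    # check the GFS file system first
gfs2_convert /dev/VolGroup00/LogVol00   # convert it in place to GFS2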
Chapter 7. Enabling Chinese, Japanese, or Korean text input | Chapter 7. Enabling Chinese, Japanese, or Korean text input If you write with Chinese, Japanese, or Korean characters, you can configure RHEL to input text in your language. 7.1. Input methods Certain scripts, such as Chinese, Japanese, or Korean, require keyboard input to go through an Input Method Engine (IME) to enter native text. An input method is a set of conversion rules between the text input and the selected script. An IME is a software that performs the input conversion specified by the input method. To input text in these scripts, you must set up an IME. If you installed the system in your native language and selected your language at the GNOME Initial Setup screen, the input method for your language is enabled by default. 7.2. Available input method engines The following input method engines (IMEs) are available on RHEL from the listed packages: Table 7.1. Available input method engines Languages Scripts IME name Package Chinese Simplified Chinese Intelligent Pinyin ibus-libpinyin Chinese Traditional Chinese New Zhuyin ibus-libzhuyin Japanese Kanji, Hiragana, Katakana Anthy ibus-anthy Korean Hangul Hangul ibus-hangul Other Various M17N ibus-m17n 7.3. Installing input method engines This procedure installs input method engines (IMEs) that you can use to input Chinese, Japanese, and Korean text. Procedure Install all available input method packages: 7.4. Switching the input method in GNOME This procedure sets up the input method for your script, such as for Chinese, Japanese, or Korean scripts. Prerequisites The input method packages are installed. Procedure Go to the system menu , which is accessible from the top-right screen corner, and click Settings . Select the Keyboard section. In the Input Sources list, review the currently enabled input methods. If your input method is missing: Click the + button under the Input Sources list. Select your language. Note If you cannot find your language in the menu, click the three dots icon ( More... ) at the end of the menu. Select the input method that you want to use. A cog wheel icon marks all input methods to distinguish them from simple keyboard layouts. Confirm your selection by clicking Add . Switch the active input method using one of the following ways: Click the input method indicator on the right side of the top panel and select your input method. Switch between the enabled input methods using the Super + Space keyboard shortcut. Verification Open a text editor. Type text in your language. Verify that the text appears in your native script. 7.5. Additional resources Installing a font for the Chinese standard GB 18030 character set (Red Hat Knowledgebase) | [
"dnf install @input-methods"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/getting_started_with_the_gnome_desktop_environment/assembly_enabling-chinese-japanese-or-korean-text-input_getting-started-with-the-gnome-desktop-environment |
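If you prefer the command line to the Settings application, the enabled input sources can also be set with gsettings; this sketch assumes ibus-libpinyin is installed and keeps a US keyboard layout alongside Intelligent Pinyin:
gsettings set org.gnome.desktop.input-sources sources "[('xkb', 'us'), ('ibus', 'libpinyin')]"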
Chapter 4. Configuring the LVS Routers with Piranha Configuration Tool | Chapter 4. Configuring the LVS Routers with Piranha Configuration Tool The Piranha Configuration Tool provides a structured approach to creating the necessary configuration file for LVS - /etc/sysconfig/ha/lvs.cf . This chapter describes the basic operation of the Piranha Configuration Tool and how to activate LVS once configuration is complete. Important The configuration file for LVS follows strict formatting rules. Using the Piranha Configuration Tool is the best way to prevent syntax errors in the lvs.cf file and therefore prevent software failures. 4.1. Necessary Software The piranha-gui service must be running on the primary LVS router to use the Piranha Configuration Tool . To configure LVS, you need at least a text-only Web browser, such as links . If you are accessing the LVS router from another machine, you also need an ssh connection to the primary LVS router as the root user. While configuring the primary LVS router, it is a good idea to keep a concurrent ssh connection open in a terminal window. This connection provides a secure way to restart pulse and other services, configure network packet filters, and monitor /var/log/messages during troubleshooting. The next four sections walk through each of the configuration pages of the Piranha Configuration Tool and give instructions on using it to set up LVS. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/ch-lvs-piranha-vsa
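A typical session might look like the following sketch; the host name is illustrative, and the port shown is the commonly cited Piranha Configuration Tool default, which may differ in your configuration:
ssh root@lvs-primary            # keep this session open for service restarts
service piranha-gui start       # make sure the GUI service is running
tail -f /var/log/messages       # monitor in a second terminal during troubleshooting
links http://localhost:3636/    # open the Piranha Configuration Tool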
Anaconda Customization Guide | Anaconda Customization Guide Red Hat Enterprise Linux 7 Changing the installer appearance and creating custom add-ons Vladimir Slavik Sharon Moroney Petr Bokoc Vratislav Podzimek | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/anaconda_customization_guide/index
5.5. About Linking Attributes | 5.5. About Linking Attributes A class of service dynamically supplies attribute values for entries which all have attributes with the same value , like building addresses, postal codes, or main office numbers. These are shared attribute values, which are updated in a single template entry. Frequently, though, there are relationships between entries where there needs to be a way to express linkage between them, but the values (and possibly even the attributes) that express that relationship are different. Red Hat Directory Server provides a way to link specified attributes together, so that when one attribute in one entry is altered, a corresponding attribute on a related entry is automatically updated. The first attribute has a DN value, which points to the entry to update; the second entry attribute also has a DN value which is a back-pointer to the first entry. For example, group entries list their members in attributes like member . It is natural for there to be an indication in the user entry of what groups the user belongs to; this is set in the memberOf attribute. The memberOf attribute is a managed attribute through the MemberOf Plug-in. The plug-in polls every group entry for changes in their respective member attributes. Whenever a group member is added or dropped from the group, the corresponding user entry is updated with changed memberOf attributes. In this way, the member (and other member attributes) and memberOf attributes are linked . The MemberOf Plug-in is limited to a single instance, exclusively for a (single) group member attribute (with some other behaviors unique to groups, like handling nested groups). Another plug-in, the Linked Attributes Plug-in, allows multiple instances of the plug-in. Each instance configures one attribute which is manually maintained by the administrator ( linkType ) and one attribute which is automatically maintained by the plug-in ( managedType ). Figure 5.6. Basic Linked Attribute Configuration Note To preserve data consistency, only the plug-in process should maintain the managed attribute. Consider creating an ACI that will restrict all write access to any managed attribute. A Linked Attribute Plug-in instance can be restricted to a single subtree within the directory. This can allow more flexible customization of attribute combinations and affected entries. If no scope is set, then the plug-in operates on the entire directory. Figure 5.7. Restricting the Linked Attribute Plug-in to a Specific Subtree 5.5.1. Schema Requirements for Linking Attributes Both the managed attribute and linked attribute must require the Distinguished Name syntax in their attribute definitions. The plug-in identifies the entries to maintain by pulling the DN from the link attribute, and then it automatically assigns the original entry DN as the value of the managed attribute. This means that both attributes have to use DNs as their values. The managed attribute must be multi-valued. Users may be members of more than one group, be authors of more than one document, or have multiple "see also" reference entries. If the managed attribute is single-valued, then the value is not updated properly. This is not much of an issue with the default schema because many of the standard elements are multi-valued. It is particularly important to consider, however, when using custom schema. Figure 5.8. Wrong: Using a Single-Valued Linked Attribute 5.5.2. 
Using Linked Attributes with Replication In a simple replication scenario (supplier-consumer), the plug-in only has to exist on the supplier, because no writes can be made on the consumer. For multi-supplier replication, each supplier must have its own plug-in instance, all configured identically, and the managed attributes must be excluded from replication using fractional replication. Figure 5.9. Linking Attributes and Replication Any changes that are made on one supplier automatically trigger the plug-in to manage the values on the corresponding directory entries, so the data stay consistent across servers. However, the managed attributes must be maintained by the plug-in instance for the data to be consistent between the linked entries. This means that managed attribute values should be maintained solely by the plug-in processes, not the replication process, even in a multi-supplier replication environment. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/about-linking-attributes
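To make the configuration concrete, a plug-in instance can be added as an entry under cn=Linked Attributes,cn=plugins,cn=config; in this sketch, the attribute pair (directReport/manager), the instance name, and the scope are examples only:
ldapmodify -x -D "cn=Directory Manager" -W << EOF
dn: cn=Manager Link,cn=Linked Attributes,cn=plugins,cn=config
changetype: add
objectClass: top
objectClass: extensibleObject
cn: Manager Link
linkType: directReport
managedType: manager
linkScope: ou=People,dc=example,dc=com
EOF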
Chapter 120. AclRuleTopicResource schema reference | Chapter 120. AclRuleTopicResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleTopicResource type from AclRuleGroupResource , AclRuleClusterResource , AclRuleTransactionalIdResource . It must have the value topic for the type AclRuleTopicResource . Property Property type Description type string Must be topic . name string Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. patternType string (one of [prefix, literal]) Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full topic name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-AclRuleTopicResource-reference |
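For example, a fragment of a KafkaUser resource that uses this type to grant access to every topic whose name starts with my-topic- might look like the following sketch; the operations chosen are illustrative:
spec:
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic-
          patternType: prefix
        operations:
          - Describe
          - Read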
Chapter 20. Constant | Chapter 20. Constant The Constant Expression Language is really just a way to use a constant value or object. Note This is a fixed constant value (or object) that is only set once during starting up the route, do not use this if you want dynamic values during routing. 20.1. Dependencies When using constant with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency> 20.2. Constant Options The Constant language supports 2 options, which are listed below. Name Default Java Type Description resultType String Sets the class name of the constant type. trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 20.3. Example The setHeader EIP can utilize a constant expression like: <route> <from uri="seda:a"/> <setHeader name="theHeader"> <constant>the value</constant> </setHeader> <to uri="mock:b"/> </route> in this case, the message coming from the seda:a endpoint will have the header with key theHeader set its value as the value (string type). And the same example using Java DSL: from("seda:a") .setHeader("theHeader", constant("the value")) .to("mock:b"); 20.3.1. Specifying type of value The option resultType can be used to specify the type of the value, when the value is given as a String value, which happens when using XML or YAML DSL: For example to set a header with int type you can do: <route> <from uri="seda:a"/> <setHeader name="zipCode"> <constant resultType="int">90210</constant> </setHeader> <to uri="mock:b"/> </route> 20.4. Loading constant from external resource You can externalize the constant and have Camel load it from a resource such as "classpath:" , "file:" , or "http:" . This is done using the following syntax: "resource:scheme:location" , eg to refer to a file on the classpath you can do: .setHeader("myHeader").constant("resource:classpath:constant.txt") 20.5. Dependencies The Constant language is part of camel-core . 20.6. Spring Boot Auto-Configuration The component supports 147 options, which are listed below. Name Description Default Type camel.cloud.consul.service-discovery.acl-token Sets the ACL token to be used with Consul. String camel.cloud.consul.service-discovery.block-seconds The seconds to wait for a watch event, default 10 seconds. 10 Integer camel.cloud.consul.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.consul.service-discovery.connect-timeout-millis Connect timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.datacenter The data center. String camel.cloud.consul.service-discovery.enabled Enable the component. true Boolean camel.cloud.consul.service-discovery.password Sets the password to be used for basic authentication. String camel.cloud.consul.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.consul.service-discovery.read-timeout-millis Read timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.url The Consul agent URL. String camel.cloud.consul.service-discovery.user-name Sets the username to be used for basic authentication. 
String camel.cloud.consul.service-discovery.write-timeout-millis Write timeout for OkHttpClient. Long camel.cloud.dns.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.dns.service-discovery.domain The domain name;. String camel.cloud.dns.service-discovery.enabled Enable the component. true Boolean camel.cloud.dns.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.dns.service-discovery.proto The transport protocol of the desired service. _tcp String camel.cloud.etcd.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.etcd.service-discovery.enabled Enable the component. true Boolean camel.cloud.etcd.service-discovery.password The password to use for basic authentication. String camel.cloud.etcd.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.etcd.service-discovery.service-path The path to look for for service discovery. /services/ String camel.cloud.etcd.service-discovery.timeout To set the maximum time an action could take to complete. Long camel.cloud.etcd.service-discovery.type To set the discovery type, valid values are on-demand and watch. on-demand String camel.cloud.etcd.service-discovery.uris The URIs the client can connect to. String camel.cloud.etcd.service-discovery.user-name The user name to use for basic authentication. String camel.cloud.kubernetes.service-discovery.api-version Sets the API version when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-data Sets the Certificate Authority data when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-file Sets the Certificate Authority data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-data Sets the Client Certificate data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-file Sets the Client Certificate data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-algo Sets the Client Keystore algorithm, such as RSA when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-data Sets the Client Keystore data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-file Sets the Client Keystore data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-passphrase Sets the Client Keystore passphrase when using client lookup. String camel.cloud.kubernetes.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.kubernetes.service-discovery.dns-domain Sets the DNS domain to use for DNS lookup. String camel.cloud.kubernetes.service-discovery.enabled Enable the component. true Boolean camel.cloud.kubernetes.service-discovery.lookup How to perform service lookup. Possible values: client, dns, environment. 
When using client, then the client queries the kubernetes master to obtain a list of active pods that provides the service, and then random (or round robin) select a pod. When using dns the service name is resolved as name.namespace.svc.dnsDomain. When using dnssrv the service name is resolved with SRV query for _<port_name>._<port_proto>.<service_name>.<namespace>.svc.<dns_domain>. When using environment then environment variables are used to lookup the service. By default environment is used. environment String camel.cloud.kubernetes.service-discovery.master-url Sets the URL to the master when using client lookup. String camel.cloud.kubernetes.service-discovery.namespace Sets the namespace to use. Will by default use namespace from the ENV variable KUBERNETES_MASTER. String camel.cloud.kubernetes.service-discovery.oauth-token Sets the OAUTH token for authentication (instead of username/password) when using client lookup. String camel.cloud.kubernetes.service-discovery.password Sets the password for authentication when using client lookup. String camel.cloud.kubernetes.service-discovery.port-name Sets the Port Name to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.port-protocol Sets the Port Protocol to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.kubernetes.service-discovery.trust-certs Sets whether to turn on trust certificate check when using client lookup. false Boolean camel.cloud.kubernetes.service-discovery.username Sets the username for authentication when using client lookup. String camel.cloud.ribbon.load-balancer.client-name Sets the Ribbon client name. String camel.cloud.ribbon.load-balancer.configurations Define additional configuration definitions. Map camel.cloud.ribbon.load-balancer.enabled Enable the component. true Boolean camel.cloud.ribbon.load-balancer.namespace The namespace. String camel.cloud.ribbon.load-balancer.password The password. String camel.cloud.ribbon.load-balancer.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.ribbon.load-balancer.username The username. String camel.hystrix.allow-maximum-size-to-diverge-from-core-size Allows the configuration for maximumSize to take effect. That value can then be equal to, or higher, than coreSize. false Boolean camel.hystrix.circuit-breaker-enabled Whether to use a HystrixCircuitBreaker or not. If false no circuit-breaker logic will be used and all requests permitted. This is similar in effect to circuitBreakerForceClosed() except that continues tracking metrics and knowing whether it should be open/closed, this property results in not even instantiating a circuit-breaker. true Boolean camel.hystrix.circuit-breaker-error-threshold-percentage Error percentage threshold (as whole number such as 50) at which point the circuit breaker will trip open and reject requests. It will stay tripped for the duration defined in circuitBreakerSleepWindowInMilliseconds; The error percentage this is compared against comes from HystrixCommandMetrics.getHealthCounts().
50 Integer camel.hystrix.circuit-breaker-force-closed If true the HystrixCircuitBreaker#allowRequest() will always return true to allow requests regardless of the error percentage from HystrixCommandMetrics.getHealthCounts(). The circuitBreakerForceOpen() property takes precedence so if it set to true this property does nothing. false Boolean camel.hystrix.circuit-breaker-force-open If true the HystrixCircuitBreaker.allowRequest() will always return false, causing the circuit to be open (tripped) and reject all requests. This property takes precedence over circuitBreakerForceClosed();. false Boolean camel.hystrix.circuit-breaker-request-volume-threshold Minimum number of requests in the metricsRollingStatisticalWindowInMilliseconds() that must exist before the HystrixCircuitBreaker will trip. If below this number the circuit will not trip regardless of error percentage. 20 Integer camel.hystrix.circuit-breaker-sleep-window-in-milliseconds The time in milliseconds after a HystrixCircuitBreaker trips open that it should wait before trying requests again. 5000 Integer camel.hystrix.configurations Define additional configuration definitions. Map camel.hystrix.core-pool-size Core thread-pool size that gets passed to java.util.concurrent.ThreadPoolExecutor#setCorePoolSize(int). 10 Integer camel.hystrix.enabled Enable the component. true Boolean camel.hystrix.execution-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.run(). Requests beyond the concurrent limit will be rejected. Applicable only when executionIsolationStrategy == SEMAPHORE. 20 Integer camel.hystrix.execution-isolation-strategy What isolation strategy HystrixCommand.run() will be executed with. If THREAD then it will be executed on a separate thread and concurrent requests limited by the number of threads in the thread-pool. If SEMAPHORE then it will be executed on the calling thread and concurrent requests limited by the semaphore count. THREAD String camel.hystrix.execution-isolation-thread-interrupt-on-timeout Whether the execution thread should attempt an interrupt (using Future#cancel ) when a thread times out. Applicable only when executionIsolationStrategy() == THREAD. true Boolean camel.hystrix.execution-timeout-enabled Whether the timeout mechanism is enabled for this command. true Boolean camel.hystrix.execution-timeout-in-milliseconds Time in milliseconds at which point the command will timeout and halt execution. If executionIsolationThreadInterruptOnTimeout == true and the command is thread-isolated, the executing thread will be interrupted. If the command is semaphore-isolated and a HystrixObservableCommand, that command will get unsubscribed. 1000 Integer camel.hystrix.fallback-enabled Whether HystrixCommand.getFallback() should be attempted when failure occurs. true Boolean camel.hystrix.fallback-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.getFallback(). Requests beyond the concurrent limit will fail-fast and not attempt retrieving a fallback. 10 Integer camel.hystrix.group-key Sets the group key to use. The default value is CamelHystrix. CamelHystrix String camel.hystrix.keep-alive-time Keep-alive time in minutes that gets passed to ThreadPoolExecutor#setKeepAliveTime(long,TimeUnit). 
1 Integer camel.hystrix.max-queue-size Max queue size that gets passed to BlockingQueue in HystrixConcurrencyStrategy.getBlockingQueue(int) This should only affect the instantiation of a threadpool - it is not eliglible to change a queue size on the fly. For that, use queueSizeRejectionThreshold(). -1 Integer camel.hystrix.maximum-size Maximum thread-pool size that gets passed to ThreadPoolExecutor#setMaximumPoolSize(int) . This is the maximum amount of concurrency that can be supported without starting to reject HystrixCommands. Please note that this setting only takes effect if you also set allowMaximumSizeToDivergeFromCoreSize. 10 Integer camel.hystrix.metrics-health-snapshot-interval-in-milliseconds Time in milliseconds to wait between allowing health snapshots to be taken that calculate success and error percentages and affect HystrixCircuitBreaker.isOpen() status. On high-volume circuits the continual calculation of error percentage can become CPU intensive thus this controls how often it is calculated. 500 Integer camel.hystrix.metrics-rolling-percentile-bucket-size Maximum number of values stored in each bucket of the rolling percentile. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-percentile-enabled Whether percentile metrics should be captured using HystrixRollingPercentile inside HystrixCommandMetrics. true Boolean camel.hystrix.metrics-rolling-percentile-window-buckets Number of buckets the rolling percentile window is broken into. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 6 Integer camel.hystrix.metrics-rolling-percentile-window-in-milliseconds Duration of percentile rolling window in milliseconds. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10000 Integer camel.hystrix.metrics-rolling-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-statistical-window-in-milliseconds This property sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. The window is divided into buckets and rolls by those increments. 10000 Integer camel.hystrix.queue-size-rejection-threshold Queue size rejection threshold is an artificial max size at which rejections will occur even if maxQueueSize has not been reached. This is done because the maxQueueSize of a BlockingQueue can not be dynamically changed and we want to support dynamically changing the queue size that affects rejections. This is used by HystrixCommand when queuing a thread for execution. 5 Integer camel.hystrix.request-log-enabled Whether HystrixCommand execution and events should be logged to HystrixRequestLog. true Boolean camel.hystrix.thread-pool-key Sets the thread pool key to use. Will by default use the same value as groupKey has been configured to use. CamelHystrix String camel.hystrix.thread-pool-rolling-number-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10 Integer camel.hystrix.thread-pool-rolling-number-statistical-window-in-milliseconds Duration of statistical rolling window in milliseconds. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 
10000 Integer camel.language.constant.enabled Whether to enable auto configuration of the constant language. This is enabled by default. Boolean camel.language.constant.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.csimple.enabled Whether to enable auto configuration of the csimple language. This is enabled by default. Boolean camel.language.csimple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.exchangeproperty.enabled Whether to enable auto configuration of the exchangeProperty language. This is enabled by default. Boolean camel.language.exchangeproperty.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.file.enabled Whether to enable auto configuration of the file language. This is enabled by default. Boolean camel.language.file.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.header.enabled Whether to enable auto configuration of the header language. This is enabled by default. Boolean camel.language.header.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.ref.enabled Whether to enable auto configuration of the ref language. This is enabled by default. Boolean camel.language.ref.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.simple.enabled Whether to enable auto configuration of the simple language. This is enabled by default. Boolean camel.language.simple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.tokenize.enabled Whether to enable auto configuration of the tokenize language. This is enabled by default. Boolean camel.language.tokenize.group-delimiter Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. String camel.language.tokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.resilience4j.automatic-transition-from-open-to-half-open-enabled Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed. false Boolean camel.resilience4j.circuit-breaker-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to lookup and use from the registry. When using this, then any other circuit breaker options are not in use. String camel.resilience4j.config-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to lookup and use from the registry. String camel.resilience4j.configurations Define additional configuration definitions. Map camel.resilience4j.enabled Enable the component. true Boolean camel.resilience4j.failure-rate-threshold Configures the failure rate threshold in percentage. If the failure rate is equal or greater than the threshold the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percentage. Float camel.resilience4j.minimum-number-of-calls Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. 
For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded, before the failure rate can be calculated. If only 9 calls have been recorded the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. 100 Integer camel.resilience4j.permitted-number-of-calls-in-half-open-state Configures the number of permitted calls when the CircuitBreaker is half open. The size must be greater than 0. Default size is 10. 10 Integer camel.resilience4j.sliding-window-size Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. slidingWindowSize configures the size of the sliding window. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize . If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. 100 Integer camel.resilience4j.sliding-window-type Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. COUNT_BASED String camel.resilience4j.slow-call-duration-threshold Configures the duration threshold (seconds) above which calls are considered as slow and increase the slow calls percentage. Default value is 60 seconds. 60 Integer camel.resilience4j.slow-call-rate-threshold Configures a threshold in percentage. The CircuitBreaker considers a call as slow when the call duration is greater than slowCallDurationThreshold Duration. When the percentage of slow calls is equal or greater the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percentage which means that all recorded calls must be slower than slowCallDurationThreshold. Float camel.resilience4j.wait-duration-in-open-state Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open, before it switches to half open. Default value is 60 seconds. 60 Integer camel.resilience4j.writable-stack-trace-enabled Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero length array. This may be used to reduce log spam when the circuit breaker is open as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). true Boolean camel.rest.api-component The name of the Camel component to use as the REST API (such as swagger) If no API Component has been explicit configured, then Camel will lookup if there is a Camel component responsible for servicing and generating the REST API documentation, or if a org.apache.camel.spi.RestApiProcessorFactory is registered in the registry. If either one is found, then that is being used. 
String camel.rest.api-context-path Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. String camel.rest.api-context-route-id Sets the route id to use for the route that services the REST API. The route will by default use an auto assigned route id. String camel.rest.api-host To use an specific hostname for the API documentation (eg swagger) This can be used to override the generated host with this configured hostname. String camel.rest.api-property Allows to configure as many additional properties for the api documentation (swagger). For example set property api.title to my cool stuff. Map camel.rest.api-vendor-extension Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (eg keys starting with x-) such as route ids, class names etc. Not all 3rd party API gateways and tools supports vendor-extensions when importing your API docs. false Boolean camel.rest.binding-mode Sets the binding mode to use. The default value is off. RestBindingMode camel.rest.client-request-validation Whether to enable validation of the client request to check whether the Content-Type and Accept headers from the client is supported by the Rest-DSL configuration of its consumes/produces settings. This can be turned on, to enable this check. In case of validation error, then HTTP Status codes 415 or 406 is returned. The default value is false. false Boolean camel.rest.component The Camel Rest component to use for the REST transport (consumer), such as netty-http, jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.component-property Allows to configure as many additional properties for the rest component in use. Map camel.rest.consumer-property Allows to configure as many additional properties for the rest consumer in use. Map camel.rest.context-path Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. Or for components such as camel-jetty or camel-netty-http that includes a HTTP server. String camel.rest.cors-headers Allows to configure custom CORS headers. Map camel.rest.data-format-property Allows to configure as many additional properties for the data formats in use. For example set property prettyPrint to true to have json outputted in pretty mode. The properties can be prefixed to denote the option is only for either JSON or XML and for either the IN or the OUT. The prefixes are: json.in. json.out. xml.in. xml.out. For example a key with value xml.out.mustBeJAXBElement is only for the XML data format for the outgoing. A key without a prefix is a common key for all situations. Map camel.rest.enable-cors Whether to enable CORS headers in the HTTP response. The default value is false. false Boolean camel.rest.endpoint-property Allows to configure as many additional properties for the rest endpoint in use. Map camel.rest.host The hostname to use for exposing the REST service. 
String camel.rest.host-name-resolver If no hostname has been explicit configured, then this resolver is used to compute the hostname the REST service will be using. RestHostNameResolver camel.rest.json-data-format Name of specific json data format to use. By default json-jackson will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.port The port number to use for exposing the REST service. Notice if you use servlet component then the port number configured here does not apply, as the port number in use is the actual port number the servlet component is using. eg if using Apache Tomcat its the tomcat http port, if using Apache Karaf its the HTTP service in Karaf that uses port 8181 by default etc. Though in those situations setting the port number here, allows tooling and JMX to know the port number, so its recommended to set the port number to the number that the servlet engine uses. String camel.rest.producer-api-doc Sets the location of the api document (swagger api) the REST producer will use to validate the REST uri and query parameters are valid accordingly to the api document. This requires adding camel-swagger-java to the classpath, and any miss configuration will let Camel fail on startup and report the error(s). The location of the api document is loaded from classpath by default, but you can use file: or http: to refer to resources to load from file or http url. String camel.rest.producer-component Sets the name of the Camel component to use as the REST producer. String camel.rest.scheme The scheme to use for exposing the REST service. Usually http or https is supported. The default value is http. String camel.rest.skip-binding-on-error-code Whether to skip binding on output if there is a custom HTTP error code header. This allows to build custom error messages that do not bind to json / xml etc, as success messages otherwise will do. false Boolean camel.rest.use-x-forward-headers Whether to use X-Forward headers for Host and related setting. The default value is true. true Boolean camel.rest.xml-data-format Name of specific XML data format to use. By default jaxb will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.api-context-id-pattern Deprecated Sets an CamelContext id pattern to only allow Rest APIs from rest services within CamelContext's which name matches the pattern. The pattern name refers to the CamelContext name, to match on the current CamelContext only. For any other value, the pattern uses the rules from PatternHelper#matchPattern(String,String). String camel.rest.api-context-listing Deprecated Sets whether listing of all available CamelContext's with REST services in the JVM is enabled. If enabled it allows to discover these contexts, if false then only the current CamelContext is in use. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency>",
"<route> <from uri=\"seda:a\"/> <setHeader name=\"theHeader\"> <constant>the value</constant> </setHeader> <to uri=\"mock:b\"/> </route>",
"from(\"seda:a\") .setHeader(\"theHeader\", constant(\"the value\")) .to(\"mock:b\");",
"<route> <from uri=\"seda:a\"/> <setHeader name=\"zipCode\"> <constant resultType=\"int\">90210</constant> </setHeader> <to uri=\"mock:b\"/> </route>",
".setHeader(\"myHeader\").constant(\"resource:classpath:constant.txt\")"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-constant-language-starter |
4.2. Configuration File Blacklist | 4.2. Configuration File Blacklist The blacklist section of the multipath configuration file specifies the devices that will not be used when the system configures multipath devices. Devices that are blacklisted will not be grouped into a multipath device. In older releases of Red Hat Enterprise Linux, multipath always tried to create a multipath device for every path that was not explicitly blacklisted. As of Red Hat Enterprise Linux 6, however, if the find_multipaths configuration parameter is set to yes , then multipath will create a device only if one of three conditions are met: There are at least two paths that are not blacklisted with the same WWID. The user manually forces the creation of the device by specifying a device with the multipath command. A path has the same WWID as a multipath device that was previously created (even if that multipath device does not currently exist). Whenever a multipath device is created, multipath remembers the WWID of the device so that it will automatically create the device again as soon as it sees a path with that WWID. This allows you to have multipath automatically choose the correct paths to make into multipath devices, without have to edit the multipath blacklist. If you have previously created a multipath device without using the find_multipaths parameter and then you later set the parameter to yes , you may need to remove the WWIDs of any device you do not want created as a multipath device from the /etc/multipath/wwids file. The following shows a sample /etc/multipath/wwids file. The WWIDs are enclosed by slashes (/): With the find_multipaths parameter set to yes , you need to blacklist only the devices with multiple paths that you do not want to be multipathed. Because of this, it will generally not be necessary to blacklist devices. If you do need to blacklist devices, you can do so according to the following criteria: By WWID, as described in Section 4.2.1, "Blacklisting by WWID" By device name, as described in Section 4.2.2, "Blacklisting By Device Name" By device type, as described in Section 4.2.3, "Blacklisting By Device Type" By udev property, as described in Section 4.2.4, "Blacklisting By udev Property (Red Hat Enterprise Linux 7.5 and Later)" By device protocol, as described in Section 4.2.5, "Blacklisting By Device Protocol (Red Hat Enterprise Linux 7.6 and Later)" By default, a variety of device types are blacklisted, even after you comment out the initial blacklist section of the configuration file. For information, see Section 4.2.2, "Blacklisting By Device Name" . 4.2.1. Blacklisting by WWID You can specify individual devices to blacklist by their World-Wide IDentification with a wwid entry in the blacklist section of the configuration file. The following example shows the lines in the configuration file that would blacklist a device with a WWID of 26353900f02796769. 4.2.2. Blacklisting By Device Name You can blacklist device types by device name so that they will not be grouped into a multipath device by specifying a devnode entry in the blacklist section of the configuration file. The following example shows the lines in the configuration file that would blacklist all SCSI devices, since it blacklists all sd* devices. You can use a devnode entry in the blacklist section of the configuration file to specify individual devices to blacklist rather than all devices of a specific type. 
This is not recommended, however, since unless it is statically mapped by udev rules, there is no guarantee that a specific device will have the same name on reboot. For example, a device name could change from /dev/sda to /dev/sdb on reboot. By default, the following devnode entries are compiled in the default blacklist; the devices that these entries blacklist do not generally support DM Multipath. To enable multipathing on any of these devices, you would need to specify them in the blacklist_exceptions section of the configuration file, as described in Section 4.2.6, "Blacklist Exceptions" . 4.2.3. Blacklisting By Device Type You can specify specific device types in the blacklist section of the configuration file with a device section. The following example blacklists all IBM DS4200 and HP devices. 4.2.4. Blacklisting By udev Property (Red Hat Enterprise Linux 7.5 and Later) The blacklist and blacklist_exceptions sections of the multipath.conf configuration file support the property parameter. This parameter allows users to blacklist certain types of devices. The property parameter takes a regular expression string that is matched against the udev environment variable name for the device. The following example blacklists all devices with the udev property ID_ATA . 4.2.5. Blacklisting By Device Protocol (Red Hat Enterprise Linux 7.6 and Later) You can specify the protocol for a device to be excluded from multipathing in the blacklist section of the configuration file with a protocol section. The protocol strings that multipath recognizes are scsi:fcp, scsi:spi, scsi:ssa, scsi:sbp, scsi:srp, scsi:iscsi, scsi:sas, scsi:adt, scsi:ata, scsi:unspec, ccw, cciss, nvme, and undef. The protocol that a path is using can be viewed by running the command multipathd show paths format "%d %P" . The following example blacklists all devices with an undefined protocol or an unknown SCSI transport type. 4.2.6. Blacklist Exceptions You can use the blacklist_exceptions section of the configuration file to enable multipathing on devices that have been blacklisted by default. For example, if you have a large number of devices and want to multipath only one of them (with the WWID of 3600d0230000000000e13955cc3757803), instead of individually blacklisting each of the devices except the one you want, you could instead blacklist all of them, and then allow only the one you want by adding the following lines to the /etc/multipath.conf file. When specifying devices in the blacklist_exceptions section of the configuration file, you must specify the exceptions in the same way they were specified in the blacklist. For example, a WWID exception will not apply to devices specified by a devnode blacklist entry, even if the blacklisted device is associated with that WWID. Similarly, devnode exceptions apply only to devnode entries, and device exceptions apply only to device entries. The property parameter works differently than the other blacklist_exception parameters. If the parameter is set, the device must have a udev variable that matches. Otherwise, the device is blacklisted. This parameter allows users to blacklist SCSI devices that multipath should ignore, such as USB sticks and local hard drives. To allow only SCSI devices that could reasonably be multipathed, set this parameter to (SCSI_IDENT_|ID_WWN) as in the following example. | [
"Multipath wwids, Version : 1.0 NOTE: This file is automatically maintained by multipath and multipathd. You should not need to edit this file in normal circumstances. # Valid WWIDs: /3600d0230000000000e13955cc3757802/ /3600d0230000000000e13955cc3757801/ /3600d0230000000000e13955cc3757800/ /3600d02300069c9ce09d41c31f29d4c00/ /SWINSYS SF2372 0E13955CC3757802/ /3600d0230000000000e13955cc3757803/",
"blacklist { wwid 26353900f02796769 }",
"blacklist { devnode \"^sd[a-z]\" }",
"blacklist { devnode \"^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*\" devnode \"^(td|ha)d[a-z]\" }",
"blacklist { device { vendor \"IBM\" product \"3S42\" #DS4200 Product 10 } device { vendor \"HP\" product \"*\" } }",
"blacklist { property \"ID_ATA\" }",
"blacklist { protocol \"scsi:unspec\" protocol \"undef\" }",
"blacklist { wwid \"*\" } blacklist_exceptions { wwid \"3600d0230000000000e13955cc3757803\" }",
"blacklist_exceptions { property \"(SCSI_IDENT_|ID_WWN)\" }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/config_file_blacklist |
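To make the interaction between the blacklist and blacklist_exceptions sections concrete, the following fragment is a minimal sketch of an /etc/multipath.conf configuration, assuming you want to blacklist every IBM device except the DS4200 arrays shown in the earlier device example. Because device exceptions apply only to device entries, the exception must itself be written as a device section; the vendor and product strings are reused from the examples above:

blacklist {
    device {
        vendor "IBM"
        product "*"
    }
}
blacklist_exceptions {
    device {
        vendor "IBM"
        product "3S42"     # DS4200 Product 10
    }
}

After editing the configuration file, you can run multipath -d to perform a dry run that reports which multipath devices would be created without actually creating them, which is a convenient way to confirm that the blacklist behaves as intended.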
Preface | Preface The company SSO integration feature allows you to log in to your Red Hat account by using your company login credentials instead of your Red Hat account credentials. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_customer_portal/1/html/using_company_single_sign-on_integration/pr01 |
Chapter 5. Using interactive Hammer shell | Chapter 5. Using interactive Hammer shell You can issue hammer commands through an interactive shell. To start the shell, run the following command: In the shell, you can enter sub-commands directly without typing "hammer", which can be useful for testing commands before using them in a script. To exit the shell, type exit or press Ctrl + D . | [
"hammer shell"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_hammer_cli_tool/using-interactive-hammer-shell |
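As an illustration, a short interactive session might look like the following sketch. The organization list and host list subcommands are standard Hammer subcommands, but the hammer> prompt text shown here is an assumption, and the results depend entirely on your Satellite inventory:

hammer shell
hammer> organization list
hammer> host list
hammer> exit

Because each line entered at the prompt is an ordinary Hammer subcommand, you can try a candidate command interactively, inspect its output, and only then commit it to a script.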
Chapter 8. Metal3RemediationTemplate [infrastructure.cluster.x-k8s.io/v1beta1] | Chapter 8. Metal3RemediationTemplate [infrastructure.cluster.x-k8s.io/v1beta1] Description Metal3RemediationTemplate is the Schema for the metal3remediationtemplates API. Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Metal3RemediationTemplateSpec defines the desired state of Metal3RemediationTemplate. status object Metal3RemediationTemplateStatus defines the observed state of Metal3RemediationTemplate. 8.1.1. .spec Description Metal3RemediationTemplateSpec defines the desired state of Metal3RemediationTemplate. Type object Required template Property Type Description template object Metal3RemediationTemplateResource describes the data needed to create a Metal3Remediation from a template. 8.1.2. .spec.template Description Metal3RemediationTemplateResource describes the data needed to create a Metal3Remediation from a template. Type object Required spec Property Type Description spec object Spec is the specification of the desired behavior of the Metal3Remediation. 8.1.3. .spec.template.spec Description Spec is the specification of the desired behavior of the Metal3Remediation. Type object Property Type Description strategy object The Strategy field defines the remediation strategy. 8.1.4. .spec.template.spec.strategy Description The Strategy field defines the remediation strategy. Type object Property Type Description retryLimit integer Sets the maximum number of remediation retries. timeout string Sets the timeout between remediation retries. type string Type of remediation. 8.1.5. .status Description Metal3RemediationTemplateStatus defines the observed state of Metal3RemediationTemplate. Type object Required status Property Type Description status object Metal3RemediationStatus defines the observed state of Metal3Remediation. 8.1.6. .status.status Description Metal3RemediationStatus defines the observed state of Metal3Remediation. Type object Property Type Description lastRemediated string LastRemediated identifies when the host was last remediated. phase string Phase represents the current phase of machine remediation, for example, Pending, Running, or Done. retryCount integer RetryCount can be used as a counter during the remediation. The field can hold, for example, the number of reboots. 8.2.
API endpoints The following API endpoints are available: /apis/infrastructure.cluster.x-k8s.io/v1beta1/metal3remediationtemplates GET : list objects of kind Metal3RemediationTemplate /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates DELETE : delete collection of Metal3RemediationTemplate GET : list objects of kind Metal3RemediationTemplate POST : create a Metal3RemediationTemplate /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates/{name} DELETE : delete a Metal3RemediationTemplate GET : read the specified Metal3RemediationTemplate PATCH : partially update the specified Metal3RemediationTemplate PUT : replace the specified Metal3RemediationTemplate /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates/{name}/status GET : read status of the specified Metal3RemediationTemplate PATCH : partially update status of the specified Metal3RemediationTemplate PUT : replace status of the specified Metal3RemediationTemplate 8.2.1. /apis/infrastructure.cluster.x-k8s.io/v1beta1/metal3remediationtemplates Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Metal3RemediationTemplate Table 8.2. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplateList schema 401 - Unauthorized Empty 8.2.2. /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates Table 8.3.
Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Metal3RemediationTemplate Table 8.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.6. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Metal3RemediationTemplate Table 8.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize.
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.8. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplateList schema 401 - Unauthorized Empty HTTP method POST Description create a Metal3RemediationTemplate Table 8.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.10. Body parameters Parameter Type Description body Metal3RemediationTemplate schema Table 8.11. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 201 - Created Metal3RemediationTemplate schema 202 - Accepted Metal3RemediationTemplate schema 401 - Unauthorized Empty 8.2.3. /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates/{name} Table 8.12. Global path parameters Parameter Type Description name string name of the Metal3RemediationTemplate namespace string object name and auth scope, such as for teams and projects Table 8.13.
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Metal3RemediationTemplate Table 8.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. The value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.15. Body parameters Parameter Type Description body DeleteOptions schema Table 8.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Metal3RemediationTemplate Table 8.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.18. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Metal3RemediationTemplate Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.20. Body parameters Parameter Type Description body Patch schema Table 8.21. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Metal3RemediationTemplate Table 8.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.23. Body parameters Parameter Type Description body Metal3RemediationTemplate schema Table 8.24. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 201 - Created Metal3RemediationTemplate schema 401 - Unauthorized Empty 8.2.4. /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediationtemplates/{name}/status Table 8.25. Global path parameters Parameter Type Description name string name of the Metal3RemediationTemplate namespace string object name and auth scope, such as for teams and projects Table 8.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Metal3RemediationTemplate Table 8.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from.
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.28. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Metal3RemediationTemplate Table 8.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.30. Body parameters Parameter Type Description body Patch schema Table 8.31. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Metal3RemediationTemplate Table 8.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.33. Body parameters Parameter Type Description body Metal3RemediationTemplate schema Table 8.34. HTTP responses HTTP code Response body 200 - OK Metal3RemediationTemplate schema 201 - Created Metal3RemediationTemplate schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/provisioning_apis/metal3remediationtemplate-infrastructure-cluster-x-k8s-io-v1beta1
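To ground the schema reference above, the following is a minimal sketch of a Metal3RemediationTemplate manifest that exercises the .spec.template.spec.strategy fields documented in section 8.1.4. The metadata values and the Reboot strategy type are illustrative assumptions rather than values mandated by this reference:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: Metal3RemediationTemplate
metadata:
  name: worker-remediation          # hypothetical template name
  namespace: openshift-machine-api  # hypothetical namespace
spec:
  template:
    spec:
      strategy:
        type: Reboot       # assumed remediation type
        retryLimit: 1      # maximum number of remediation retries
        timeout: 5m0s      # timeout between remediation retries

A manifest like this could be created through the POST endpoint described in section 8.2.2, or simply with oc apply -f <file> .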
Chapter 1. Installing OpenShift Pipelines | Chapter 1. Installing OpenShift Pipelines This guide walks cluster administrators through the process of installing the Red Hat OpenShift Pipelines Operator to an OpenShift Container Platform cluster. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed oc CLI. You have installed OpenShift Pipelines ( tkn ) CLI on your local system. Your cluster has the Marketplace capability enabled or the Red Hat Operator catalog source configured manually. Note In a cluster with both Windows and Linux nodes, Red Hat OpenShift Pipelines can run only on Linux nodes. 1.1. Installing the Red Hat OpenShift Pipelines Operator in web console You can install Red Hat OpenShift Pipelines using the Operator listed in the OpenShift Container Platform OperatorHub. When you install the Red Hat OpenShift Pipelines Operator, the custom resources (CRs) required for the pipelines configuration are automatically installed along with the Operator. The default Operator custom resource definition (CRD) config.operator.tekton.dev is now replaced by tektonconfigs.operator.tekton.dev . In addition, the Operator provides the following additional CRDs to individually manage OpenShift Pipelines components: tektonpipelines.operator.tekton.dev , tektontriggers.operator.tekton.dev , and tektonaddons.operator.tekton.dev . If you have OpenShift Pipelines already installed on your cluster, the existing installation is seamlessly upgraded. The Operator will replace the instance of config.operator.tekton.dev on your cluster with an instance of tektonconfigs.operator.tekton.dev and additional objects of the other CRDs as necessary. Warning If you manually changed your existing installation, such as changing the target namespace in the config.operator.tekton.dev CRD instance by making changes to the resource name - cluster field, then the upgrade path is not smooth. In such cases, the recommended workflow is to uninstall your installation and reinstall the Red Hat OpenShift Pipelines Operator. The Red Hat OpenShift Pipelines Operator now provides the option to choose the components that you want to install by specifying profiles as part of the TektonConfig custom resource (CR). The TektonConfig CR is automatically installed when the Operator is installed. The supported profiles are: Lite: This installs only Tekton Pipelines. Basic: This installs Tekton Pipelines, Tekton Triggers, and Tekton Chains. All: This is the default profile used when the TektonConfig CR is installed. This profile installs all of the Tekton components, including Tekton Pipelines, Tekton Triggers, Tekton Chains, Pipelines as Code, and Tekton Addons. Tekton Addons includes the ClusterTasks , ClusterTriggerBindings , ConsoleCLIDownload , ConsoleQuickStart , and ConsoleYAMLSample resources, as well as the tasks available using the cluster resolver from the openshift-pipelines namespace. Procedure In the Administrator perspective of the web console, navigate to Operators OperatorHub . Use the Filter by keyword box to search for Red Hat OpenShift Pipelines Operator in the catalog. Click the Red Hat OpenShift Pipelines Operator tile. Read the brief description about the Operator on the Red Hat OpenShift Pipelines Operator page. Click Install . On the Install Operator page: Select All namespaces on the cluster (default) for the Installation Mode .
This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be made available to all namespaces in the cluster. Select Automatic for the Approval Strategy . This ensures that future upgrades to the Operator are handled automatically by the Operator Lifecycle Manager (OLM). If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version. Select an Update Channel . The latest channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator. Currently, it is the default channel for installing the Red Hat OpenShift Pipelines Operator. To install a specific version of the Red Hat OpenShift Pipelines Operator, cluster administrators can use the corresponding pipelines-<version> channel. For example, to install the Red Hat OpenShift Pipelines Operator version 1.8.x , you can use the pipelines-1.8 channel. Note Starting with OpenShift Container Platform 4.11, the preview and stable channels for installing and upgrading the Red Hat OpenShift Pipelines Operator are not available. However, in OpenShift Container Platform 4.10 and earlier versions, you can use the preview and stable channels for installing and upgrading the Operator. Click Install . You will see the Operator listed on the Installed Operators page. Note The Operator is installed automatically into the openshift-operators namespace. Verify that the Status is set to Succeeded Up to date to confirm successful installation of Red Hat OpenShift Pipelines Operator. Warning The success status may show as Succeeded Up to date even if installation of other components is in progress. Therefore, it is important to verify the installation manually in the terminal. Verify that all components of the Red Hat OpenShift Pipelines Operator were installed successfully. Log in to the cluster on the terminal, and run the following command: USD oc get tektonconfig config Example output NAME VERSION READY REASON config 1.15.0 True If the READY condition is True , the Operator and its components have been installed successfully. Additionally, check the components' versions by running the following command: USD oc get tektonpipeline,tektontrigger,tektonchain,tektonaddon,pac Example output 1.2. Installing the OpenShift Pipelines Operator by using the CLI You can install Red Hat OpenShift Pipelines Operator from OperatorHub by using the command-line interface (CLI). Procedure Create a Subscription object YAML file to subscribe a namespace to the Red Hat OpenShift Pipelines Operator, for example, sub.yaml : Example Subscription YAML apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-pipelines-operator namespace: openshift-operators spec: channel: <channel_name> 1 name: openshift-pipelines-operator-rh 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4 1 Name of the channel that you want to subscribe to. The pipelines-<version> channel is the default channel. For example, the default channel for Red Hat OpenShift Pipelines Operator version 1.7 is pipelines-1.7 . The latest channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator. 2 Name of the Operator to subscribe to. 3 Name of the CatalogSource object that provides the Operator. 4 Namespace of the CatalogSource object.
Use openshift-marketplace for the default OperatorHub catalog sources. Create the Subscription object by running the following command: USD oc apply -f sub.yaml The subscription installs the Red Hat OpenShift Pipelines Operator into the openshift-operators namespace. The Operator automatically installs OpenShift Pipelines into the default openshift-pipelines target namespace. 1.3. Red Hat OpenShift Pipelines Operator in a restricted environment The Red Hat OpenShift Pipelines Operator enables support for installation of pipelines in a restricted network environment. The Operator installs a proxy webhook that sets the proxy environment variables in the containers of the pod created by tekton-controllers based on the cluster proxy object. It also sets the proxy environment variables in the TektonPipelines , TektonTriggers , Controllers , Webhooks , and Operator Proxy Webhook resources. By default, the proxy webhook is disabled for the openshift-pipelines namespace. To disable it for any other namespace, you can add the operator.tekton.dev/disable-proxy: true label to the namespace object. 1.4. Additional resources You can learn more about installing Operators on OpenShift Container Platform in the adding Operators to a cluster section. To install Tekton Chains using the Red Hat OpenShift Pipelines Operator, see Using Tekton Chains for Red Hat OpenShift Pipelines supply chain security . To install and deploy in-cluster Tekton Hub, see Using Tekton Hub with Red Hat OpenShift Pipelines . For more information on using pipelines in a restricted environment, see: Mirroring images to run pipelines in a restricted environment Configuring Samples Operator for a restricted cluster About disconnected installation mirroring | [
"oc get tektonconfig config",
"NAME VERSION READY REASON config 1.15.0 True",
"oc get tektonpipeline,tektontrigger,tektonchain,tektonaddon,pac",
"NAME VERSION READY REASON tektonpipeline.operator.tekton.dev/pipeline v0.47.0 True NAME VERSION READY REASON tektontrigger.operator.tekton.dev/trigger v0.23.1 True NAME VERSION READY REASON tektonchain.operator.tekton.dev/chain v0.16.0 True NAME VERSION READY REASON tektonaddon.operator.tekton.dev/addon 1.11.0 True NAME VERSION READY REASON openshiftpipelinesascode.operator.tekton.dev/pipelines-as-code v0.19.0 True",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-pipelines-operator namespace: openshift-operators spec: channel: <channel_name> 1 name: openshift-pipelines-operator-rh 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4",
"oc apply -f sub.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/installing_and_configuring/installing-pipelines |
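As a complement to the console and CLI procedures above, the following sketch shows how the installation profile described in section 1.1 could be adjusted after installation. It assumes the default TektonConfig instance named config that the Operator creates, and uses the Lite profile as an example:

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: lite                        # install only Tekton Pipelines
  targetNamespace: openshift-pipelines

Applying this manifest with oc apply -f <file> switches the installation to the Lite profile. Similarly, based on the label documented in section 1.3, disabling the proxy webhook for a hypothetical namespace named my-namespace might look like oc label namespace my-namespace operator.tekton.dev/disable-proxy=true .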
Chapter 13. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure | Chapter 13. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.12, you can install a cluster on Google Cloud Platform (GCP) that uses infrastructure that you provide and an internal mirror of the installation release content. Important While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the GCP APIs. The steps for performing a user-provided infrastructure install are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 13.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to *.googleapis.com and accounts.google.com . If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 13.2. About installations in restricted networks In OpenShift Container Platform 4.12, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 
Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 13.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 13.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 13.4. Configuring your GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 13.4.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 13.4.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 13.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 13.2. 
Optional API services API service Console service name Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 13.4.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 13.4.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 13.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Networking Global 11 1 Forwarding rules Compute Global 2 0 Health checks Compute Global 2 0 Images Compute Global 1 0 Networks Networking Global 1 0 Routers Networking Global 1 0 Routes Networking Global 2 0 Subnetworks Compute Global 2 0 Target pools Networking Global 2 0 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. 
If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 13.4.5. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. You must have a service account key or a virtual machine with an attached service account to create the cluster. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 13.4.6. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create a service account with the following permissions. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin IAM Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using passthrough credentials mode Compute Load Balancer Admin IAM Role Viewer Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The roles are applied to the service accounts that the control plane and compute machines use: Table 13.4. 
GCP service account permissions Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 13.4.7. Required GCP permissions for user-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the user-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. For more information, see "Required roles for using passthrough credentials mode" in the "Required GCP roles" section. Example 13.1. Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 13.2. Required permissions for creating load balancer resources compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use Example 13.3. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list dns.resourceRecordSets.update Example 13.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 13.5. 
Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 13.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 13.7. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly Example 13.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 13.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 13.10. Required IAM permissions for installation iam.roles.get Example 13.11. Required Images permissions for installation compute.images.create compute.images.delete compute.images.get compute.images.list Example 13.12. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 13.13. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 13.14. Required permissions for deleting load balancer resources compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list Example 13.15. Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 13.16. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 13.17. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 13.18. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 13.19. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list Example 13.20. 
Required Images permissions for deletion compute.images.delete compute.images.list Example 13.21. Required permissions to get Region related information compute.regions.get Example 13.22. Required Deployment Manager permissions deploymentmanager.deployments.create deploymentmanager.deployments.delete deploymentmanager.deployments.get deploymentmanager.deployments.list deploymentmanager.manifests.get deploymentmanager.operations.get deploymentmanager.resources.list Additional resources Optimizing storage 13.4.8. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 13.4.9. Installing and configuring CLI tools for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP. Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions. Procedure Install the following binaries in USDPATH : gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation. 13.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 13.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 13.5. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. 
At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can use either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits. 13.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 13.6. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. 13.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 13.23. Machine series A2 A3 C2 C2D C3 C3D C4 E2 M1 N1 N2 N2D N4 Tau T2D 13.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . 13.6. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files.
You also have the option to first set up a separate var partition during the preparation phases of installation. 13.6.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 
2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 13.6.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. 
The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.gcp field: network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet> For platform.gcp.network , specify the name for the existing Google VPC. For platform.gcp.controlPlaneSubnet and platform.gcp.computeSubnet , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 13.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 13.6.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Optional: If you do not want the cluster to provision compute machines, remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Optional: Adding the ingress DNS records 13.7. 
Exporting common variables 13.7.1. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 13.7.2. Exporting common variables for Deployment Manager templates You must export a common set of variables that are used with the provided Deployment Manager templates, which assist in completing a user-provisioned infrastructure installation on Google Cloud Platform (GCP). Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Generate the Ignition config files for your cluster. Install the jq package. Procedure Export the following common variables to be used by the provided Deployment Manager templates: USD export BASE_DOMAIN='<base_domain>' USD export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' USD export NETWORK_CIDR='10.0.0.0/16' USD export MASTER_SUBNET_CIDR='10.0.0.0/17' USD export WORKER_SUBNET_CIDR='10.0.128.0/17' USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 USD export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` USD export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` USD export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` USD export REGION=`jq -r .gcp.region <installation_directory>/metadata.json` 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 13.8. Creating a VPC in GCP You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Optionally, you can verify that the template renders correctly before you deploy it, as shown in the sketch that follows.
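The following minimal sketch, which assumes Python 3 is available and that 01_vpc.py is saved in the current directory, loads the template and calls its GenerateConfig function with a stand-in context object so that you can confirm the resource names and types before creating the deployment; the property values shown are illustrative:

# Minimal sketch: render 01_vpc.py locally to verify that it parses and
# emits the expected resources. SimpleNamespace stands in for the context
# object that Deployment Manager normally supplies to GenerateConfig.
import importlib.util
from types import SimpleNamespace

spec = importlib.util.spec_from_file_location("vpc", "01_vpc.py")
vpc = importlib.util.module_from_spec(spec)
spec.loader.exec_module(vpc)

context = SimpleNamespace(properties={
    "infra_id": "openshift-vw9j6",       # substitute your INFRA_ID value
    "region": "us-central1",             # substitute your REGION value
    "master_subnet_cidr": "10.0.0.0/17",
    "worker_subnet_cidr": "10.0.128.0/17",
})

for resource in vpc.GenerateConfig(context)["resources"]:
    print(resource["name"], resource["type"])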
Create a 01_vpc.yaml resource definition file: USD cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 . 4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml 13.8.1. Deployment Manager template for the VPC You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 13.24. 01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources} 13.9. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 13.9.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. 
You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 13.9.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 13.7. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 13.8. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 13.9. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 13.10. Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires. 
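Because the internal load balancer template spreads the control plane instance groups across three zones, you can first confirm that your region exposes at least three zones. The following minimal sketch, which assumes an authenticated gcloud CLI, mirrors the zone export commands in the next step:

# Minimal sketch: confirm the region exposes at least three zones before
# you export ZONE_0, ZONE_1, and ZONE_2. Assumes an authenticated gcloud CLI.
import json
import subprocess

REGION = "us-central1"  # replace with your REGION value

raw = subprocess.check_output(
    ["gcloud", "compute", "regions", "describe", REGION, "--format=json"]
)
zones = [url.rsplit("/", 1)[-1] for url in json.loads(raw)["zones"]]
print("available zones:", zones)
if len(zones) < 3:
    raise SystemExit(f"{REGION} exposes fewer than three zones")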
Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 13.10.1. Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 13.25. 02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' 
+ context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 13.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 13.26. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 13.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. 
Create a 02_dns.yaml resource definition file: USD cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the external DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} 13.11.1. Deployment Manager template for the private DNS You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster: Example 13.27. 02_dns.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources} 13.12. Creating firewall rules in GCP You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the firewall rules that your cluster requires. Optionally, you can extend the template with additional rules before you deploy it, as shown in the sketch that follows.
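If your environment requires additional openings, you can append another rule to the resources list inside GenerateConfig in 03_firewall.py before you deploy it. The fragment below follows the same shape as the rules that the template already defines; the rule name, TCP port 8080, and the worker target tag are hypothetical examples, not part of the documented template:

# Illustrative only: an extra rule appended to the resources list inside
# GenerateConfig in 03_firewall.py. The '-custom-app' name and TCP port
# 8080 are hypothetical examples.
resources.append({
    'name': context.properties['infra_id'] + '-custom-app',
    'type': 'compute.v1.firewall',
    'properties': {
        'network': context.properties['cluster_network'],
        'allowed': [{
            'IPProtocol': 'tcp',
            'ports': ['8080']
        }],
        # Restrict the source to the VPC CIDR rather than exposing the
        # port externally.
        'sourceRanges': [context.properties['network_cidr']],
        'targetTags': [context.properties['infra_id'] + '-worker']
    }
})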
Create a 03_firewall.yaml resource definition file: USD cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF 1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to USD{NETWORK_CIDR} . 2 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 cluster_network is the selfLink URL to the cluster network. 4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml 13.12.1. Deployment Manager template for firewall rules You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster: Example 13.28. 03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network':
context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources} 13.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: USD cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: USD export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." 
--format json | jq -r '.[0].email'`) Export the variable for the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually: USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: USD gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT} 13.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 13.29. 03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 13.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . 
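The selection rule in the preceding Important note can be expressed programmatically. The following is an illustrative sketch only, with placeholder file names, of choosing the highest RHCOS version that is less than or equal to the target OpenShift Container Platform version:

# pick_rhcos.py: apply the "highest version <= target" rule from the note (sketch)
import re

def parse_version(name):
    match = re.search(r'rhcos-(\d+)\.(\d+)\.(\d+)', name)
    return tuple(map(int, match.groups())) if match else None

def pick_image(filenames, ocp_version):
    target = tuple(map(int, ocp_version.split('.')))
    eligible = [(parse_version(f), f) for f in filenames
                if parse_version(f) and parse_version(f) <= target]
    return max(eligible)[1] if eligible else None

# Placeholder file names for illustration.
print(pick_image(["rhcos-4.12.0-x86_64-gcp.x86_64.tar.gz",
                  "rhcos-4.11.9-x86_64-gcp.x86_64.tar.gz"], "4.12.3"))
# -> rhcos-4.12.0-x86_64-gcp.x86_64.tar.gz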
Create the Google storage bucket: USD gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: USD gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: USD export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: USD gcloud compute images create "USD{INFRA_ID}-rhcos-image" \ --source-uri="USD{IMAGE_SOURCE}" 13.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Ensure pyOpenSSL is installed. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: USD export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: USD gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition USD gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/ Create a signed URL for the bootstrap instance to use to access the Ignition config. Export the URL from the output as a variable: USD export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print USD5}'` Create a 04_bootstrap.yaml resource definition file: USD cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. 
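Note that the bootstrap template does not embed the contents of bootstrap.ign as instance metadata. Instead, it embeds a small pointer Ignition config whose replace source is the signed URL, and the node fetches the full file at first boot. The following sketch shows the pointer payload that 04_bootstrap.py constructs, using a placeholder URL:

# pointer_ignition.py: build the pointer Ignition config used as user-data (sketch)
import json

def pointer_config(signed_url):
    return json.dumps({
        "ignition": {
            "config": {"replace": {"source": signed_url}},
            "version": "3.2.0",
        }
    })

# Placeholder; in the procedure this is the BOOTSTRAP_IGN signed URL.
print(pointer_config("https://storage.googleapis.com/example-bootstrap-ignition/bootstrap.ign"))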
Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the bootstrap machine manually. Add the bootstrap instance to the internal load balancer instance group: USD gcloud compute instance-groups unmanaged add-instances \ USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: USD gcloud compute backend-services add-backend \ USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} 13.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 13.30. 04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 13.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. 
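A note on the manual load balancer steps: because Deployment Manager cannot manage load balancer membership, the bootstrap machine in the preceding section and the control plane machines in this procedure are both added to their instance groups by hand. If you script these steps, a small helper can be reused for each machine. This is a sketch only, assuming the environment variables exported earlier in this chapter and the gcloud CLI on your PATH:

# instance_groups.py: wrap the manual instance-group membership step (sketch)
import os
import subprocess

def add_to_instance_group(group, zone, instance):
    subprocess.run(["gcloud", "compute", "instance-groups", "unmanaged",
                    "add-instances", group,
                    "--zone=" + zone, "--instances=" + instance], check=True)

infra_id = os.environ["INFRA_ID"]
add_to_instance_group(infra_id + "-bootstrap-ig", os.environ["ZONE_0"],
                      infra_id + "-bootstrap")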
Export the following variable required by the resource definition: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign` Create a 05_control_plane.yaml resource definition file: USD cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. Run the following commands to add the control plane machines to the appropriate instance groups: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2 For an external cluster, you must also run the following commands to add the control plane machines to the target pools: USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_0}" --instances=USD{INFRA_ID}-master-0 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_1}" --instances=USD{INFRA_ID}-master-1 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_2}" --instances=USD{INFRA_ID}-master-2 13.16.1. Deployment Manager template for control plane machines You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 13.31. 
05_control_plane.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources} 13.17. Wait for bootstrap completion and remove bootstrap resources in GCP After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. 
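As an aside before the procedure: the three control plane blocks in Example 13.31 differ only by their index and zone, so the same output can be produced with a loop. The following sketch is equivalent to, but is not, the shipped template:

def GenerateConfig(context):
    # Loop-based variant of Example 13.31: one block per zone instead of three copies.
    resources = []
    for index, zone in enumerate(context.properties['zones']):
        resources.append({
            'name': context.properties['infra_id'] + '-master-' + str(index),
            'type': 'compute.v1.instance',
            'properties': {
                'disks': [{'autoDelete': True, 'boot': True,
                           'initializeParams': {
                               'diskSizeGb': context.properties['root_volume_size'],
                               'diskType': 'zones/' + zone + '/diskTypes/pd-ssd',
                               'sourceImage': context.properties['image']}}],
                'machineType': 'zones/' + zone + '/machineTypes/'
                               + context.properties['machine_type'],
                'metadata': {'items': [{'key': 'user-data',
                                        'value': context.properties['ignition']}]},
                'networkInterfaces': [{'subnetwork': context.properties['control_subnet']}],
                'serviceAccounts': [{'email': context.properties['service_account_email'],
                                     'scopes': ['https://www.googleapis.com/auth/cloud-platform']}],
                'tags': {'items': [context.properties['infra_id'] + '-master']},
                'zone': zone
            }
        })
    return {'resources': resources}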
Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} USD gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign USD gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition USD gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap 13.18. Creating additional worker machines in GCP You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file. Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires. Export the variables that the resource definition uses. Export the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the email address for your service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." 
--format json | jq -r '.[0].email'`) Export the location of the compute machine Ignition config file: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign` Create a 06_worker.yaml resource definition file: USD cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF 1 name is the name of the worker machine, for example worker-0 . 2 9 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 10 zone is the zone to deploy the worker machine into, for example us-central1-a . 4 11 compute_subnet is the selfLink URL to the compute subnet. 5 12 image is the selfLink URL to the RHCOS image. 6 13 machine_type is the machine type of the instance, for example n1-standard-4 . 7 14 service_account_email is the email address for the worker service account that you created. 8 15 ignition is the contents of the worker.ign file. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145 13.18.1. Deployment Manager template for worker machines You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 13.32. 06_worker.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources} 13.19.
Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 13.20. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 13.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. 
Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 13.22. Optional: Adding the ingress DNS records If you removed the DNS zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites Configure a GCP account. Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Create the worker machines. 
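Referring back to the note in the previous section about automatically approving kubelet serving CSRs: one crude way to implement such a method is to poll for pending CSRs and approve only those from expected requestors. The following is an illustrative sketch that drives the oc CLI; a production approver must also confirm the identity of each node before approving:

# csr_approver.py: poll for pending CSRs and approve expected requestors (sketch)
import json
import subprocess
import time

EXPECTED_PREFIXES = (
    "system:serviceaccount:openshift-machine-config-operator:node-bootstrapper",
    "system:node:",
)

def pending_csrs():
    out = subprocess.run(["oc", "get", "csr", "-o", "json"],
                         capture_output=True, text=True, check=True).stdout
    for item in json.loads(out)["items"]:
        if not item.get("status"):  # no status yet: still pending
            yield item["metadata"]["name"], item["spec"]["username"]

while True:
    for name, requestor in pending_csrs():
        if requestor.startswith(EXPECTED_PREFIXES):
            subprocess.run(["oc", "adm", "certificate", "approve", name], check=True)
    time.sleep(30)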
Procedure Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 Add the A record to your zones: To use A records: Export the variable for the router IP address: USD export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add the A record to the private zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the A record to the public zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com 13.23. Completing a GCP installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned GCP infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Observe the running state of your cluster. Run the following command to view the current cluster version and status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): USD oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m Run the following command to view your cluster pods: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m ... openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m When the current cluster version is AVAILABLE , the installation is complete. 13.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. 
If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 13.25. Next steps Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster .
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`",
"cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}",
"export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)",
"export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)",
"export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)",
"export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)",
"cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml",
"export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)",
"export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}",
"def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}",
"cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}",
"cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}",
"cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml",
"export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"",
"gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}",
"gsutil mb gs://<bucket_name>",
"gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>",
"export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz",
"gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"",
"export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)",
"gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition",
"gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/",
"export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep \"^gs:\" | awk '{print USD5}'`",
"cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap",
"gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign`",
"cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign",
"gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition",
"gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap",
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign`",
"cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98",
"export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete",
"oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_gcp/installing-restricted-networks-gcp |
Chapter 2. Filtering your Microsoft Azure data before integrating it into hybrid committed spend To share a subset of your billing data with Red Hat, you can configure a function script in Microsoft Azure. This script copies exports to an object storage bucket that hybrid committed spend can then access and filter. To integrate your Microsoft Azure account: Create a storage account and resource group. Configure Storage Account Contributor and Reader roles for access. Create a function to filter the data you want to send to Red Hat. Schedule daily cost exports to a storage account accessible to Red Hat. If you are using RHEL metering, after you integrate your data with cost management, go to Adding RHEL metering to a Microsoft Azure integration to finish configuring your integration for RHEL metering. Note Because third-party products and documentation can change, instructions for configuring the third-party integrations provided are general and correct at the time of publishing. For the most up-to-date information, see the Microsoft Azure documentation . Add your Microsoft Azure integration to hybrid committed spend from the Integrations page . 2.1. Adding a Microsoft Azure account Add your Microsoft Azure account as an integration so that hybrid committed spend can process the cost and usage data. Prerequisites You must have a Red Hat user account with Cloud Administrator entitlements. You must have a service account . Your service account must have the correct roles assigned in Hybrid Cloud Console to enable cost management access. For more information, see the User Access Configuration Guide . In cost management: Click Settings Menu > Integrations . In the Cloud tab, click Add integration . In the Add a cloud integration wizard, select Microsoft Azure and click Next . Enter a name for your integration and click Next . In the Select application step, select Hybrid committed spend and click Next . In the Specify cost export scope step, select I wish to manually customize the data set sent to Cost Management . If you are registering RHEL usage billing, select Include RHEL usage . Otherwise, proceed to the next step. Click Next . 2.2. Creating a Microsoft Azure resource group and storage account Create a storage account in Microsoft Azure to house your billing exports and a second storage account to house your filtered data. In your Microsoft Azure account : In the search bar, enter "storage" and click Storage accounts . On the Storage accounts page, click Create . In the Resource Group field, click Create new . Enter a name and click OK . In this example, use filtered-data-group . In the Instance details section, enter a name in the Storage account name field. For example, use filtereddata . Copy the names of the resource group and storage account so you can add them to the Add a cloud integration wizard in Red Hat Hybrid Cloud Console and click Review . Review the storage account and click Create . In cost management: In the Add a cloud integration wizard, paste the resource group and storage account names that you copied into Resource group name and Storage account name . You will continue using the wizard in the following sections. 2.3. Finding your Microsoft Azure subscription ID Find your subscription_id in the Microsoft Azure Cloud Shell and add it to the Add a cloud integration wizard in hybrid committed spend. In your Microsoft Azure account : Click Cloud Shell .
Enter the following command to get your Subscription ID: az account show --query "{subscription_id: id }" Copy the value that is generated for subscription_id . Example response { "subscription_id": 00000000-0000-0000-000000000000 } In cost management: In the Subscription ID field of the Add a cloud integration wizard, paste the value that you copied in the previous step. Click Next . You will continue using the wizard in the following sections. 2.4. Creating Microsoft Azure roles for Red Hat access To grant Red Hat access to your data, you must configure dedicated roles in Microsoft Azure. If you have an additional resource under the same Azure subscription, you might not need to create a new service account. In cost management: In the Roles section of the Add a cloud integration wizard, copy the az ad sp create-for-rbac command to create a service principal with the Cost Management Storage Account Contributor role. In your Microsoft Azure account : Click Cloud Shell . In the cloud shell prompt, paste the command that you copied. Copy the values that are generated for the client ID, secret, and tenant: Example response { "client_id": "00000000-0000-0000-000000000000", "secret": "00000000-0000-0000-000000000000", "tenant": "00000000-0000-0000-000000000000" } In cost management: Return to the Add a cloud integration wizard and paste the values that you copied into their corresponding fields on the Roles page. Click Next . Review your information and click Add to complete your integration. In the pop-up screen that appears, copy the Source UUID for your function script. 2.5. Creating a daily export in Microsoft Azure Next, set up an automatic export of your cost data to your Microsoft Azure storage account before you filter it for cost management. In your Microsoft Azure account : In the search bar, enter "cost exports" and click the result. Click Create . In Select a template , click Cost and usage (actual) to export your standard usage and purchase charges. Follow the steps in the Azure wizard: You can either create a new resource group and storage account or select existing ones. In this example, we use billingexportdata for the storage account and billinggroup for the resource group. You must set Format to CSV . Set Compression type to None or Gzip . Review the information and click Create . In cost management: Return to the Add a cloud integration wizard and complete the steps in Daily export . Click Next . You will continue using the wizard in the following sections. 2.6. Creating a function in Microsoft Azure Creating a function in Azure filters your data and adds it to the storage account that you created to share with Red Hat. You can use the example Python script in this section to gather and share the filtered cost data from your export. Prerequisites You must have Visual Studio Code installed on your device. You must have the Microsoft Azure functions extension installed in Visual Studio Code. To create an Azure function, Microsoft recommends that you use their Microsoft Visual Studio Code IDE to develop and deploy code. For more information about configuring Visual Studio Code, see Quickstart: Create a function in Azure with Python using Visual Studio Code . In your Microsoft Azure account : Enter functions in the search bar and select Function App . Click Create . Select a hosting option for your function and click Select . On the Create Function App page, add your resource group. In the Instance Details section, name your function app. In Runtime stack , select Python . In Version , select latest.
Click Review + create : Click Create . Wait for the resource to be created and then click Go to resource to view. In Visual Studio Code: Click the Microsoft Azure tab and sign in to Azure. In the Workspaces drop-down, click Azure Functions , which appears as an icon with an orange lightning bolt. Click Create Function . Follow the prompts to set a local location and select a language and version for your function. In this example, select Python , Model 2 and the latest version available. In the Select a template for your function dialog, select Timer trigger , name the function, and then press Enter. Set the cron expression to control when the function runs. In this example, use 0 9 * * * to run the function daily at 9 AM: Click Create . Click Open in the current window . In your requirements.txt file: After you create the function in your development environment, open the requirements.txt file, add the following requirements, and save the file: In __init__.py : Copy the Python script and paste it into __init__.py . Change the values in the section marked # Required vars to update to the values that correspond to your environment. The example script uses secrets from Azure Key Vaults to configure your service account client_id and client_secret as environment variables. You can alternatively enter your credentials directly into the script, although this is not best practice. The default script has built-in options for filtering your data or RHEL subscription filtering. You must uncomment the type of filtering you want to use or write your own custom filtering. Remove the comment from one of the following, not both: filtered_data = hcs_filtering(df) filtered_data = rhel_filtering(df) If you want to write more customized filtering, you must include the following required columns: Some of the columns differ depending on the report type. The example script normalizes these columns and all filtered reports must follow this example. To filter the data, you must add dataframe filtering. For example: Exact matching: df.loc[(df["publishertype"] == "Marketplace")] Filters out all data that does not have a publisherType of Marketplace. Contains: df.loc[df["publishername"].astype(str).str.contains("Red Hat")] Filters all data that does not contain Red Hat in the publisherName . You can stack filters by using & (for AND) and | (for OR) with your df.loc clause. More useful filters: subscriptionid Filters specific subscriptions. resourcegroup Filters specific resource groups. resourcelocation Filters data in a specific region. You can use servicename , servicetier , metercategory and metersubcategory to filter specific service types. After you build your custom query, update the custom query in the example script under # custom filtering basic example # . Save the file. In Visual Studio Code: Right-click the Function window and click Deploy to Function App . Select the function app that you created in the previous steps. 2.7. Configuring function roles in Microsoft Azure Configure dedicated credentials to grant your function blob access to Microsoft Azure cost data. These credentials enable your function to access, filter, and transfer the data from the original storage container to the filtered storage container. In your Microsoft Azure account : Enter functions in the search bar and select your function. In the Settings menu, click Identity .
Complete the following set of steps twice , one time for each of the two storage accounts that you created in the section Creating a Microsoft Azure resource group and storage account: Click Azure role assignments . Click Add role assignment . In the Scope field, select Storage . In the Resource field, select one of your two storage accounts. Our examples used filtereddata and billingexportdata . In Role , select Storage Blob Data Contributor . Click Save . Click Add role assignment again. In the Scope field, select Storage . In the Resource field, select the same storage account again . This time, in Role , select Storage Queue Data Contributor . Click Save . Repeat this entire process for the other storage account that you created. After completing these steps, you have successfully set up your Azure integration. | [
"az account show --query \"{subscription_id: id }\"",
"{ \"subscription_id\": 00000000-0000-0000-000000000000 }",
"{ \"client_id\": \"00000000-0000-0000-000000000000\", \"secret\": \"00000000-0000-0000-000000000000\", \"tenant\": \"00000000-0000-0000-000000000000\" }",
"azure-functions pandas requests azure-identity azure-storage-blob",
"'additionalinfo', 'billingaccountid', 'billingaccountname', 'billingcurrencycode', 'billingperiodenddate', 'billingperiodstartdate', 'chargetype', 'consumedservice', 'costinbillingcurrency', 'date', 'effectiveprice', 'metercategory', 'meterid', 'metername', 'meterregion', 'metersubcategory', 'offerid', 'productname', 'publishername', 'publishertype', 'quantity', 'reservationid', 'reservationname', 'resourcegroup', 'resourceid', 'resourcelocation', 'resourcename', 'servicefamily', 'serviceinfo1', 'serviceinfo2', 'subscriptionid', 'tags', 'unitofmeasure', 'unitprice'",
"column_translation = {\"billingcurrency\": \"billingcurrencycode\", \"currency\": \"billingcurrencycode\", \"instanceid\": \"resourceid\", \"instancename\": \"resourceid\", \"pretaxcost\": \"costinbillingcurrency\", \"product\": \"productname\", \"resourcegroupname\": \"resourcegroup\", \"subscriptionguid\": \"subscriptionid\", \"servicename\": \"metercategory\", \"usage_quantity\": \"quantity\"}"
] | https://docs.redhat.com/en/documentation/hybrid_committed_spend/1-latest/html/integrating_microsoft_azure_data_into_hybrid_committed_spend/assembly-adding-filtered-azure-int-hcs |
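The stacked-filter logic described in section 2.6 can be condensed into one function. The following is a minimal sketch, not the full script: it assumes the daily export has already been downloaded to a local file named cost-export.csv (a hypothetical name), and it reuses a subset of the column_translation mapping and the two example df.loc filters shown above.

import pandas as pd

# Subset of the column_translation mapping from the example script; it
# normalizes report-specific column names to the required names.
column_translation = {
    "currency": "billingcurrencycode",
    "instanceid": "resourceid",
    "pretaxcost": "costinbillingcurrency",
    "subscriptionguid": "subscriptionid",
}

def custom_filtering(df: pd.DataFrame) -> pd.DataFrame:
    df.columns = df.columns.str.lower()          # lower-case the headers
    df = df.rename(columns=column_translation)   # map report-specific names
    # Keep only Marketplace rows published by Red Hat: the two example
    # filters from the text, stacked with & (AND).
    return df.loc[
        (df["publishertype"] == "Marketplace")
        & df["publishername"].astype(str).str.contains("Red Hat")
    ]

filtered_data = custom_filtering(pd.read_csv("cost-export.csv"))
filtered_data.to_csv("filtered-cost-export.csv", index=False)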
Chapter 5. Managing Kickstart and Configuration Files Using authconfig | Chapter 5. Managing Kickstart and Configuration Files Using authconfig The --update option updates all of the configuration files with the configuration changes. There are a couple of alternative options with slightly different behavior: --kickstart writes the updated configuration to a kickstart file. --test displays the full configuration with changes, but does not edit any configuration files. Additionally, authconfig can be used to back up and restore configurations. All archives are saved to a unique subdirectory in the /var/lib/authconfig/ directory. For example, the --savebackup option gives the backup directory as 2011-07-01 : This backs up all of the authentication configuration files beneath the /var/lib/authconfig/backup-2011-07-01 directory. Any of the saved backups can be used to restore the configuration using the --restorebackup option, giving the name of the manually saved configuration: Additionally, authconfig automatically makes a backup of the configuration before it applies any changes (with the --update option). The configuration can be restored from the most recent automatic backup, without having to specify the exact backup, using the --restorelastbackup option. | [
"authconfig --savebackup=2011-07-01",
"authconfig --restorebackup=2011-07-01"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/authconfig-kickstart-cmd |
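The save, update, and restore cycle described above can also be scripted. The following is a minimal sketch, assuming the authconfig binary is on the PATH and the script runs as root; the --enablemkhomedir flag is only an arbitrary example of a configuration change.

import subprocess
from datetime import date

backup_name = date.today().isoformat()  # a name such as 2011-07-01

# Save the current configuration beneath /var/lib/authconfig/backup-<name>/.
subprocess.run(["authconfig", f"--savebackup={backup_name}"], check=True)

try:
    # Apply a change; --update also makes an automatic backup that
    # --restorelastbackup can later restore.
    subprocess.run(["authconfig", "--enablemkhomedir", "--update"], check=True)
except subprocess.CalledProcessError:
    # Roll back to the manually saved configuration on failure.
    subprocess.run(["authconfig", f"--restorebackup={backup_name}"], check=True)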
5.8. Mounting File Systems By default, when a file system that supports extended attributes is mounted, the security context for each file is obtained from the security.selinux extended attribute of the file. Files in file systems that do not support extended attributes are assigned a single, default security context from the policy configuration, based on file system type. Use the mount -o context command to override existing extended attributes, or to specify a different, default context for file systems that do not support extended attributes. This is useful if you do not trust a file system to supply the correct attributes, for example, removable media used in multiple systems. The mount -o context command can also be used to support labeling for file systems that do not support extended attributes, such as File Allocation Table (FAT) or NFS volumes. The context specified with the context option is not written to disk: the original contexts are preserved, and are seen when mounting without a context option (if the file system had extended attributes in the first place). For further information about file system labeling, refer to James Morris's "Filesystem Labeling in SELinux" article: http://www.linuxjournal.com/article/7426 . 5.8.1. Context Mounts To mount a file system with the specified context, overriding existing contexts if they exist, or to specify a different, default context for a file system that does not support extended attributes, as the Linux root user, use the mount -o context= SELinux_user:role:type:level command when mounting the desired file system. Context changes are not written to disk. By default, NFS mounts on the client side are labeled with a default context defined by policy for NFS volumes. In common policies, this default context uses the nfs_t type. Without additional mount options, this may prevent sharing NFS volumes via other services, such as the Apache HTTP Server. The following example mounts an NFS volume so that it can be shared via the Apache HTTP Server: Newly-created files and directories on this file system appear to have the SELinux context specified with -o context . However, since these changes are not written to disk, the context specified with this option does not persist between mounts. Therefore, this option must be used with the same context specified during every mount to retain the desired context. For information about making context mounts persistent, refer to Section 5.8.5, "Making Context Mounts Persistent" . Type Enforcement is the main permission control used in SELinux targeted policy. For the most part, SELinux users and roles can be ignored, so, when overriding the SELinux context with -o context , use the SELinux system_u user and object_r role, and concentrate on the type. If you are not using the MLS policy or multi-category security, use the s0 level. Note When a file system is mounted with a context option, context changes (by users and processes) are prohibited. For example, running the chcon command on a file system mounted with a context option results in an Operation not supported error. | [
"~]# mount server:/export /local/mount/point -o \\ context=\"system_u:object_r:httpd_sys_content_t:s0\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-working_with_selinux-mounting_file_systems |
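The NFS example above can also be run from a script. The following is a minimal sketch, assuming root privileges and that the server:/export volume and the /local/mount/point mount point from the example exist.

import subprocess

# Context built per the guidance above: the system_u user, the object_r
# role, the type of interest, and the s0 level when MLS/MCS is not in use.
context = "system_u:object_r:httpd_sys_content_t:s0"

subprocess.run(
    ["mount", "server:/export", "/local/mount/point",
     "-o", f"context={context}"],
    check=True,
)

# The context option is not written to disk; list the in-memory labels
# to confirm the mount behaved as expected.
subprocess.run(["ls", "-Z", "/local/mount/point"], check=True)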
Understanding OpenShift GitOps | Understanding OpenShift GitOps Red Hat OpenShift GitOps 1.11 Introduction to OpenShift GitOps Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/understanding_openshift_gitops/index |
Chapter 1. Setting up an Argo CD instance By default, Red Hat OpenShift GitOps installs an instance of Argo CD in the openshift-gitops namespace with additional permissions for managing certain cluster-scoped resources. To manage cluster configurations or deploy applications, you can install and deploy a new Argo CD instance. By default, any new instance has permissions to manage resources only in the namespace where it is deployed. 1.1. Installing an Argo CD instance To manage cluster configurations or deploy applications, you can install and deploy a new Argo CD instance. Prerequisites You have access to the cluster with cluster-admin privileges. You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. Procedure Log in to the OpenShift Container Platform web console. In the Administrator perspective of the web console, click Operators Installed Operators . Create or select the project where you want to install the Argo CD instance from the Project drop-down menu. Select OpenShift GitOps Operator from the installed operators list and click the Argo CD tab. Click Create ArgoCD to configure the parameters: Enter the Name of the instance. By default, the Name is set to example . Create an external OS Route to access the Argo CD server. Click Server Route and check Enabled . Optional: You can also configure YAML for creating an external OS Route by adding the following configuration: Example Argo CD with external OS route created apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example namespace: openshift-gitops spec: server: route: enabled: true Go to Networking Routes <instance_name>-server in the project where the Argo CD instance is installed. On the Details tab, click the Argo CD web UI link under Route details Location . The Argo CD web UI opens in a separate browser window. Optional: To log in with your OpenShift Container Platform credentials, ensure you are a user of the cluster-admins group and then select the LOG IN VIA OPENSHIFT option in the Argo CD user interface. Note To be a user of the cluster-admins group, use the oc adm groups new cluster-admins <user> command, where <user> is the user that you want to add to the cluster-admins group. Obtain the password for the Argo CD instance: Use the navigation panel to go to the Workloads Secrets page. Use the Project drop-down list and select the namespace where the Argo CD instance is created. Select the <argo_CD_instance_name>-cluster instance to display the password. On the Details tab, copy the password under Data admin.password . Use admin as the Username and the copied password as the Password to log in to the Argo CD UI in the new window. 1.2. Enabling replicas for Argo CD server and repo server Argo CD-server and Argo CD-repo-server workloads are stateless. To better distribute your workloads among pods, you can increase the number of Argo CD-server and Argo CD-repo-server replicas. However, if a horizontal autoscaler is enabled on the Argo CD-server, it overrides the number of replicas you set.
Procedure Set the replicas parameters for the repo and server spec to the number of replicas you want to run: Example Argo CD custom resource apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: repo spec: repo: replicas: <number_of_replicas> server: replicas: <number_of_replicas> route: enabled: true path: / tls: insecureEdgeTerminationPolicy: Redirect termination: passthrough wildcardPolicy: None 1.3. Deploying resources to a different namespace To allow Argo CD to manage resources in other namespaces apart from where it is installed, configure the target namespace with an argocd.argoproj.io/managed-by label. Procedure Configure the target namespace by running the following command: USD oc label namespace <target_namespace> \ argocd.argoproj.io/managed-by=<argocd_namespace> where: <target_namespace> Specifies the name of the namespace you want Argo CD to manage. <argocd_namespace> Specifies the name of the namespace where Argo CD is installed. 1.4. Customizing the Argo CD console link In a multi-tenant cluster, users might have to deal with multiple instances of Argo CD. For example, after installing an Argo CD instance in your namespace, you might find a different Argo CD instance attached to the Argo CD console link, instead of your own Argo CD instance, in the Console Application Launcher. You can customize the Argo CD console link by setting the DISABLE_DEFAULT_ARGOCD_CONSOLELINK environment variable: When you set DISABLE_DEFAULT_ARGOCD_CONSOLELINK to true , the Argo CD console link is permanently deleted. When you set DISABLE_DEFAULT_ARGOCD_CONSOLELINK to false or use the default value, the Argo CD console link is temporarily deleted and visible again when the Argo CD route is reconciled. Prerequisites You have logged in to the OpenShift Container Platform cluster as an administrator. You have installed the Red Hat OpenShift GitOps Operator. Procedure In the Administrator perspective, navigate to Administration CustomResourceDefinitions . Find the Subscription CRD and click to open it. Select the Instances tab and click the openshift-gitops-operator subscription. Select the YAML tab and make your customization: To enable or disable the Argo CD console link, edit the value of DISABLE_DEFAULT_ARGOCD_CONSOLELINK as needed: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator spec: config: env: - name: DISABLE_DEFAULT_ARGOCD_CONSOLELINK value: 'true' | [
"apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example namespace: openshift-gitops spec: server: route: enabled: true",
"apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: repo spec: repo: replicas: <number_of_replicas> server: replicas: <number_of_replicas> route: enabled: true path: / tls: insecureEdgeTerminationPolicy: Redirect termination: passthrough wildcardPolicy: None",
"oc label namespace <target_namespace> argocd.argoproj.io/managed-by=<argocd_namespace>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator spec: config: env: - name: DISABLE_DEFAULT_ARGOCD_CONSOLELINK value: 'true'"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/argo_cd_instance/setting-up-argocd-instance |
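The oc label command in section 1.3 can also be issued programmatically. The following is a minimal sketch using the kubernetes Python client, which is an assumption on our part (the document itself only shows the CLI); the namespace names are placeholders.

from kubernetes import client, config

config.load_kube_config()  # assumes a valid kubeconfig is available
core_v1 = client.CoreV1Api()

target_namespace = "my-app"            # placeholder: namespace Argo CD manages
argocd_namespace = "openshift-gitops"  # placeholder: where Argo CD is installed

# Equivalent to:
#   oc label namespace my-app argocd.argoproj.io/managed-by=openshift-gitops
patch = {"metadata": {"labels": {"argocd.argoproj.io/managed-by": argocd_namespace}}}
core_v1.patch_namespace(target_namespace, patch)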
Preface | Preface To scale the storage capacity of OpenShift Data Foundation in internal mode, you can do either of the following: Scale up storage nodes - Add storage capacity to the existing Red Hat OpenShift Data Foundation worker nodes Scale out storage nodes - Add new worker nodes containing storage capacity For scaling your storage in external mode, see Red Hat Ceph Storage documentation . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/scaling_storage/preface-scaling-storage |
Preface | Preface After installing Red Hat Ansible Automation Platform, your system might need extra configuration to ensure your deployment runs smoothly. This guide provides procedures for configuration tasks that you can perform after installing Red Hat Ansible Automation Platform. Disclaimer : Links contained in this information to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/operating_ansible_automation_platform/pr01 |
Chapter 4. Brokers | Chapter 4. Brokers 4.1. Brokers Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink. 4.2. Broker types Cluster administrators can set the default broker implementation for a cluster. When you create a broker, the default broker implementation is used, unless you provide set configurations in the Broker object. 4.2.1. Default broker implementation for development purposes Knative provides a default, channel-based broker implementation. This channel-based broker can be used for development and testing purposes, but does not provide adequate event delivery guarantees for production environments. The default broker is backed by the InMemoryChannel channel implementation by default. If you want to use Apache Kafka to reduce network hops, use the Knative broker implementation for Apache Kafka. Do not configure the channel-based broker to be backed by the KafkaChannel channel implementation. 4.2.2. Production-ready Knative broker implementation for Apache Kafka For production-ready Knative Eventing deployments, Red Hat recommends using the Knative broker implementation for Apache Kafka. The broker is an Apache Kafka native implementation of the Knative broker, which sends CloudEvents directly to the Kafka instance. The Knative broker has a native integration with Kafka for storing and routing events. This allows better integration with Kafka for the broker and trigger model over other broker types, and reduces network hops. Other benefits of the Knative broker implementation include: At-least-once delivery guarantees Ordered delivery of events, based on the CloudEvents partitioning extension Control plane high availability A horizontally scalable data plane The Knative broker implementation for Apache Kafka stores incoming CloudEvents as Kafka records, using the binary content mode. This means that all CloudEvent attributes and extensions are mapped as headers on the Kafka record, while the data spec of the CloudEvent corresponds to the value of the Kafka record. 4.3. Creating brokers Knative provides a default, channel-based broker implementation. This channel-based broker can be used for development and testing purposes, but does not provide adequate event delivery guarantees for production environments. If a cluster administrator has configured your OpenShift Serverless deployment to use Apache Kafka as the default broker type, creating a broker by using the default settings creates a Knative broker for Apache Kafka. If your OpenShift Serverless deployment is not configured to use the Knative broker for Apache Kafka as the default broker type, the channel-based broker is created when you use the default settings in the following procedures. 4.3.1. Creating a broker by using the Knative CLI Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Using the Knative ( kn ) CLI to create brokers provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn broker create command to create a broker. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. 
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a broker: USD kn broker create <broker_name> Verification Use the kn command to list all existing brokers: USD kn broker list Example output NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists: 4.3.2. Creating a broker by annotating a trigger Brokers can be used in combination with triggers to deliver events from an event source to an event sink. You can create a broker by adding the eventing.knative.dev/injection: enabled annotation to a Trigger object. Important If you create a broker by using the eventing.knative.dev/injection: enabled annotation, you cannot delete this broker without cluster administrator permissions. If you delete the broker without having a cluster administrator remove this annotation first, the broker is created again after deletion. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Trigger object as a YAML file that has the eventing.knative.dev/injection: enabled annotation: apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: annotations: eventing.knative.dev/injection: enabled name: <trigger_name> spec: broker: default subscriber: 1 ref: apiVersion: serving.knative.dev/v1 kind: Service name: <service_name> 1 Specify details about the event sink, or subscriber , that the trigger sends events to. Apply the Trigger YAML file: USD oc apply -f <filename> Verification You can verify that the broker has been created successfully by using the oc CLI, or by observing it in the Topology view in the web console. Enter the following oc command to get the broker: USD oc -n <namespace> get broker default Example output NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists: 4.3.3. Creating a broker by labeling a namespace Brokers can be used in combination with triggers to deliver events from an event source to an event sink. You can create the default broker automatically by labelling a namespace that you own or have write permissions for. Note Brokers created using this method are not removed if you remove the label. You must manually delete them. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have cluster or dedicated administrator permissions if you are using Red Hat OpenShift Service on AWS or OpenShift Dedicated. 
Procedure Label a namespace with eventing.knative.dev/injection=enabled : USD oc label namespace <namespace> eventing.knative.dev/injection=enabled Verification You can verify that the broker has been created successfully by using the oc CLI, or by observing it in the Topology view in the web console. Use the oc command to get the broker: USD oc -n <namespace> get broker <broker_name> Example command USD oc -n default get broker default Example output NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists: 4.3.4. Deleting a broker that was created by injection If you create a broker by injection and later want to delete it, you must delete it manually. Brokers created by using a namespace label or trigger annotation are not deleted permanently if you remove the label or annotation. Prerequisites Install the OpenShift CLI ( oc ). Procedure Remove the eventing.knative.dev/injection=enabled label from the namespace: USD oc label namespace <namespace> eventing.knative.dev/injection- Removing the annotation prevents Knative from recreating the broker after you delete it. Delete the broker from the selected namespace: USD oc -n <namespace> delete broker <broker_name> Verification Use the oc command to get the broker: USD oc -n <namespace> get broker <broker_name> Example command USD oc -n default get broker default Example output No resources found. Error from server (NotFound): brokers.eventing.knative.dev "default" not found 4.3.5. Creating a broker by using the web console After Knative Eventing is installed on your cluster, you can create a broker by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a broker. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer perspective, navigate to +Add Broker . The Broker page is displayed. Optional. Update the Name of the broker. If you do not update the name, the generated broker is named default . Click Create . Verification You can verify that the broker was created by viewing broker components in the Topology page. In the Developer perspective, navigate to Topology . View the mt-broker-ingress , mt-broker-filter , and mt-broker-controller components. 4.3.6. Creating a broker by using the Administrator perspective Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. 
You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Create list, select Broker . You will be directed to the Create Broker page. Optional: Modify the YAML configuration for the broker. Click Create . 4.3.7. Next steps Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. 4.3.8. Additional resources Configuring the default broker class Triggers Connect a broker to a sink using the Developer perspective 4.4. Configuring the default broker backing channel If you are using a channel-based broker, you can set the default backing channel type for the broker to either InMemoryChannel or KafkaChannel . Prerequisites You have administrator permissions on OpenShift Container Platform. You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster. You have installed the OpenShift ( oc ) CLI. If you want to use Apache Kafka channels as the default backing channel type, you must also install the KnativeKafka CR on your cluster. Procedure Modify the KnativeEventing custom resource (CR) to add configuration details for the config-br-default-channel config map: apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 config-br-default-channel: channel-template-spec: | apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel 2 spec: numPartitions: 6 3 replicationFactor: 3 4 1 In spec.config , you can specify the config maps that you want to add modified configurations for. 2 The default backing channel type configuration. In this example, the default channel implementation for the cluster is KafkaChannel . 3 The number of partitions for the Kafka channel that backs the broker. 4 The replication factor for the Kafka channel that backs the broker. Apply the updated KnativeEventing CR: USD oc apply -f <filename> 4.5. Configuring the default broker class You can use the config-br-defaults config map to specify default broker class settings for Knative Eventing. You can specify the default broker class for the entire cluster or for one or more namespaces. Currently the MTChannelBasedBroker and Kafka broker types are supported. Prerequisites You have administrator permissions on OpenShift Container Platform. You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster. If you want to use the Knative broker for Apache Kafka as the default broker implementation, you must also install the KnativeKafka CR on your cluster. Procedure Modify the KnativeEventing custom resource to add configuration details for the config-br-defaults config map: apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: defaultBrokerClass: Kafka 1 config: 2 config-br-defaults: 3 default-br-config: | clusterDefault: 4 brokerClass: Kafka apiVersion: v1 kind: ConfigMap name: kafka-broker-config 5 namespace: knative-eventing 6 namespaceDefaults: 7 my-namespace: brokerClass: MTChannelBasedBroker apiVersion: v1 kind: ConfigMap name: config-br-default-channel 8 namespace: knative-eventing 9 ... 1 The default broker class for Knative Eventing.
2 In spec.config , you can specify the config maps that you want to add modified configurations for. 3 The config-br-defaults config map specifies the default settings for any broker that does not specify spec.config settings or a broker class. 4 The cluster-wide default broker class configuration. In this example, the default broker class implementation for the cluster is Kafka . 5 The kafka-broker-config config map specifies default settings for the Kafka broker. See "Configuring Knative broker for Apache Kafka settings" in the "Additional resources" section. 6 The namespace where the kafka-broker-config config map exists. 7 The namespace-scoped default broker class configuration. In this example, the default broker class implementation for the my-namespace namespace is MTChannelBasedBroker . You can specify default broker class implementations for multiple namespaces. 8 The config-br-default-channel config map specifies the default backing channel for the broker. See "Configuring the default broker backing channel" in the "Additional resources" section. 9 The namespace where the config-br-default-channel config map exists. Important Configuring a namespace-specific default overrides any cluster-wide settings. 4.6. Knative broker implementation for Apache Kafka For production-ready Knative Eventing deployments, Red Hat recommends using the Knative broker implementation for Apache Kafka. The broker is an Apache Kafka native implementation of the Knative broker, which sends CloudEvents directly to the Kafka instance. The Knative broker has a native integration with Kafka for storing and routing events. This allows better integration with Kafka for the broker and trigger model over other broker types, and reduces network hops. Other benefits of the Knative broker implementation include: At-least-once delivery guarantees Ordered delivery of events, based on the CloudEvents partitioning extension Control plane high availability A horizontally scalable data plane The Knative broker implementation for Apache Kafka stores incoming CloudEvents as Kafka records, using the binary content mode. This means that all CloudEvent attributes and extensions are mapped as headers on the Kafka record, while the data spec of the CloudEvent corresponds to the value of the Kafka record. 4.6.1. Creating an Apache Kafka broker when it is not configured as the default broker type If your OpenShift Serverless deployment is not configured to use Kafka broker as the default broker type, you can use one of the following procedures to create a Kafka-based broker. 4.6.1.1. Creating an Apache Kafka broker by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka broker by using YAML, you must create a YAML file that defines a Broker object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). 
Procedure Create a Kafka-based broker as a YAML file: apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 name: example-kafka-broker spec: config: apiVersion: v1 kind: ConfigMap name: kafka-broker-config 2 namespace: knative-eventing 1 The broker class. If not specified, brokers use the default class as configured by cluster administrators. To use the Kafka broker, this value must be Kafka . 2 The default config map for Knative brokers for Apache Kafka. This config map is created when the Kafka broker functionality is enabled on the cluster by a cluster administrator. Apply the Kafka-based broker YAML file: USD oc apply -f <filename> 4.6.1.2. Creating an Apache Kafka broker that uses an externally managed Kafka topic If you want to use a Kafka broker without allowing it to create its own internal topic, you can use an externally managed Kafka topic instead. To do this, you must create a Kafka Broker object that uses the kafka.eventing.knative.dev/external.topic annotation. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster. You have access to a Kafka instance such as Red Hat AMQ Streams , and have created a Kafka topic. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). Procedure Create a Kafka-based broker as a YAML file: apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 kafka.eventing.knative.dev/external.topic: <topic_name> 2 ... 1 The broker class. If not specified, brokers use the default class as configured by cluster administrators. To use the Kafka broker, this value must be Kafka . 2 The name of the Kafka topic that you want to use. Apply the Kafka-based broker YAML file: USD oc apply -f <filename> 4.6.1.3. Knative Broker implementation for Apache Kafka with isolated data plane Important The Knative Broker implementation for Apache Kafka with isolated data plane is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Knative Broker implementation for Apache Kafka has 2 planes: Control plane Consists of controllers that talk to the Kubernetes API, watch for custom objects, and manage the data plane. Data plane The collection of components that listen for incoming events, talk to Apache Kafka, and send events to the event sinks. The Knative Broker implementation for Apache Kafka data plane is where events flow. The implementation consists of kafka-broker-receiver and kafka-broker-dispatcher deployments. When you configure a Broker class of Kafka , the Knative Broker implementation for Apache Kafka uses a shared data plane. This means that the kafka-broker-receiver and kafka-broker-dispatcher deployments in the knative-eventing namespace are used for all Apache Kafka Brokers in the cluster. 
However, when you configure a Broker class of KafkaNamespaced , the Apache Kafka broker controller creates a new data plane for each namespace where a broker exists. This data plane is used by all KafkaNamespaced brokers in that namespace. This provides isolation between the data planes, so that the kafka-broker-receiver and kafka-broker-dispatcher deployments in the user namespace are only used for the broker in that namespace. Important As a consequence of having separate data planes, this security feature creates more deployments and uses more resources. Unless you have such isolation requirements, use a regular Broker with a class of Kafka . 4.6.1.4. Creating a Knative broker for Apache Kafka that uses an isolated data plane Important The Knative Broker implementation for Apache Kafka with isolated data plane is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To create a KafkaNamespaced broker, you must set the eventing.knative.dev/broker.class annotation to KafkaNamespaced . Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your OpenShift Container Platform cluster. You have access to an Apache Kafka instance, such as Red Hat AMQ Streams , and have created a Kafka topic. You have created a project, or have access to a project, with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). Procedure Create an Apache Kafka-based broker by using a YAML file: apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: KafkaNamespaced 1 name: default namespace: my-namespace 2 spec: config: apiVersion: v1 kind: ConfigMap name: my-config 3 ... 1 To use the Apache Kafka broker with isolated data planes, the broker class value must be KafkaNamespaced . 2 3 The referenced ConfigMap object my-config must be in the same namespace as the Broker object, in this case my-namespace . Apply the Apache Kafka-based broker YAML file: USD oc apply -f <filename> Important The ConfigMap object in spec.config must be in the same namespace as the Broker object: apiVersion: v1 kind: ConfigMap metadata: name: my-config namespace: my-namespace data: ... After the creation of the first Broker object with the KafkaNamespaced class, the kafka-broker-receiver and kafka-broker-dispatcher deployments are created in the namespace. Subsequently, all brokers with the KafkaNamespaced class in the same namespace will use the same data plane. If no brokers with the KafkaNamespaced class exist in the namespace, the data plane in the namespace is deleted. 4.6.2. Configuring Apache Kafka broker settings You can configure the replication factor, bootstrap servers, and the number of topic partitions for a Kafka broker, by creating a config map and referencing this config map in the Kafka Broker object. Prerequisites You have cluster or dedicated administrator permissions on OpenShift Container Platform. 
The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource (CR) are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project that has the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). Procedure Modify the kafka-broker-config config map, or create your own config map that contains the following configuration: apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <namespace> 2 data: default.topic.partitions: <integer> 3 default.topic.replication.factor: <integer> 4 bootstrap.servers: <list_of_servers> 5 1 The config map name. 2 The namespace where the config map exists. 3 The number of topic partitions for the Kafka broker. This controls how quickly events can be sent to the broker. A higher number of partitions requires greater compute resources. 4 The replication factor of topic messages. This protects against data loss. A higher replication factor requires greater compute resources and more storage. 5 A comma-separated list of bootstrap servers. This can be inside or outside of the OpenShift Container Platform cluster, and is a list of Kafka clusters that the broker receives events from and sends events to. Important The default.topic.replication.factor value must be less than or equal to the number of Kafka broker instances in your cluster. For example, if you only have one Kafka broker, the default.topic.replication.factor value should not be more than "1" . Example Kafka broker config map apiVersion: v1 kind: ConfigMap metadata: name: kafka-broker-config namespace: knative-eventing data: default.topic.partitions: "10" default.topic.replication.factor: "3" bootstrap.servers: "my-cluster-kafka-bootstrap.kafka:9092" Apply the config map: USD oc apply -f <config_map_filename> Specify the config map for the Kafka Broker object: Example Broker object apiVersion: eventing.knative.dev/v1 kind: Broker metadata: name: <broker_name> 1 namespace: <namespace> 2 annotations: eventing.knative.dev/broker.class: Kafka 3 spec: config: apiVersion: v1 kind: ConfigMap name: <config_map_name> 4 namespace: <namespace> 5 ... 1 The broker name. 2 The namespace where the broker exists. 3 The broker class annotation. In this example, the broker is a Kafka broker that uses the class value Kafka . 4 The config map name. 5 The namespace where the config map exists. Apply the broker: USD oc apply -f <broker_filename> 4.6.3. Security configuration for the Knative broker implementation for Apache Kafka Kafka clusters are generally secured by using the TLS or SASL authentication methods. You can configure a Kafka broker or channel to work against a protected Red Hat AMQ Streams cluster by using TLS or SASL. Note Red Hat recommends that you enable both SASL and TLS together. 4.6.3.1. Configuring TLS authentication for Apache Kafka brokers Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for the Knative broker implementation for Apache Kafka. Prerequisites You have cluster or dedicated administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have a Kafka cluster CA certificate stored as a .pem file. You have a Kafka cluster client certificate and a key stored as .pem files. Install the OpenShift CLI ( oc ). Procedure Create the certificate files as a secret in the knative-eventing namespace: USD oc create secret -n knative-eventing generic <secret_name> \ --from-literal=protocol=SSL \ --from-file=ca.crt=caroot.pem \ --from-file=user.crt=certificate.pem \ --from-file=user.key=key.pem Important Use the key names ca.crt , user.crt , and user.key . Do not change them. Edit the KnativeKafka CR and add a reference to your secret in the broker spec: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name> ... 4.6.3.2. Configuring SASL authentication for Apache Kafka brokers Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed. Prerequisites You have cluster or dedicated administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have a username and password for a Kafka cluster. You have chosen the SASL mechanism to use, for example, PLAIN , SCRAM-SHA-256 , or SCRAM-SHA-512 . If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster. Install the OpenShift CLI ( oc ). Procedure Create the certificate files as a secret in the knative-eventing namespace: USD oc create secret -n knative-eventing generic <secret_name> \ --from-literal=protocol=SASL_SSL \ --from-literal=sasl.mechanism=<sasl_mechanism> \ --from-file=ca.crt=caroot.pem \ --from-literal=password="SecretPassword" \ --from-literal=user="my-sasl-user" Use the key names ca.crt , password , and sasl.mechanism . Do not change them. If you want to use SASL with public CA certificates, you must use the tls.enabled=true flag, rather than the ca.crt argument, when creating the secret. For example: USD oc create secret -n <namespace> generic <kafka_auth_secret> \ --from-literal=tls.enabled=true \ --from-literal=password="SecretPassword" \ --from-literal=saslType="SCRAM-SHA-512" \ --from-literal=user="my-sasl-user" Edit the KnativeKafka CR and add a reference to your secret in the broker spec: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name> ... 4.6.4. Additional resources Red Hat AMQ Streams documentation TLS and SASL on Kafka 4.7. Managing brokers After you have created a broker, you can manage your broker by using Knative ( kn ) CLI commands, or by modifying it in the OpenShift Container Platform web console. 4.7.1. Managing brokers using the CLI The Knative ( kn ) CLI provides commands that can be used to describe and list existing brokers. 4.7.1.1. 
Listing existing brokers by using the Knative CLI Using the Knative ( kn ) CLI to list brokers provides a streamlined and intuitive user interface. You can use the kn broker list command to list existing brokers in your cluster by using the Knative CLI. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. Procedure List all existing brokers: USD kn broker list Example output NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True 4.7.1.2. Describing an existing broker by using the Knative CLI Using the Knative ( kn ) CLI to describe brokers provides a streamlined and intuitive user interface. You can use the kn broker describe command to print information about existing brokers in your cluster by using the Knative CLI. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. Procedure Describe an existing broker: USD kn broker describe <broker_name> Example command using default broker USD kn broker describe default Example output Name: default Namespace: default Annotations: eventing.knative.dev/broker.class=MTChannelBasedBroker, eventing.knative.dev/creato ... Age: 22s Address: URL: http://broker-ingress.knative-eventing.svc.cluster.local/default/default Conditions: OK TYPE AGE REASON ++ Ready 22s ++ Addressable 22s ++ FilterReady 22s ++ IngressReady 22s ++ TriggerChannelReady 22s 4.7.2. Connect a broker to a sink using the Developer perspective You can connect a broker to an event sink in the OpenShift Container Platform Developer perspective by creating a trigger. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Developer perspective. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created a sink, such as a Knative service or channel. You have created a broker. Procedure In the Topology view, point to the broker that you have created. An arrow appears. Drag the arrow to the sink that you want to connect to the broker. This action opens the Add Trigger dialog box. In the Add Trigger dialog box, enter a name for the trigger and click Add . Verification You can verify that the broker is connected to the sink by viewing the Topology page. In the Developer perspective, navigate to Topology . Click the line that connects the broker to the sink to see details about the trigger in the Details panel. | [
"kn broker create <broker_name>",
"kn broker list",
"NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: annotations: eventing.knative.dev/injection: enabled name: <trigger_name> spec: broker: default subscriber: 1 ref: apiVersion: serving.knative.dev/v1 kind: Service name: <service_name>",
"oc apply -f <filename>",
"oc -n <namespace> get broker default",
"NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s",
"oc label namespace <namespace> eventing.knative.dev/injection=enabled",
"oc -n <namespace> get broker <broker_name>",
"oc -n default get broker default",
"NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s",
"oc label namespace <namespace> eventing.knative.dev/injection-",
"oc -n <namespace> delete broker <broker_name>",
"oc -n <namespace> get broker <broker_name>",
"oc -n default get broker default",
"No resources found. Error from server (NotFound): brokers.eventing.knative.dev \"default\" not found",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 config-br-default-channel: channel-template-spec: | apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel 2 spec: numPartitions: 6 3 replicationFactor: 3 4",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: defaultBrokerClass: Kafka 1 config: 2 config-br-defaults: 3 default-br-config: | clusterDefault: 4 brokerClass: Kafka apiVersion: v1 kind: ConfigMap name: kafka-broker-config 5 namespace: knative-eventing 6 namespaceDefaults: 7 my-namespace: brokerClass: MTChannelBasedBroker apiVersion: v1 kind: ConfigMap name: config-br-default-channel 8 namespace: knative-eventing 9",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 name: example-kafka-broker spec: config: apiVersion: v1 kind: ConfigMap name: kafka-broker-config 2 namespace: knative-eventing",
"oc apply -f <filename>",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 kafka.eventing.knative.dev/external.topic: <topic_name> 2",
"oc apply -f <filename>",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: KafkaNamespaced 1 name: default namespace: my-namespace 2 spec: config: apiVersion: v1 kind: ConfigMap name: my-config 3",
"oc apply -f <filename>",
"apiVersion: v1 kind: ConfigMap metadata: name: my-config namespace: my-namespace data:",
"apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <namespace> 2 data: default.topic.partitions: <integer> 3 default.topic.replication.factor: <integer> 4 bootstrap.servers: <list_of_servers> 5",
"apiVersion: v1 kind: ConfigMap metadata: name: kafka-broker-config namespace: knative-eventing data: default.topic.partitions: \"10\" default.topic.replication.factor: \"3\" bootstrap.servers: \"my-cluster-kafka-bootstrap.kafka:9092\"",
"oc apply -f <config_map_filename>",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: name: <broker_name> 1 namespace: <namespace> 2 annotations: eventing.knative.dev/broker.class: Kafka 3 spec: config: apiVersion: v1 kind: ConfigMap name: <config_map_name> 4 namespace: <namespace> 5",
"oc apply -f <broker_filename>",
"oc create secret -n knative-eventing generic <secret_name> --from-literal=protocol=SSL --from-file=ca.crt=caroot.pem --from-file=user.crt=certificate.pem --from-file=user.key=key.pem",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name>",
"oc create secret -n knative-eventing generic <secret_name> --from-literal=protocol=SASL_SSL --from-literal=sasl.mechanism=<sasl_mechanism> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=user=\"my-sasl-user\"",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-literal=tls.enabled=true --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name>",
"kn broker list",
"NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True",
"kn broker describe <broker_name>",
"kn broker describe default",
"Name: default Namespace: default Annotations: eventing.knative.dev/broker.class=MTChannelBasedBroker, eventing.knative.dev/creato Age: 22s Address: URL: http://broker-ingress.knative-eventing.svc.cluster.local/default/default Conditions: OK TYPE AGE REASON ++ Ready 22s ++ Addressable 22s ++ FilterReady 22s ++ IngressReady 22s ++ TriggerChannelReady 22s"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/eventing/brokers |
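A hedged CLI alternative (an assumption, not part of the procedures above) to connecting a broker to a sink through the Developer perspective is to create the trigger with the Knative CLI; the trigger name and the sink service are illustrative:
kn trigger create my-trigger --broker default --sink ksvc:event-display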
11.2. Time/Date Based Expressions | 11.2. Time/Date Based Expressions Date expressions are used to control a resource or cluster option based on the current date/time. They can contain an optional date specification. Table 11.4. Properties of a Date Expression Field Description start A date/time conforming to the ISO8601 specification. end A date/time conforming to the ISO8601 specification. operation Compares the current date/time with the start or the end date or both the start and end date, depending on the context. Allowed values: * gt - True if the current date/time is after start * lt - True if the current date/time is before end * in-range - True if the current date/time is after start and before end * date-spec - performs a cron-like comparison to the current date/time | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/_time_date_based_expressions |
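As a hedged sketch of how such date expressions are typically used (the resource name Webserver, the dates, and the exact pcs syntax are assumptions that may vary by release), a location constraint rule might look like:
pcs constraint location Webserver rule score=INFINITY date in_range 2025-01-01 to 2025-07-01
pcs constraint location Webserver rule score=INFINITY date-spec hours=9-16 weekdays=1-5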
Chapter 13. Creating users | Chapter 13. Creating users You can create as many Business Central users as you require. User privileges and settings are controlled by the roles assigned to a user and the groups that a user belongs to. For this example, you must create two new users: Katy who will act as the bank's loan manager and approver, and Bill who will act as the broker requesting the loan. For more information on creating users, see the Creating users chapter of Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 . In Business Central, you can use groups and roles to control permissions for a collection of users. You can create as many groups and roles as you want but a group must have at least one user. For this example, the user or users working on the tasks must be assigned to one or more of the following groups and roles: approver group: For the Qualify task broker group: For the Correct Data and Increase Down Payment tasks manager role: For the Final Approval task Procedure Click the gear icon in the upper-right corner and click Users . Click , enter Katy , click , and click Create . Click Yes to set a password, enter Katy in both fields, and click Change . Enter Bill , click , and click Create . Click Yes to set a password, enter Bill in both fields, and click Change . Click the Groups tab and click , enter approver , and click . Select Katy from the user list, and click Add selected users . Enter broker , and click . Select Bill from the user list, and click Add selected users . Click the Users tab, select Katy , and click Edit Roles Add roles . Select manager , click Add to selected roles , and click Save . Click the Groups tab and click Edit Groups Add to groups . Select approver and kie-server , and click Add to selected groups . Click Save . Click the Users tab, select Bill from the user list, and click Edit Roles Add roles . Select user , and click Add to selected roles . Click the Groups tab, click , select kie-server , and click Add to selected groups . Click Save . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/creating-users-proc |
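A hedged, minimal sketch of scripting the same users on a Red Hat JBoss EAP installation that uses property-file authentication (the EAP_HOME path and the add-user.sh flags are assumptions; the Business Central procedure above is the documented path):
$EAP_HOME/bin/add-user.sh -a -u Katy -p Katy -g approver,manager,kie-server
$EAP_HOME/bin/add-user.sh -a -u Bill -p Bill -g broker,user,kie-server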
Chapter 88. versions | Chapter 88. versions This chapter describes the commands under the versions command. 88.1. versions show Show available versions of services Usage: Table 88.1. Command arguments Value Summary -h, --help Show this help message and exit --all-interfaces Show values for all interfaces --interface <interface> Show versions for a specific interface. --region-name <region_name> Show versions for a specific region. --service <service> Show versions for a specific service. The argument should be either an exact match to what is in the catalog or a known official value or alias from service-types-authority ( https://service-types.openstack.org/ ) --status <status> Show versions for a specific status. Valid values are: - SUPPORTED - CURRENT - DEPRECATED - EXPERIMENTAL Table 88.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 88.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 88.4. JSON formatter options Value Summary --noindent Whether to disable indenting the JSON Table 88.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack versions show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-interfaces | --interface <interface>] [--region-name <region_name>] [--service <service>] [--status <status>]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/versions |
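For instance (the service type, status, and region name are illustrative assumptions), to list only the supported identity-service versions in one region as JSON:
openstack versions show --service identity --status SUPPORTED --region-name RegionOne -f json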
Chapter 1. Migrating Red Hat Single Sign-On 7.6 to Red Hat build of Keycloak | Chapter 1. Migrating Red Hat Single Sign-On 7.6 to Red Hat build of Keycloak The purpose of this guide is to document the steps that are required to successfully migrate Red Hat Single Sign-On 7.6 to Red Hat build of Keycloak 24.0. The instructions address migration of the following elements: Red Hat Single Sign-On 7.6 server Operator deployments on OpenShift Template deployments on OpenShift Applications secured by Red Hat Single Sign-On 7.6 Custom providers Custom themes This guide also includes guidelines for migrating upstream Keycloak to Red Hat build of Keycloak 24.0. Before you start the migration, you might consider installing a new instance of Red Hat build of Keycloak to become familiar with the changes for this release. See the Red Hat build of Keycloak Getting Started Guide . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/migration_guide/migrating_red_hat_single_sign_on_7_6_to_red_hat_build_of_keycloak |
Chapter 13. Troubleshooting LVM | Chapter 13. Troubleshooting LVM You can use Logical Volume Manager (LVM) tools to troubleshoot a variety of issues in LVM volumes and groups. 13.1. Gathering diagnostic data on LVM If an LVM command is not working as expected, you can gather diagnostics in the following ways. Procedure Use the following methods to gather different kinds of diagnostic data: Add the -v argument to any LVM command to increase the verbosity level of the command output. Verbosity can be further increased by adding additional v's . A maximum of four such v's is allowed, for example, -vvvv . In the log section of the /etc/lvm/lvm.conf configuration file, increase the value of the level option. This causes LVM to provide more details in the system log. If the problem is related to the logical volume activation, enable LVM to log messages during the activation: Set the activation = 1 option in the log section of the /etc/lvm/lvm.conf configuration file. Execute the LVM command with the -vvvv option. Examine the command output. Reset the activation option to 0 . If you do not reset the option to 0 , the system might become unresponsive during low memory situations. Display an information dump for diagnostic purposes: Display additional system information: Examine the last backup of the LVM metadata in the /etc/lvm/backup/ directory and archived versions in the /etc/lvm/archive/ directory. Check the current configuration information: Check the /run/lvm/hints cache file for a record of which devices have physical volumes on them. Additional resources lvmdump(8) man page on your system 13.2. Displaying information about failed LVM devices Troubleshooting information about a failed Logical Volume Manager (LVM) volume can help you determine the reason of the failure. You can check the following examples of the most common LVM volume failures. Example 13.1. Failed volume groups In this example, one of the devices that made up the volume group myvg failed. The volume group usability then depends on the type of failure. For example, the volume group is still usable if RAID volumes are also involved. You can also see information about the failed device. Example 13.2. Failed logical volume In this example, one of the devices failed. This can be a reason for the logical volume in the volume group to fail. The command output shows the failed logical volumes. Example 13.3. Failed image of a RAID logical volume The following examples show the command output from the pvs and lvs utilities when an image of a RAID logical volume has failed. The logical volume is still usable. 13.3. Removing lost LVM physical volumes from a volume group If a physical volume fails, you can activate the remaining physical volumes in the volume group and remove all the logical volumes that used that physical volume from the volume group. Procedure Activate the remaining physical volumes in the volume group: Check which logical volumes will be removed: Remove all the logical volumes that used the lost physical volume from the volume group: Optional: If you accidentally removed logical volumes that you wanted to keep, you can reverse the vgreduce operation: Warning If you remove a thin pool, LVM cannot reverse the operation. 13.4. 
Finding the metadata of a missing LVM physical volume If the volume group's metadata area of a physical volume is accidentally overwritten or otherwise destroyed, you get an error message indicating that the metadata area is incorrect, or that the system was unable to find a physical volume with a particular UUID. This procedure finds the latest archived metadata of a physical volume that is missing or corrupted. Procedure Find the archived metadata file of the volume group that contains the physical volume. The archived metadata files are located at the /etc/lvm/archive/ volume-group-name_backup-number .vg path: Replace 00000-1248998876 with the backup-number. Select the last known valid metadata file, which has the highest number for the volume group. Find the UUID of the physical volume. Use one of the following methods. List the logical volumes: Examine the archived metadata file. Find the UUID as the value labeled id = in the physical_volumes section of the volume group configuration. Deactivate the volume group using the --partial option: 13.5. Restoring metadata on an LVM physical volume This procedure restores metadata on a physical volume that is either corrupted or replaced with a new device. You might be able to recover the data from the physical volume by rewriting the metadata area on the physical volume. Warning Do not attempt this procedure on a working LVM logical volume. You will lose your data if you specify the incorrect UUID. Prerequisites You have identified the metadata of the missing physical volume. For details, see Finding the metadata of a missing LVM physical volume . Procedure Restore the metadata on the physical volume: Note The command overwrites only the LVM metadata areas and does not affect the existing data areas. Example 13.4. Restoring a physical volume on /dev/vdb1 The following example labels the /dev/vdb1 device as a physical volume with the following properties: The UUID of FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk The metadata information contained in VG_00050.vg , which is the most recent good archived metadata for the volume group Restore the metadata of the volume group: Display the logical volumes on the volume group: The logical volumes are currently inactive. For example: If the segment type of the logical volumes is RAID, resynchronize the logical volumes: Activate the logical volumes: If the on-disk LVM metadata takes at least as much space as what overrode it, this procedure can recover the physical volume. If what overrode the metadata went past the metadata area, the data on the volume may have been affected. You might be able to use the fsck command to recover that data. Verification Display the active logical volumes: 13.6. Rounding errors in LVM output LVM commands that report the space usage in volume groups round the reported number to 2 decimal places to provide human-readable output. This includes the vgdisplay and vgs utilities. As a result of the rounding, the reported value of free space might be larger than what the physical extents on the volume group provide. If you attempt to create a logical volume the size of the reported free space, you might get the following error: To work around the error, you must examine the number of free physical extents on the volume group, which is the accurate value of free space. You can then use the number of extents to create the logical volume successfully. 13.7. 
Preventing the rounding error when creating an LVM volume When creating an LVM logical volume, you can specify the number of logical extents of the logical volume to avoid rounding error. Procedure Find the number of free physical extents in the volume group: Example 13.5. Free extents in a volume group For example, the following volume group has 8780 free physical extents: Create the logical volume. Enter the volume size in extents rather than bytes. Example 13.6. Creating a logical volume by specifying the number of extents Example 13.7. Creating a logical volume to occupy all the remaining space Alternatively, you can extend the logical volume to use a percentage of the remaining free space in the volume group. For example: Verification Check the number of extents that the volume group now uses: 13.8. LVM metadata and their location on disk LVM headers and metadata areas are available in different offsets and sizes. The default LVM disk header: Is found in label_header and pv_header structures. Is in the second 512-byte sector of the disk. Note that if a non-default location was specified when creating the physical volume (PV), the header can also be in the first or third sector. The standard LVM metadata area: Begins 4096 bytes from the start of the disk. Ends 1 MiB from the start of the disk. Begins with a 512 byte sector containing the mda_header structure. A metadata text area begins after the mda_header sector and goes to the end of the metadata area. LVM VG metadata text is written in a circular fashion into the metadata text area. The mda_header points to the location of the latest VG metadata within the text area. You can print LVM headers from a disk by using the # pvck --dump headers /dev/sda command. This command prints label_header , pv_header , mda_header , and the location of metadata text if found. Bad fields are printed with the CHECK prefix. The LVM metadata area offset will match the page size of the machine that created the PV, so the metadata area can also begin 8K, 16K or 64K from the start of the disk. Larger or smaller metadata areas can be specified when creating the PV, in which case the metadata area may end at locations other than 1 MiB. The pv_header specifies the size of the metadata area. When creating a PV, a second metadata area can be optionally enabled at the end of the disk. The pv_header contains the locations of the metadata areas. 13.9. Extracting VG metadata from a disk Choose one of the following procedures to extract VG metadata from a disk, depending on your situation. For information about how to save extracted metadata, see Saving extracted metadata to a file . Note For repair, you can use backup files in /etc/lvm/backup/ without extracting metadata from disk. Procedure Print current metadata text as referenced from valid mda_header : Example 13.8. Metadata text from valid mda_header Print the locations of all metadata copies found in the metadata area, based on finding a valid mda_header : Example 13.9. Locations of metadata copies in the metadata area Search for all copies of metadata in the metadata area without using an mda_header , for example, if headers are missing or damaged: Example 13.10. Copies of metadata in the metadata area without using an mda_header Include the -v option in the dump command to show the description from each copy of metadata: Example 13.11. Showing description from each copy of metadata This file can be used for repair. The first metadata area is used by default for dump metadata. 
If the disk has a second metadata area at the end of the disk, you can use the --settings "mda_num=2" option to use the second metadata area for dump metadata instead. 13.10. Saving extracted metadata to a file If you need to use dumped metadata for repair, you must save the extracted metadata to a file by using the -f option and the --settings option. Procedure If -f <filename> is added to --dump metadata , the raw metadata is written to the named file. You can use this file for repair. If -f <filename> is added to --dump metadata_all or --dump metadata_search , then raw metadata from all locations is written to the named file. To save one instance of metadata text from --dump metadata_all|metadata_search , add --settings "metadata_offset=<offset>" where <offset> is from the listing output "metadata at <offset>". Example 13.12. Output of the command 13.11. Repairing a disk with damaged LVM headers and metadata using the pvcreate and the vgcfgrestore commands You can restore metadata and headers on a physical volume that is either corrupted or replaced with a new device. You might be able to recover the data from the physical volume by rewriting the metadata area on the physical volume. Warning These instructions should be used with extreme caution, and only if you are familiar with the implications of each command, the current layout of the volumes, the layout that you need to achieve, and the contents of the backup metadata file. These commands have the potential to corrupt data, and as such, it is recommended that you contact Red Hat Global Support Services for assistance in troubleshooting. Prerequisites You have identified the metadata of the missing physical volume. For details, see Finding the metadata of a missing LVM physical volume . Procedure Collect the following information needed for the pvcreate and vgcfgrestore commands. You can collect the information about your disk and UUID by running the # pvs -o+uuid command. metadata-file is the path to the most recent metadata backup file for the VG, for example, /etc/lvm/backup/ <vg-name> vg-name is the name of the VG that has the damaged or missing PV. UUID of the PV that was damaged on this device is the value taken from the output of the # pvs -o+uuid command. disk is the name of the disk where the PV is supposed to be, for example, /dev/sdb . Be certain this is the correct disk, or seek help; otherwise, following these steps may lead to data loss. Recreate LVM headers on the disk: Optionally, verify that the headers are valid: Restore the VG metadata on the disk: Optionally, verify the metadata is restored: If there is no metadata backup file for the VG, you can get one by using the procedure in Saving extracted metadata to a file . Verification To verify that the new physical volume is intact and the volume group is functioning correctly, check the output of the following command: Additional resources pvck(8) man page Extracting LVM metadata backups from a physical volume How to repair metadata on physical volume online? (Red Hat Knowledgebase) How do I restore a volume group in Red Hat Enterprise Linux if one of the physical volumes that constitute the volume group has failed? (Red Hat Knowledgebase) 13.12. Repairing a disk with damaged LVM headers and metadata using the pvck command This is an alternative to the Repairing a disk with damaged LVM headers and metadata using the pvcreate and the vgcfgrestore commands . There may be cases where the pvcreate and the vgcfgrestore commands do not work.
This method is more targeted at the damaged disk. It uses a metadata input file that was extracted by pvck --dump , or a backup file from /etc/lvm/backup . When possible, use metadata saved by pvck --dump from another PV in the same VG, or from a second metadata area on the PV. For more information, see Saving extracted metadata to a file . Procedure Repair the headers and metadata on the disk: where <metadata-file> is a file containing the most recent metadata for the VG. This can be /etc/lvm/backup/ vg-name , or it can be a file containing raw metadata text from the pvck --dump metadata_search command output. <disk> is the name of the disk where the PV is supposed to be, for example, /dev/sdb . To prevent data loss, verify that it is the correct disk. If you are not certain the disk is correct, contact Red Hat Support. Note If the metadata file is a backup file, pvck --repair should be run on each PV that holds metadata in the VG. If the metadata file is raw metadata that has been extracted from another PV, pvck --repair needs to be run only on the damaged PV. Verification To check that the new physical volume is intact and the volume group is functioning correctly, check the outputs of the following commands: Additional resources pvck(8) man page Extracting LVM metadata backups from a physical volume . How to repair metadata on physical volume online? (Red Hat Knowledgebase) How do I restore a volume group in Red Hat Enterprise Linux if one of the physical volumes that constitute the volume group has failed? (Red Hat Knowledgebase) 13.13. Troubleshooting LVM RAID You can troubleshoot various issues in LVM RAID devices to correct data errors, recover devices, or replace failed devices. 13.13.1. Checking data coherency in a RAID logical volume LVM provides scrubbing support for RAID logical volumes. RAID scrubbing is the process of reading all the data and parity blocks in an array and checking to see whether they are coherent. The lvchange --syncaction repair command initiates a background synchronization action on the array. Procedure Optional: Control the rate at which a RAID logical volume is initialized by setting any one of the following options: --maxrecoveryrate Rate[bBsSkKmMgG] sets the maximum recovery rate for a RAID logical volume so that it will not expel nominal I/O operations. --minrecoveryrate Rate[bBsSkKmMgG] sets the minimum recovery rate for a RAID logical volume to ensure that I/O for sync operations achieves a minimum throughput, even when heavy nominal I/O is present. Replace 4K with the recovery rate value, which is an amount per second for each device in the array. If you provide no suffix, the options assume kiB per second per device. When you perform a RAID scrubbing operation, the background I/O required by the sync actions can crowd out other I/O to LVM devices, such as updates to volume group metadata. This might cause the other LVM operations to slow down. Note You can also use these maximum and minimum I/O rates while creating a RAID device. For example, lvcreate --type raid10 -i 2 -m 1 -L 10G --maxrecoveryrate 128 -n my_lv my_vg creates a 2-way RAID10 array my_lv, which is in the volume group my_vg with 2 stripes, that is 10G in size, with a maximum recovery rate of 128 kiB/sec/device; this command also appears in the command listing below. Display the number of discrepancies in the array, without repairing them: This command initiates a background synchronization action on the array. Optional: View the /var/log/syslog file for the kernel messages.
Correct the discrepancies in the array: This command repairs or replaces failed devices in a RAID logical volume. You can view the /var/log/syslog file for the kernel messages after executing this command. Verification Display information about the scrubbing operation: Additional resources lvchange(8) and lvmraid(7) man pages on your system Minimum and maximum I/O rate options 13.13.2. Replacing a failed RAID device in a logical volume RAID is not like traditional LVM mirroring. With LVM mirroring, you had to remove failed devices or the mirrored logical volume would hang; RAID arrays, by contrast, continue running with failed devices. For RAID levels other than RAID1, removing a device would mean converting to a lower RAID level, for example, from RAID6 to RAID5, or from RAID4 or RAID5 to RAID0. Instead of removing a failed device and allocating a replacement yourself, with LVM you can replace a failed device that serves as a physical volume in a RAID logical volume by using the --repair argument of the lvconvert command. Prerequisites The volume group includes a physical volume that provides enough free capacity to replace the failed device. If no physical volume with enough free extents is available on the volume group, add a new, sufficiently large physical volume by using the vgextend utility. Procedure View the RAID logical volume: View the RAID logical volume after the /dev/sdc device fails: Replace the failed device: Optional: Manually specify the physical volume that replaces the failed device: Examine the logical volume with the replacement: Until you remove the failed device from the volume group, LVM utilities still indicate that LVM cannot find the failed device. Remove the failed device from the volume group: Verification View the available physical volumes after removing the failed device: Examine the logical volume after replacing the failed device: Additional resources lvconvert(8) and vgreduce(8) man pages on your system 13.14. Troubleshooting duplicate physical volume warnings for multipathed LVM devices When using LVM with multipathed storage, LVM commands that list a volume group or logical volume might display messages such as the following: You can troubleshoot these warnings to understand why LVM displays them, or to hide the warnings. 13.14.1. Root cause of duplicate PV warnings When multipath software such as Device Mapper Multipath (DM Multipath), EMC PowerPath, or Hitachi Dynamic Link Manager (HDLM) manages storage devices on the system, each path to a particular logical unit (LUN) is registered as a different SCSI device. The multipath software then creates a new device that maps to those individual paths. Because each LUN has multiple device nodes in the /dev directory that point to the same underlying data, all the device nodes contain the same LVM metadata. Table 13.1. Example device mappings in different multipath software Multipath software SCSI paths to a LUN Multipath device mapping to paths DM Multipath /dev/sdb and /dev/sdc /dev/mapper/mpath1 or /dev/mapper/mpatha EMC PowerPath /dev/emcpowera HDLM /dev/sddlmab As a result of the multiple device nodes, LVM tools find the same metadata multiple times and report them as duplicates. 13.14.2. Cases of duplicate PV warnings LVM displays the duplicate PV warnings in either of the following cases: Single paths to the same device The two devices displayed in the output are both single paths to the same device.
The following example shows a duplicate PV warning in which the duplicate devices are both single paths to the same device. If you list the current DM Multipath topology using the multipath -ll command, you can find both /dev/sdd and /dev/sdf under the same multipath map. These duplicate messages are only warnings and do not mean that the LVM operation has failed. Rather, they are alerting you that LVM uses only one of the devices as a physical volume and ignores the others. If the messages indicate that LVM chooses the incorrect device or if the warnings are disruptive to users, you can apply a filter. The filter configures LVM to search only the necessary devices for physical volumes, and to leave out any underlying paths to multipath devices. As a result, the warnings no longer appear. Multipath maps The two devices displayed in the output are both multipath maps. The following examples show a duplicate PV warning for two devices that are both multipath maps. The duplicate physical volumes are located on two different devices rather than on two different paths to the same device. This situation is more serious than duplicate warnings for devices that are both single paths to the same device. These warnings often mean that the machine is accessing devices that it should not access: for example, LUN clones or mirrors. Unless you clearly know which devices you should remove from the machine, this situation might be unrecoverable. Red Hat recommends that you contact Red Hat Technical Support to address this issue. 13.14.3. Example LVM device filters that prevent duplicate PV warnings The following examples show LVM device filters that avoid the duplicate physical volume warnings that are caused by multiple storage paths to a single logical unit (LUN). You can configure the filter for logical volume manager (LVM) to check metadata for all devices. Metadata includes the local hard disk drive with the root volume group on it and any multipath devices. By rejecting the underlying paths to a multipath device (such as /dev/sdb , /dev/sdd ), you can avoid these duplicate PV warnings, because LVM finds each unique metadata area once on the multipath device itself. To accept the second partition on the first hard disk drive and any device mapper (DM) Multipath devices and reject everything else, enter: To accept all HP SmartArray controllers and any EMC PowerPath devices, enter: To accept any partitions on the first IDE drive and any multipath devices, enter: Hedged sketches of these three filters are included at the end of the command listing below. 13.14.4. Additional resources Limiting LVM device visibility and usage The LVM device filter | [
"lvmdump",
"lvs -v",
"pvs --all",
"dmsetup info --columns",
"lvmconfig",
"vgs --options +devices /dev/vdb1: open failed: No such device or address /dev/vdb1: open failed: No such device or address WARNING: Couldn't find device with uuid 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s. WARNING: VG myvg is missing PV 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s (last written to /dev/sdb1). WARNING: Couldn't find all devices for LV myvg/mylv while checking used and assumed devices. VG #PV #LV #SN Attr VSize VFree Devices myvg 2 2 0 wz-pn- <3.64t <3.60t [unknown](0) myvg 2 2 0 wz-pn- <3.64t <3.60t [unknown](5120),/dev/vdb1(0)",
"lvs --all --options +devices /dev/vdb1: open failed: No such device or address /dev/vdb1: open failed: No such device or address WARNING: Couldn't find device with uuid 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s. WARNING: VG myvg is missing PV 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s (last written to /dev/sdb1). WARNING: Couldn't find all devices for LV myvg/mylv while checking used and assumed devices. LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices mylv myvg -wi-a---p- 20.00g [unknown](0) [unknown](5120),/dev/sdc1(0)",
"pvs Error reading device /dev/sdc1 at 0 length 4. Error reading device /dev/sdc1 at 4096 length 4. Couldn't find device with uuid b2J8oD-vdjw-tGCA-ema3-iXob-Jc6M-TC07Rn. WARNING: Couldn't find all devices for LV myvg/my_raid1_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV myvg/my_raid1_rmeta_1 while checking used and assumed devices. PV VG Fmt Attr PSize PFree /dev/sda2 rhel_bp-01 lvm2 a-- <464.76g 4.00m /dev/sdb1 myvg lvm2 a-- <836.69g 736.68g /dev/sdd1 myvg lvm2 a-- <836.69g <836.69g /dev/sde1 myvg lvm2 a-- <836.69g <836.69g [unknown] myvg lvm2 a-m <836.69g 736.68g",
"lvs -a --options name,vgname,attr,size,devices myvg Couldn't find device with uuid b2J8oD-vdjw-tGCA-ema3-iXob-Jc6M-TC07Rn. WARNING: Couldn't find all devices for LV myvg/my_raid1_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV myvg/my_raid1_rmeta_1 while checking used and assumed devices. LV VG Attr LSize Devices my_raid1 myvg rwi-a-r-p- 100.00g my_raid1_rimage_0(0),my_raid1_rimage_1(0) [my_raid1_rimage_0] myvg iwi-aor--- 100.00g /dev/sdb1(1) [my_raid1_rimage_1] myvg Iwi-aor-p- 100.00g [unknown](1) [my_raid1_rmeta_0] myvg ewi-aor--- 4.00m /dev/sdb1(0) [my_raid1_rmeta_1] myvg ewi-aor-p- 4.00m [unknown](0)",
"vgchange --activate y --partial myvg",
"vgreduce --removemissing --test myvg",
"vgreduce --removemissing --force myvg",
"vgcfgrestore myvg",
"cat /etc/lvm/archive/ myvg_00000-1248998876 .vg",
"lvs --all --options +devices Couldn't find device with uuid ' FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk '.",
"vgchange --activate n --partial myvg PARTIAL MODE. Incomplete logical volumes will be processed. WARNING: Couldn't find device with uuid 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s . WARNING: VG myvg is missing PV 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s (last written to /dev/vdb1 ). 0 logical volume(s) in volume group \" myvg \" now active",
"pvcreate --uuid physical-volume-uuid \\ --restorefile /etc/lvm/archive/ volume-group-name_backup-number .vg \\ block-device",
"pvcreate --uuid \"FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk\" \\ --restorefile /etc/lvm/archive/VG_00050.vg \\ /dev/vdb1 Physical volume \"/dev/vdb1\" successfully created",
"vgcfgrestore myvg Restored volume group myvg",
"lvs --all --options +devices myvg",
"LV VG Attr LSize Origin Snap% Move Log Copy% Devices mylv myvg -wi--- 300.00G /dev/vdb1 (0),/dev/vdb1(0) mylv myvg -wi--- 300.00G /dev/vdb1 (34728),/dev/vdb1(0)",
"lvchange --resync myvg/mylv",
"lvchange --activate y myvg/mylv",
"lvs --all --options +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices mylv myvg -wi--- 300.00G /dev/vdb1 (0),/dev/vdb1(0) mylv myvg -wi--- 300.00G /dev/vdb1 (34728),/dev/vdb1(0)",
"Insufficient free extents",
"vgdisplay myvg",
"--- Volume group --- VG Name myvg System ID Format lvm2 Metadata Areas 4 Metadata Sequence No 6 VG Access read/write [...] Free PE / Size 8780 / 34.30 GB",
"lvcreate --extents 8780 --name mylv myvg",
"lvcreate --extents 100%FREE --name mylv myvg",
"vgs --options +vg_free_count,vg_extent_count VG #PV #LV #SN Attr VSize VFree Free #Ext myvg 2 1 0 wz--n- 34.30G 0 0 8780",
"pvck --dump metadata <disk>",
"pvck --dump metadata /dev/sdb metadata text at 172032 crc Oxc627522f # vgname test segno 59 --- <raw metadata from disk> ---",
"pvck --dump metadata_all <disk>",
"pvck --dump metadata_all /dev/sdb metadata at 4608 length 815 crc 29fcd7ab vg test seqno 1 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv metadata at 5632 length 1144 crc 50ea61c3 vg test seqno 2 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv metadata at 7168 length 1450 crc 5652ea55 vg test seqno 3 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv",
"pvck --dump metadata_search <disk>",
"pvck --dump metadata_search /dev/sdb Searching for metadata at offset 4096 size 1044480 metadata at 4608 length 815 crc 29fcd7ab vg test seqno 1 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv metadata at 5632 length 1144 crc 50ea61c3 vg test seqno 2 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv metadata at 7168 length 1450 crc 5652ea55 vg test seqno 3 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv",
"pvck --dump metadata -v <disk>",
"pvck --dump metadata -v /dev/sdb metadata text at 199680 crc 0x628cf243 # vgname my_vg seqno 40 --- my_vg { id = \"dmEbPi-gsgx-VbvS-Uaia-HczM-iu32-Rb7iOf\" seqno = 40 format = \"lvm2\" status = [\"RESIZEABLE\", \"READ\", \"WRITE\"] flags = [] extent_size = 8192 max_lv = 0 max_pv = 0 metadata_copies = 0 physical_volumes { pv0 { id = \"8gn0is-Hj8p-njgs-NM19-wuL9-mcB3-kUDiOQ\" device = \"/dev/sda\" device_id_type = \"sys_wwid\" device_id = \"naa.6001405e635dbaab125476d88030a196\" status = [\"ALLOCATABLE\"] flags = [] dev_size = 125829120 pe_start = 8192 pe_count = 15359 } pv1 { id = \"E9qChJ-5ElL-HVEp-rc7d-U5Fg-fHxL-2QLyID\" device = \"/dev/sdb\" device_id_type = \"sys_wwid\" device_id = \"naa.6001405f3f9396fddcd4012a50029a90\" status = [\"ALLOCATABLE\"] flags = [] dev_size = 125829120 pe_start = 8192 pe_count = 15359 }",
"pvck --dump metadata_search --settings metadata_offset=5632 -f meta.txt /dev/sdb Searching for metadata at offset 4096 size 1044480 metadata at 5632 length 1144 crc 50ea61c3 vg test seqno 2 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv head -2 meta.txt test { id = \"FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv\"",
"pvcreate --restorefile <metadata-file> --uuid <UUID> <disk>",
"pvck --dump headers <disk>",
"vgcfgrestore --file <metadata-file> <vg-name>",
"pvck --dump metadata <disk>",
"vgs",
"pvck --repair -f <metadata-file> <disk>",
"vgs <vgname>",
"pvs <pvname>",
"lvs <lvname>",
"lvchange --maxrecoveryrate 4K my_vg/my_lv Logical volume _my_vg/my_lv_changed.",
"lvchange --syncaction repair my_vg/my_lv",
"lvchange --syncaction check my_vg/my_lv",
"lvchange --syncaction repair my_vg/my_lv",
"lvs -o +raid_sync_action,raid_mismatch_count my_vg/my_lv LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert SyncAction Mismatches my_lv my_vg rwi-a-r--- 500.00m 100.00 idle 0",
"lvs --all --options name,copy_percent,devices my_vg LV Cpy%Sync Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)",
"lvs --all --options name,copy_percent,devices my_vg /dev/sdc: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. WARNING: Couldn't find all devices for LV my_vg/my_lv_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV my_vg/my_lv_rmeta_1 while checking used and assumed devices. LV Cpy%Sync Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] [unknown](1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] [unknown](0) [my_lv_rmeta_2] /dev/sdd1(0)",
"lvconvert --repair my_vg/my_lv /dev/sdc: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. WARNING: Couldn't find all devices for LV my_vg/my_lv_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV my_vg/my_lv_rmeta_1 while checking used and assumed devices. Attempt to replace failed RAID images (requires full device resync)? [y/n]: y Faulty devices in my_vg/my_lv successfully replaced.",
"lvconvert --repair my_vg/my_lv replacement_pv",
"lvs --all --options name,copy_percent,devices my_vg /dev/sdc: open failed: No such device or address /dev/sdc1: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. LV Cpy%Sync Devices my_lv 43.79 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)",
"vgreduce --removemissing my_vg",
"pvscan PV /dev/sde1 VG rhel_virt-506 lvm2 [<7.00 GiB / 0 free] PV /dev/sdb1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free] PV /dev/sdd1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free] PV /dev/sdd1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free]",
"lvs --all --options name,copy_percent,devices my_vg my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)",
"Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/dm-5 not /dev/sdd Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowerb not /dev/sde Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sddlmab not /dev/sdf",
"Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sdd not /dev/sdf",
"Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/mapper/mpatha not /dev/mapper/mpathc Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowera not /dev/emcpowerh",
"filter = [ \"a|/dev/sda2USD|\", \"a|/dev/mapper/mpath.*|\", \"r|.*|\" ]",
"filter = [ \"a|/dev/cciss/.*|\", \"a|/dev/emcpower.*|\", \"r|.*|\" ]",
"filter = [ \"a|/dev/hda.*|\", \"a|/dev/mapper/mpath.*|\", \"r|.*|\" ]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/troubleshooting-lvm_configuring-and-managing-logical-volumes |
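Taken together, the restore commands in the listing above form one short sequence. The following is a minimal sketch, not a verbatim procedure from the guide: it assumes the missing PV's UUID was taken from the pvs/lvs warnings, that /etc/lvm/archive/VG_00050.vg is a known-good archive (inspect it with cat first), and that /dev/vdb1 is the replacement device.

# Minimal restore sketch (assumptions noted above).
# Re-create the PV label on the replacement device, reusing the old UUID
# so the archived metadata still matches.
pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk" \
         --restorefile /etc/lvm/archive/VG_00050.vg \
         /dev/vdb1

# Restore the volume group metadata recorded in the archive.
vgcfgrestore myvg

# Resynchronize and reactivate the logical volume, then verify the layout.
lvchange --resync myvg/mylv
lvchange --activate y myvg/mylv
lvs --all --options +devices myvg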
Chapter 2. Selecting a cluster installation method and preparing it for users | Chapter 2. Selecting a cluster installation method and preparing it for users Before you install OpenShift Container Platform, decide what kind of installation process to follow and make sure that you have all of the required resources to prepare the cluster for users. 2.1. Selecting a cluster installation type Before you install an OpenShift Container Platform cluster, you need to select the best installation instructions to follow. Think about your answers to the following questions to select the best option. 2.1.1. Do you want to install and manage an OpenShift Container Platform cluster yourself? If you want to install and manage OpenShift Container Platform yourself, you can install it on the following platforms: Alibaba Cloud Amazon Web Services (AWS) on 64-bit x86 instances Amazon Web Services (AWS) on 64-bit ARM instances Microsoft Azure Microsoft Azure Stack Hub Google Cloud Platform (GCP) Red Hat OpenStack Platform (RHOSP) Red Hat Virtualization (RHV) IBM Cloud VPC IBM Z and LinuxONE IBM Z and LinuxONE for Red Hat Enterprise Linux (RHEL) KVM IBM Power Nutanix VMware vSphere VMware Cloud (VMC) on AWS Bare metal or other platform agnostic infrastructure You can deploy an OpenShift Container Platform 4 cluster to both on-premise hardware and to cloud hosting services, but all of the machines in a cluster must be in the same datacenter or cloud hosting service. If you want to use OpenShift Container Platform but do not want to manage the cluster yourself, you have several managed service options. If you want a cluster that is fully managed by Red Hat, you can use OpenShift Dedicated or OpenShift Online . You can also use OpenShift as a managed service on Azure, AWS, IBM Cloud VPC, or Google Cloud. For more information about managed services, see the OpenShift Products page. If you install an OpenShift Container Platform cluster with a cloud virtual machine as a virtual bare metal, the corresponding cloud-based storage is not supported. 2.1.2. Have you used OpenShift Container Platform 3 and want to use OpenShift Container Platform 4? If you used OpenShift Container Platform 3 and want to try OpenShift Container Platform 4, you need to understand how different OpenShift Container Platform 4 is. OpenShift Container Platform 4 weaves the Operators that package, deploy, and manage Kubernetes applications and the operating system that the platform runs on, Red Hat Enterprise Linux CoreOS (RHCOS), together seamlessly. Instead of deploying machines and configuring their operating systems so that you can install OpenShift Container Platform on them, the RHCOS operating system is an integral part of the OpenShift Container Platform cluster. Deploying the operating system for the cluster machines is part of the installation process for OpenShift Container Platform. See Differences between OpenShift Container Platform 3 and 4 . Because you need to provision machines as part of the OpenShift Container Platform cluster installation process, you cannot upgrade an OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. Instead, you must create a new OpenShift Container Platform 4 cluster and migrate your OpenShift Container Platform 3 workloads to it. For more information about migrating, see Migrating from OpenShift Container Platform 3 to 4 overview .
Because you must migrate to OpenShift Container Platform 4, you can use any type of production cluster installation process to create your new cluster. 2.1.3. Do you want to use existing components in your cluster? Because the operating system is integral to OpenShift Container Platform, it is easier to let the installation program for OpenShift Container Platform stand up all of the infrastructure. These are called installer provisioned infrastructure installations. In this type of installation, you can provide some existing infrastructure to the cluster, but the installation program deploys all of the machines that your cluster initially needs. You can deploy an installer-provisioned infrastructure cluster without specifying any customizations to the cluster or its underlying machines to Alibaba Cloud , AWS , Azure , Azure Stack Hub , GCP , Nutanix , or VMC on AWS . These installation methods are the fastest way to deploy a production-capable OpenShift Container Platform cluster. If you need to perform basic configuration for your installer-provisioned infrastructure cluster, such as the instance type for the cluster machines, you can customize an installation for Alibaba Cloud , AWS , Azure , GCP , Nutanix , or VMC on AWS . For installer-provisioned infrastructure installations, you can use an existing VPC in AWS , vNet in Azure , or VPC in GCP . You can also reuse part of your networking infrastructure so that your cluster in AWS , Azure , GCP , or VMC on AWS can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. If you have existing accounts and credentials on these clouds, you can re-use them, but you might need to modify the accounts to have the required permissions to install OpenShift Container Platform clusters on them. You can use the installer-provisioned infrastructure method to create appropriate machine instances on your hardware for RHOSP , RHOSP with Kuryr , RHV , vSphere , and bare metal . Additionally, for vSphere , VMC on AWS , you can also customize additional network parameters during installation. If you want to reuse extensive cloud infrastructure, you can complete a user-provisioned infrastructure installation. With these installations, you manually deploy the machines that your cluster requires during the installation process. If you perform a user-provisioned infrastructure installation on AWS , Azure , Azure Stack Hub , GCP , or VMC on AWS , you can use the provided templates to help you stand up all of the required components. You can also reuse a shared VPC on GCP . Otherwise, you can use the provider-agnostic installation method to deploy a cluster into other clouds. You can also complete a user-provisioned infrastructure installation on your existing hardware. If you use RHOSP , RHV , IBM Z or LinuxONE , IBM Z or LinuxONE with RHEL KVM , IBM Power , or vSphere , use the specific installation instructions to deploy your cluster. If you use other supported hardware, follow the bare metal installation procedure. For some of these platforms, such as RHOSP , vSphere , VMC on AWS , and bare metal , you can also customize additional network parameters during installation. 2.1.4. Do you need extra security for your cluster? If you use a user-provisioned installation method, you can configure a proxy for your cluster. The instructions are included in each installation procedure. 
If you want to prevent your cluster on a public cloud from exposing endpoints externally, you can deploy a private cluster with installer-provisioned infrastructure on AWS , Azure , or GCP . If you need to install your cluster that has limited access to the internet, such as a disconnected or restricted network cluster, you can mirror the installation packages and install the cluster from them. Follow detailed instructions for user-provisioned infrastructure installations into restricted networks for AWS , GCP , IBM Z or LinuxONE , IBM Z or LinuxONE with RHEL KVM , IBM Power , vSphere , VMC on AWS , or bare metal . You can also install a cluster into a restricted network using installer-provisioned infrastructure by following detailed instructions for AWS , GCP , VMC on AWS , RHOSP , RHV , and vSphere . If you need to deploy your cluster to an AWS GovCloud region , AWS China region , or Azure government region , you can configure those custom regions during an installer-provisioned infrastructure installation. You can also configure the cluster machines to use FIPS Validated / Modules in Process cryptographic libraries during installation. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 2.2. Preparing your cluster for users after installation Some configuration is not required to install the cluster but recommended before your users access the cluster. You can customize the cluster itself by customizing the Operators that make up your cluster and integrate your cluster with other required systems, such as an identity provider. For a production cluster, you must configure the following integrations: Persistent storage An identity provider Monitoring core OpenShift Container Platform components 2.3. Preparing your cluster for workloads Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application build strategy , you might need to make provisions for low-latency workloads or to protect sensitive workloads . You can also configure monitoring for application workloads. If you plan to run Windows workloads , you must enable hybrid networking with OVN-Kubernetes during the installation process; hybrid networking cannot be enabled after your cluster is installed. A hedged configuration sketch appears after the tables at the end of this chapter. 2.4. Supported installation methods for different platforms You can perform different types of installations on different platforms. Note Not all installation options are supported for all platforms, as shown in the following tables. A checkmark indicates that the option is supported and links to the relevant section. Table 2.1. Installer-provisioned infrastructure options Alibaba AWS(64-bit x86) AWS(64-bit ARM) Azure Azure Stack Hub GCP Nutanix RHOSP RHV Bare metal(64-bit x86) Bare metal(64-bit ARM) vSphere VMC IBM Cloud VPC IBM Z IBM Power Default [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Custom [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Network customization [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Restricted network [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Private clusters [✓] [✓] [✓] [✓] Existing virtual private networks [✓] [✓] [✓] [✓] Government regions [✓] [✓] Secret regions [✓] China regions [✓] Table 2.2.
User-provisioned infrastructure options Alibaba AWS (64-bit x86) AWS (64-bit ARM) Azure Azure Stack Hub GCP Nutanix RHOSP RHV Bare metal (64-bit x86) Bare metal (64-bit ARM) vSphere VMC IBM Cloud VPC IBM Z IBM Z with RHEL KVM IBM Power Platform agnostic Custom [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Network customization [✓] [✓] [✓] [✓] [✓] Restricted network [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Shared VPC hosted outside of cluster project [✓] | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/installing/installing-preparing |
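For the Windows-workloads note in section 2.3 above, the following is a hedged sketch of what enabling OVN-Kubernetes hybrid networking at install time can look like. The manifest name cluster-network-03-config.yml, the Network resource fields, and the hybrid CIDR are assumptions drawn from common practice, not from this chapter; confirm them against the installation documentation for your platform before use.

# Sketch: create the manifests, then add a cluster network manifest that
# turns on the hybrid overlay (assumed API; CIDR values are illustrative).
./openshift-install create manifests --dir <installation_directory>

cat > <installation_directory>/manifests/cluster-network-03-config.yml <<'EOF'
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23
EOF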
3.5.3. Incrementing Associated Values | 3.5.3. Incrementing Associated Values Use ++ to increment the associated value of a unique key in an array, as in: Again, you can also use a handler function for your index_expression . For example, if you wanted to tally how many times a specific process performed a read to the virtual file system (using the event vfs.read ), you can use the following probe: Example 3.14. vfsreads.stp In Example 3.14, "vfsreads.stp" , the first time that the probe returns the process name gnome-terminal (that is the first time gnome-terminal performs a VFS read), that process name is set as the unique key gnome-terminal with an associated value of 1. The next time that the probe returns the process name gnome-terminal , SystemTap increments the associated value of gnome-terminal by 1. SystemTap performs this operation for all process names as the probe returns them. A runnable variant of this example appears after the command listing below. | [
"array_name [ index_expression ] ++",
"probe vfs.read { reads[execname()] ++ }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/arrayops-increment |
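A runnable variant of Example 3.14, as referenced above. This is a sketch under stated assumptions: the systemtap package and matching kernel debuginfo are installed, and the 10-second window and 5-entry limit are illustrative choices, not part of the original example. Note that a standalone script must declare the array with global.

# Tally VFS reads per process for 10 seconds, then print the five
# busiest processes in descending order of read count. Run as root.
stap -e '
global reads
probe vfs.read { reads[execname()] ++ }
probe timer.s(10) {
  foreach (name in reads- limit 5)
    printf("%-20s %d\n", name, reads[name])
  exit()
}'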
Chapter 19. Installing on OpenStack | Chapter 19. Installing on OpenStack 19.1. Preparing to install on OpenStack You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). 19.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 19.1.2. Choosing a method to install OpenShift Container Platform on OpenStack You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 19.1.2.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Red Hat OpenStack Platform (RHOSP) infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster on OpenStack with customizations : You can install a customized cluster on RHOSP. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on OpenStack with Kuryr : You can install a customized OpenShift Container Platform cluster on RHOSP that uses Kuryr SDN. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Installing a cluster on OpenStack in a restricted network : You can install OpenShift Container Platform on RHOSP in a restricted or disconnected network by creating an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 19.1.2.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on RHOSP infrastructure that you provision, by using one of the following methods: Installing a cluster on OpenStack on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure. By using this installation method, you can integrate your cluster with existing infrastructure and modifications. For installations on user-provisioned infrastructure, you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. You can use the provided Ansible playbooks to assist with the deployment process. Installing a cluster on OpenStack with Kuryr on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure that uses Kuryr SDN. 19.1.3. 
Scanning RHOSP endpoints for legacy HTTPS certificates Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. Run the following script to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the provided script to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. Prerequisites On the machine where you run the script, have the following software: Bash version 4.0 or greater grep OpenStack client jq OpenSSL version 1.1.1l or greater Populate the machine with RHOSP credentials for the target cloud. Procedure Save the following script to your machine: #!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog="USD(mktemp)" san="USD(mktemp)" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints \ | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface=="public") | [USDname, .interface, .url] | join(" ")' \ | sort \ > "USDcatalog" while read -r name interface url; do # Ignore HTTP if [[ USD{url#"http://"} != "USDurl" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#"https://"} # If the schema was not HTTPS, error if [[ "USDnoschema" == "USDurl" ]]; then echo "ERROR (unknown schema): USDname USDinterface USDurl" exit 2 fi # Remove the path and only keep host and port noschema="USD{noschema%%/*}" host="USD{noschema%%:*}" port="USD{noschema##*:}" # Add the port if was implicit if [[ "USDport" == "USDhost" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername "USDhost" -connect "USDhost:USDport" </dev/null 2>/dev/null \ | openssl x509 -noout -ext subjectAltName \ > "USDsan" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ "USD(grep -c "Subject Alternative Name" "USDsan" || true)" -gt 0 ]]; then echo "PASS: USDname USDinterface USDurl" else invalid=USD((invalid+1)) echo "INVALID: USDname USDinterface USDurl" fi done < "USDcatalog" # clean up temporary files rm "USDcatalog" "USDsan" if [[ USDinvalid -gt 0 ]]; then echo "USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field." exit 1 else echo "All HTTPS certificates for this cloud are valid." fi Run the script. Replace any certificates that the script reports as INVALID with certificates that contain SAN fields. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates will be rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead 19.1.3.1. Scanning RHOSP endpoints for legacy HTTPS certificates manually Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. 
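Before the manual checks that follow, a brief usage sketch for the scanning script from the previous section. The filenames are hypothetical; the exit codes are read directly from the script itself (0 when every certificate carries a SAN field, 1 when legacy certificates are found, 2 for an unknown URL schema).

# Hypothetical usage of the saved scanning script; load your RHOSP
# credentials first so "openstack catalog list" can reach the cloud.
source ~/openrc.sh        # assumption: your RC file location may differ
bash scan-rhosp-endpoints.sh
# 0 = all HTTPS certificates valid, 1 = legacy (CN-only) certificates
# were reported as INVALID, 2 = an endpoint URL had an unknown schema.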
If you do not have access to the prerequisite tools that are listed in "Scanning RHOSP endpoints for legacy HTTPS certificates", perform the following steps to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the following steps to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. Procedure On a command line, run the following command to view the URL of RHOSP public endpoints: USD openstack catalog list Record the URL for each HTTPS endpoint that the command returns. For each public endpoint, note the host and the port. Tip Determine the host of an endpoint by removing the scheme, the port, and the path. For each endpoint, run the following commands to extract the SAN field of the certificate: Set a host variable: USD host=<host_name> Set a port variable: USD port=<port_number> If the URL of the endpoint does not have a port, use the value 443 . Retrieve the SAN field of the certificate: USD openssl s_client -showcerts -servername "USDhost" -connect "USDhost:USDport" </dev/null 2>/dev/null \ | openssl x509 -noout -ext subjectAltName Example output X509v3 Subject Alternative Name: DNS:your.host.example.net For each endpoint, look for output that resembles the example. If there is no output for an endpoint, the certificate of that endpoint is invalid and must be re-issued. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates are rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead 19.2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack Before you install an OpenShift Container Platform cluster that uses single-root I/O virtualization (SR-IOV) or Open vSwitch with the Data Plane Development Kit (OVS-DPDK) on Red Hat OpenStack Platform (RHOSP), you must understand the requirements for each technology and then perform preparatory tasks. 19.2.1. Requirements for clusters on RHOSP that use either SR-IOV or OVS-DPDK If you use SR-IOV or OVS-DPDK with your deployment, you must meet the following requirements: RHOSP compute nodes must use a flavor that supports huge pages. 19.2.1.1. Requirements for clusters on RHOSP that use SR-IOV To use single-root I/O virtualization (SR-IOV) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) SR-IOV deployment . OpenShift Container Platform must support the NICs that you use. For a list of supported NICs, see "About Single Root I/O Virtualization (SR-IOV) hardware networks" in the "Hardware networks" subsection of the "Networking" documentation. For each node that will have an attached SR-IOV NIC, your RHOSP cluster must have: One instance from the RHOSP quota One port attached to the machines subnet One port for each SR-IOV Virtual Function A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs.
For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance . 19.2.1.2. Requirements for clusters on RHOSP that use OVS-DPDK To use Open vSwitch with the Data Plane Development Kit (OVS-DPDK) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) OVS-DPDK deployment by referring to Planning your OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. Configure your RHOSP OVS-DPDK deployment according to Configuring an OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. 19.2.2. Preparing to install a cluster that uses SR-IOV You must configure RHOSP before you install a cluster that uses SR-IOV on it. 19.2.2.1. Creating SR-IOV networks for compute machines If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on. Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required. Prerequisites Your cluster supports SR-IOV. Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation. You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks. Procedure On a command line, create a radio RHOSP network: USD openstack network create radio --provider-physical-network radio --provider-network-type flat --external Create an uplink RHOSP network: USD openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external Create a subnet for the radio network: USD openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio Create a subnet for the uplink network: USD openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink 19.2.3. Preparing to install a cluster that uses OVS-DPDK You must configure RHOSP before you install a cluster that uses OVS-DPDK on it. Complete Creating a flavor and deploying an instance for OVS-DPDK before you install a cluster on RHOSP. After you perform preinstallation tasks, install your cluster by following the most relevant OpenShift Container Platform on RHOSP installation instructions. Then, perform the tasks under "Next steps" on this page. 19.2.4. Next steps For either type of deployment: Configure the Node Tuning Operator with huge pages support . To complete SR-IOV configuration after you deploy your cluster: Install the SR-IOV Operator . Configure your SR-IOV network device . Create SR-IOV compute machines . Consult the following references after you deploy your cluster to improve its performance: A test pod template for clusters that use OVS-DPDK on OpenStack . A test pod template for clusters that use SR-IOV on OpenStack . A performance profile template for clusters that use OVS-DPDK on OpenStack . 19.3.
Installing a cluster on OpenStack with customizations In OpenShift Container Platform version 4.11, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 19.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.11 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended host practices . You have the metadata service enabled in RHOSP. 19.3.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 19.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 19.3.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 19.3.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 19.3.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. 
After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 19.3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 19.3.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 19.3.5. Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name> Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. 
From a command line, apply the configuration: USD oc apply -f <storage_class_file_name> Example output storageclass.storage.k8s.io/custom-csi-storageclass created Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: "true" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3 1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. From a command line, apply the configuration: USD oc apply -f <pvc_file_name> Example output persistentvolumeclaim/csi-pvc-imageregistry created Replace the original persistent volume claim in the image registry configuration with the new claim: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]' Example output config.imageregistry.operator.openshift.io/cluster patched Over the next several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output ... status: ... managementState: Managed pvc: claim: csi-pvc-imageregistry ... Verify that the status of the PVC is Bound : USD oc get pvc -n openshift-image-registry csi-pvc-imageregistry Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m 19.3.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process.
The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 19.3.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . OpenShift Container Platform does not support application credentials. If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 19.3.8. Setting cloud provider options Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options based on the cloud configuration specification. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr.
For example: #... [LoadBalancer] use-octavia=true 1 lb-provider = "amphora" 2 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #... 1 This property enables Octavia integration. 2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 4 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.1 and 16.2, this feature is only available for the Amphora provider. 5 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 7 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.1 and 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 19.3.8.1. External load balancers that use pre-defined floating IP addresses Commonly, Red Hat OpenStack Platform (RHOSP) deployments disallow non-administrator users from creating specific floating IP addresses. If such a policy is in place and you use a floating IP address in your service specification, the cloud provider will fail to handle IP address assignment to load balancers. If you use an external cloud provider, you can avoid this problem by pre-creating a floating IP address and specifying it in your service specification. The in-tree cloud provider does not support this method. Alternatively, you can modify the RHOSP Networking service (Neutron) to allow non-administrator users to create specific floating IP addresses . Additional resources For more information about cloud provider configuration, see OpenStack cloud provider options . 19.3.9. 
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 19.3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. 
Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Installation configuration parameters section for more information about the available parameters. 19.3.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 19.3.11. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 19.3.11.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 19.2. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 19.3.11.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. 
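As a worked example of the network parameters described in Table 19.3 below, here is a sketch of a customized networking stanza. The values mirror the documented defaults, and the fragment file name is illustrative; replace the machineNetwork CIDR with the range your RHOSP machines subnet actually uses, then merge the stanza into install-config.yaml.

# Sketch: write a networking fragment for reference while editing
# install-config.yaml (a fragment only, not a complete configuration).
cat > networking-fragment.yaml <<'EOF'
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16
EOF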
Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 19.3. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 19.3.11.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 19.4. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform.
The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. 
Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 19.3.11.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 19.5. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . 
platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 19.3.11.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 19.6. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. 
For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 19.3.11.6. 
Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 19.3.11.7. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks.
You created an install-config.yaml file as part of the OpenShift Container Platform installation process. Procedure In the install-config.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor. Change the value of compute.platform.openstack.type to a bare metal flavor. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet. An example bare metal install-config.yaml file controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 ... compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 ... platform: openstack: machinesSubnet: <subnet_UUID> 3 ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. 3 If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet. Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 19.3.11.8. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 19.3.11.8.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. 
The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 19.3.11.8.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 19.3.11.9. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. 
apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 19.3.12. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 19.3.13.
Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 19.3.13.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 19.3.13.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own.
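The following sketch shows the relevant portion of an install-config.yaml file for an installation without floating IP addresses; the cloud name is a placeholder, and externalNetwork is left blank only because no external network is available:
platform:
  openstack:
    cloud: mycloud
    # apiFloatingIP and ingressFloatingIP are intentionally not defined
    externalNetwork: ""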
If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 19.3.14. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 19.3.15. 
Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 19.3.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 19.3.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 19.3.18. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 19.4. Installing a cluster on OpenStack with Kuryr In OpenShift Container Platform version 4.11, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP) that uses Kuryr SDN. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster. 19.4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users .
You verified that OpenShift Container Platform 4.11 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended host practices . 19.4.2. About Kuryr SDN Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object. The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where all of the following criteria are true: the RHOSP version is less than 16, and the deployment uses UDP services or a large number of TCP services on few hypervisors. Kuryr is also not recommended when the ovn-octavia Octavia driver is disabled and the deployment uses a large number of TCP services on few hypervisors. 19.4.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies use resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 19.7.
Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr Resource Value Floating IP addresses 3 - plus the expected number of Services of LoadBalancer type Ports 1500 - 1 needed per Pod Routers 1 Subnets 250 - 1 needed per Namespace/Project Networks 250 - 1 needed per Namespace/Project RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 250 - 1 needed per Service and per NetworkPolicy Security group rules 1000 Server groups 2 - plus 1 for each additional availability zone in each machine pool Load balancers 100 - 1 needed per Service Load balancer listeners 500 - 1 needed per Service-exposed port Load balancer pools 500 - 1 needed per Service-exposed port A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses port pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have an Overcloud with Octavia. Use the Neutron Trunk ports extension. Use the openvswitch firewall driver instead of ovs-hybrid if the ML2/OVS Neutron driver is used. 19.4.3.1. Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 19.4.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work.
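Although it is not part of the official procedure, you can confirm that the trunk extension is loaded by querying the Networking API; a quick check might look like this:
USD openstack extension list --network | grep -i trunk
If the output includes a trunk row, the extension is available for Kuryr to use.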
In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 19.4.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This may take some time depending on the speed of your network and Undercloud disk. Since an Octavia load balancer is used to access the OpenShift Container Platform API, you must increase their listeners' default timeouts for the connections. The default timeout is 50 seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud deploy command: (undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000 Note This is not needed for RHOSP 13.0.13+. Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . 
Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. In RHOSP versions earlier than 13.0.13, add the project ID to the octavia.conf configuration file after you create the project. To enforce network policies across services, like when traffic goes through the Octavia load balancer, you must ensure Octavia creates the Amphora VM security groups on the user project. This change ensures that required load balancer security groups belong to that project, and that they can be updated to enforce services isolation. Note This task is unnecessary in RHOSP version 13.0.13 or later. Octavia implements a new ACL API that restricts access to the load balancers VIP. Get the project ID: USD openstack project show <project> Example output +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+ Add the project ID to octavia.conf for the controllers. Source the stackrc file: USD source stackrc # Undercloud credentials List the Overcloud controllers: USD openstack server list Example output +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ SSH into the controller(s): USD ssh heat-admin@192.168.24.8 Edit the octavia.conf file to add the project into the list of projects where Amphora security groups are on the user's account. Restart the Octavia worker so the new configuration loads. controller-0USD sudo docker restart octavia_worker Note Depending on your RHOSP environment, Octavia might not support UDP listeners. If you use Kuryr SDN on RHOSP version 13.0.13 or earlier, UDP services are not supported. RHOSP version 16 or later supports UDP. 19.4.3.3.1. The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide.
It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. You can configure your cluster to use the Octavia OVN driver after your RHOSP cloud is upgraded from version 13 to version 16. 19.4.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Octavia RHOSP versions before 13.0.13 do not support UDP listeners. Therefore, OpenShift Container Platform UDP services are not supported. Octavia RHOSP versions before 13.0.13 cannot listen to multiple protocols on the same port. Services that expose the same port to different protocols, like TCP and UDP, are not supported. Kuryr SDN does not support automatic unidling by a service. RHOSP environment limitations There are limitations when using Kuryr SDN that depend on your deployment environment. Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the RHOSP version is earlier than 13.0.13, Kuryr forces pods to use TCP for DNS resolution. In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case, the native Go resolver does not recognize the use-vc option in resolv.conf , which controls whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails. To ensure that TCP forcing is allowed, compile applications either with the environment variable CGO_ENABLED set to 1 , i.e. CGO_ENABLED=1 , or ensure that the variable is absent. In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails. 
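For illustration only, a hypothetical build invocation that keeps CGO enabled, so that the resulting binary uses the C resolver and honors the use-vc option, might look like the following; the output name and package path are placeholders:
USD CGO_ENABLED=1 go build -o myapp ./cmd/myapp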
Note musl-based containers, including Alpine-based containers, do not support the use-vc option. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. Important If OpenShift Container Platform detects a new Octavia version that supports UDP load balancing, it recreates the DNS service automatically. The service recreation ensures that the service default supports UDP load balancing. The recreation causes the DNS service approximately one minute of downtime. 19.4.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 19.4.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 19.4.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 19.4.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 19.4.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. 
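The next paragraphs explain when Swift is used. To check whether the service exists in your cloud at all, you can list the identity service catalog and look for an object-store entry; this quick check assumes your RHOSP credentials are loaded:

USD openstack catalog list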
Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 19.4.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 19.4.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. 
You can also keep secrets in a separate file from clouds.yaml . OpenShift Container Platform does not support application credentials. If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 19.4.8. Setting cloud provider options Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options based on the cloud configuration specification. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... [LoadBalancer] use-octavia=true 1 lb-provider = "amphora" 2 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #... 1 This property enables Octavia integration. 2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 4 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.1 and 16.2, this feature is only available for the Amphora provider. 
5 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 7 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.1 and 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 19.4.8.1. External load balancers that use pre-defined floating IP addresses Commonly, Red Hat OpenStack Platform (RHOSP) deployments disallow non-administrator users from creating specific floating IP addresses. If such a policy is in place and you use a floating IP address in your service specification, the cloud provider will fail to handle IP address assignment to load balancers. If you use an external cloud provider, you can avoid this problem by pre-creating a floating IP address and specifying it in your service specification. The in-tree cloud provider does not support this method. Alternatively, you can modify the RHOSP Networking service (Neutron) to allow non-administrator users to create specific floating IP addresses . Additional resources For more information about cloud provider configuration, see OpenStack cloud provider options . 19.4.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. 
Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 19.4.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 19.4.10.1. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note Kuryr installations default to HTTP proxies. Prerequisites For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter: USD ip route add <cluster_network_cidr> via <installer_subnet_gateway> The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates. You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. 
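After the cluster is running, you can inspect the proxy configuration that results from these settings. This quick check is illustrative and is not part of the installation procedure itself:

USD oc get proxy/cluster -o yaml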
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 19.4.11. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 19.4.11.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 19.8. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer. platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"you@example.com" }, "quay.io":{ "auth":"b3Blb=", "email":"you@example.com" } } } 19.4.11.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 19.9. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation.
networking.networkType The cluster network provider Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 19.4.11.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 19.10. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. 
The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. 
The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 19.4.11.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 19.11. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 19.4.11.5. 
Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 19.12. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . 
controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 19.4.11.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it.
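As a minimal illustration of that property, the following install-config.yaml fragment pairs the subnet UUID with a matching machine network CIDR; the UUID and CIDR values shown are placeholders:

platform:
  openstack:
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24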
You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 19.4.11.7. Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default OpenShift Container Platform SDN installation steps. This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 3 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services. 19.4.11.8. 
Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 19.4.11.8.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 19.4.11.8.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. 
Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 19.4.11.9. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 19.4.11.10. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. 
Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-network-* Example output cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5 1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster. 19.4.12. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 19.4.13. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 19.4.13.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. 
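Before you create the FIPs, you can list the external networks that can supply them; this quick check assumes your clouds.yaml credentials are in place:

USD openstack network list --external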
Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc CLI tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 19.4.13.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file.
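For example, with assumed placeholder addresses, the /etc/hosts entries might look like the following sketch. Wildcard entries do not work in /etc/hosts, so add one line for each application host name that you need:
<api_port_IP> api.<cluster_name>.<base_domain>
<ingress_port_IP> console-openshift-console.apps.<cluster_name>.<base_domain>
<ingress_port_IP> oauth-openshift.apps.<cluster_name>.<base_domain>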
This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 19.4.14. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 19.4.15. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
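Optionally, you can confirm which API server the exported kubeconfig targets. This quick check assumes that the oc CLI is installed: USD oc whoami --show-server Example output (illustrative) https://api.mycluster.example.com:6443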
View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 19.4.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 19.4.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 19.4.18. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 19.5. Installing a cluster on OpenStack on your own infrastructure In OpenShift Container Platform version 4.11, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure lets you integrate your cluster with existing infrastructure and make modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 19.5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.11 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix .
You have an RHOSP account where you want to install OpenShift Container Platform. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended host practices . On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 19.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 19.5.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 19.13. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 19.5.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 19.5.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. 
Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 19.5.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 19.5.4. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 19.5.5. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. 
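Note If curl is not already present, you can typically install it from the RHEL BaseOS repository; the package name is assumed to be curl : USD sudo yum install curl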
Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 19.5.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 19.5.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 19.5.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.11 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 19.5.9. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 19.5.10.
Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 19.5.10.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc CLI tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 19.5.10.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance.
Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 19.5.11. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . OpenShift Container Platform does not support application credentials. If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 19.5.12. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. 19.5.13. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 19.5.13.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 19.14. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. 
The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer. platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"you@example.com" }, "quay.io":{ "auth":"b3Blb=", "email":"you@example.com" } } } 19.5.13.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 19.15. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format.
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 19.5.13.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 19.16. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . 
controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. 
Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 19.5.13.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 19.17. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 19.5.13.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 19.18. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . 
compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . 
You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 19.5.13.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation.
Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 19.5.13.7. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 19.5.13.8. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, for example 192.0.2.0/24 . To set the value manually, open the file and set the value of networking.machineNetwork to match your intended Neutron subnet. 19.5.13.9. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 19.5.13.10. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process.
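For illustration only, an RHOSP administrator might create a shared VLAN provider network and subnet with commands along these lines; the physical network name physnet1 , the segment ID 101 , and the address range are hypothetical values for your environment:

$ openstack network create --share --external \
    --provider-network-type vlan \
    --provider-physical-network physnet1 \
    --provider-segment 101 provider-net
$ openstack subnet create --network provider-net \
    --subnet-range 192.0.2.0/24 --dhcp provider-subnet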
RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 19.5.13.10.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 19.5.13.10.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. 
Important The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 19.5.14. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. 
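If you keep the compute machine set files, the reference that typically needs updating is the infrastructure ID that is embedded in the resource names and labels. One hypothetical way to rewrite it, where both placeholders are illustrative:

$ sed -i "s/<original_infra_id>/<your_infra_id>/g" \
    <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml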
Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 19.5.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . 
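If you prefer to capture the value directly, one possible approach is to query only the file field; the image name is whatever you chose in the previous step:

$ openstack image show <image_name> -f value -c file

The output is the v2/images/<image_ID>/file path that you combine with the image service's public address in a later step.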
Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 19.5.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 19.5.17. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". 
Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform postinstallation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" Optionally, you can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. 19.5.17.1. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. 
If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. An example bare metal inventory.yaml file all: hosts: localhost: ansible_connection: local ansible_python_interpreter: "{{ansible_playbook_python}}" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external' ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 19.5.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 19.5.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. 
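For example, if you created the Ignition files in another directory, a copy step might look like this; the source path is illustrative:

$ cp /path/to/ignition/files/${INFRA_ID}-master-{0,1,2}-ignition.json .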
On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.24.0 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 19.5.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 19.5.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 19.5.22. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml Next steps Approve the certificate signing requests for the machines. 19.5.23. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster.
Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 19.5.24. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ). Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 19.5.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 19.5.26. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 19.6. Installing a cluster on OpenStack with Kuryr on your own infrastructure In OpenShift Container Platform version 4.11, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 19.6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.11 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform.
You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended host practices . On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process, and Python 3. 19.6.2. About Kuryr SDN Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object. The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where either of the following sets of criteria is true: The RHOSP version is less than 16 and the deployment uses UDP services, or a large number of TCP services on few hypervisors; or the ovn-octavia Octavia driver is disabled and the deployment uses a large number of TCP services on few hypervisors. 19.6.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies use resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 19.19.
Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr:

Floating IP addresses: 3 - plus the expected number of Services of LoadBalancer type
Ports: 1500 - 1 needed per Pod
Routers: 1
Subnets: 250 - 1 needed per Namespace/Project
Networks: 250 - 1 needed per Namespace/Project
RAM: 112 GB
vCPUs: 28
Volume storage: 275 GB
Instances: 7
Security groups: 250 - 1 needed per Service and per NetworkPolicy
Security group rules: 1000
Server groups: 2 - plus 1 for each additional availability zone in each machine pool
Load balancers: 100 - 1 needed per Service
Load balancer listeners: 500 - 1 needed per Service-exposed port
Load balancer pools: 500 - 1 needed per Service-exposed port

A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses port pools to keep pre-created ports ready for pods, which speeds up pod boot time. Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use Neutron Trunk ports extension. Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid . 19.6.3.1. Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 19.6.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work.
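One way to confirm that the trunk extension is enabled is to query the networking API for its alias; a sketch:

$ openstack extension list --network -f value -c Alias | grep trunk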
In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 19.6.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This may take some time depending on the speed of your network and Undercloud disk. Since an Octavia load balancer is used to access the OpenShift Container Platform API, you must increase their listeners' default timeouts for the connections. The default timeout is 50 seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud deploy command: (undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000 Note This is not needed for RHOSP 13.0.13+. Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . 
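When the deployment finishes, one quick sanity check is to confirm that the Octavia API answers; this sketch assumes that an overcloudrc credentials file was generated for you during the Overcloud deployment:

$ source ~/overcloudrc
$ openstack loadbalancer list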
Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. In RHOSP versions earlier than 13.0.13, add the project ID to the octavia.conf configuration file after you create the project. To enforce network policies across services, like when traffic goes through the Octavia load balancer, you must ensure Octavia creates the Amphora VM security groups on the user project. This change ensures that required load balancer security groups belong to that project, and that they can be updated to enforce services isolation. Note This task is unnecessary in RHOSP version 13.0.13 or later. Octavia implements a new ACL API that restricts access to the load balancers VIP. Get the project ID USD openstack project show <project> Example output +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+ Add the project ID to octavia.conf for the controllers. Source the stackrc file: USD source stackrc # Undercloud credentials List the Overcloud controllers: USD openstack server list Example output +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | ID | Name | Status | Networks | Image | Flavor | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | │ | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ SSH into the controller(s). USD ssh [email protected] Edit the octavia.conf file to add the project into the list of projects where Amphora security groups are on the user's account. Restart the Octavia worker so the new configuration loads. controller-0USD sudo docker restart octavia_worker Note Depending on your RHOSP environment, Octavia might not support UDP listeners. If you use Kuryr SDN on RHOSP version 13.0.13 or earlier, UDP services are not supported. RHOSP version 16 or later support UDP. 19.6.3.3.1. The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. 
It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. 19.6.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Octavia RHOSP versions before 13.0.13 do not support UDP listeners. Therefore, OpenShift Container Platform UDP services are not supported. Octavia RHOSP versions before 13.0.13 cannot listen to multiple protocols on the same port. Services that expose the same port to different protocols, like TCP and UDP, are not supported. Kuryr SDN does not support automatic unidling by a service. RHOSP environment limitations There are limitations when using Kuryr SDN that depend on your deployment environment. Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the RHOSP version is earlier than 13.0.13, Kuryr forces pods to use TCP for DNS resolution. In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case, the native Go resolver does not recognize the use-vc option in resolv.conf , which controls whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails. To ensure that TCP forcing is allowed, compile applications either with the environment variable CGO_ENABLED set to 1 , i.e. CGO_ENABLED=1 , or ensure that the variable is absent. In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails. Note musl-based containers, including Alpine-based containers, do not support the use-vc option. 
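For example, a Go build that keeps CGO enabled, so that the compiled binary honors the use-vc option in resolv.conf , might look like this; the module path is hypothetical:

$ CGO_ENABLED=1 go build -o myapp ./cmd/myapp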
RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. Important If OpenShift Container Platform detects a new Octavia version that supports UDP load balancing, it recreates the DNS service automatically. The service recreation ensures that the service default supports UDP load balancing. The recreation causes the DNS service approximately one minute of downtime. 19.6.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 19.6.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 19.6.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 19.6.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 19.6.5. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. 
Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 19.6.6. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 19.6.7. 
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
USD tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
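After you extract the program, you can confirm that the binary runs and reports the release you intend to install. A quick, optional check (the output shown is illustrative only):
USD ./openshift-install version
Example output
openshift-install 4.11.0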
19.6.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key:
USD cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
USD cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task:
USD eval "USD(ssh-agent -s)"
Example output
Agent pid 31874
Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent :
USD ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 .
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 19.6.9. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.11 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter:
USD file <name_of_downloaded_file>
From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI:
USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos
Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process.
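If your Glance back end is Ceph, the image must be uploaded in raw format, as the preceding Important note states. A sketch of the conversion and upload, assuming qemu-img is installed and that the file name below matches your downloaded image:
USD qemu-img convert -f qcow2 -O raw rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos-USD{RHCOS_VERSION}-openstack.raw
USD openstack image create --container-format=bare --disk-format=raw --file rhcos-USD{RHCOS_VERSION}-openstack.raw rhcos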
19.6.10. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries. Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network:
USD openstack network list --long -c ID -c Name -c "Router Type"
Example output
+--------------------------------------+----------------+-------------+
| ID                                   | Name           | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
+--------------------------------------+----------------+-------------+
A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 19.6.11. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 19.6.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:
USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:
USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP:
USD openstack floating ip create --description "bootstrap machine" <external_network>
Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:
<api_floating_ip> api.<cluster_name>.<base_domain>
<application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
<application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
<application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>
The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.
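If you prefer to script these steps, you can capture each address as you create it and generate the matching /etc/hosts entries. A sketch only; the cluster name, base domain, and external network name shown are hypothetical values to replace with your own:
USD API_FIP=USD(openstack floating ip create --description "API mycluster.example.com" -f value -c floating_ip_address external)
USD INGRESS_FIP=USD(openstack floating ip create --description "Ingress mycluster.example.com" -f value -c floating_ip_address external)
USD echo "USDAPI_FIP api.mycluster.example.com" | sudo tee -a /etc/hosts
USD echo "USDINGRESS_FIP console-openshift-console.apps.mycluster.example.com" | sudo tee -a /etc/hosts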
Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 19.6.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example:
api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>
If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 19.6.12. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . OpenShift Container Platform does not support application credentials. If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: <username>
      password: <password>
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: <username>
      password: <password>
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
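Before you continue, you can confirm that the file is found and the credentials work. A quick, optional check against the shiftstack cloud defined above:
USD openstack --os-cloud shiftstack token issue
If the command returns a token, the clouds.yaml entry is usable by the installation program.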
If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:
clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a command line, run:
USD oc edit configmap -n openshift-config cloud-provider-config
Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 19.6.13. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command:
USD ./openshift-install create install-config --dir <installation_directory> 1
1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified.
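Because the installer consumes install-config.yaml , a simple way to keep a reusable copy before you continue (the backup file name is arbitrary):
USD cp install-config.yaml install-config.yaml.backup
Restore the backup into a fresh, empty directory for each additional cluster you install.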
19.6.14. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 19.6.14.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 19.20. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer. platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
{
  "auths":{
    "cloud.openshift.com":{
      "auth":"b3Blb=",
      "email":"you@example.com"
    },
    "quay.io":{
      "auth":"b3Blb=",
      "email":"you@example.com"
    }
  }
}
19.6.14.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 19.21. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN .
networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 19.6.14.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 19.22. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. 
compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 19.6.14.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 19.23. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 19.6.14.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 19.24. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. 
Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . 
An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration.
{
  "type": "ml.large",
  "rootVolume": {
    "size": 30,
    "type": "performance"
  }
}
platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf .
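To see how several of these optional properties combine in practice, consider the following install-config.yaml fragment. It is a sketch only; the volume size, zone names, and policy are hypothetical values that you would replace with your own:
controlPlane:
  platform:
    openstack:
      rootVolume:
        size: 30
        type: performance
      zones: ["zone-1", "zone-2"]
      serverGroupPolicy: anti-affinity
compute:
- name: worker
  platform:
    openstack:
      zones: ["zone-1", "zone-2"]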
19.6.14.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 19.6.14.7. Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default OpenShift Container Platform SDN installation steps. This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.
apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16 1
  networkType: Kuryr
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
    trunkSupport: true 2
    octaviaSupport: true 3
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 3 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services. 19.6.14.8. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet.
You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 19.6.14.8.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command:
USD openstack network create --project openshift <network_name>
To create a subnet for a project that is named "openshift," enter the following command:
USD openstack subnet create --project openshift --network <network_name> <subnet_name>
To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example:
USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ...
Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 19.6.14.8.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIP property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIP property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet.
Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIP and platform.openstack.ingressVIP properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network
...
platform:
  openstack:
    apiVIP: 192.0.2.13
    ingressVIP: 192.0.2.23
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
# ...
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24
Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 19.6.14.9. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 19.6.14.10. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files:
USD ./openshift-install create manifests --dir <installation_directory> 1
1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1
1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster.
After creating the file, several network configuration files are in the manifests/ directory, as shown:
USD ls <installation_directory>/manifests/cluster-network-*
Example output
cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml
Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want. Edit the settings to meet your requirements. The following file is provided as an example:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: Kuryr
    kuryrConfig:
      enablePortPoolsPrepopulation: false 1
      poolMinPorts: 1 2
      poolBatchPorts: 3 3
      poolMaxPorts: 5 4
      openstackServiceNetwork: 172.30.0.0/15 5
1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster.
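Because the installer consumes this manifest without interactive feedback, a quick syntax check can save a failed run. A minimal sketch using Python's YAML parser (assumes PyYAML is available, which the Ansible packages installed earlier provide):
USD python3 -c 'import yaml; yaml.safe_load(open("manifests/cluster-network-03-config.yml")); print("YAML OK")'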
19.6.14.11. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run:
USD python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
1 Insert a value that matches your intended Neutron subnet, for example 192.0.2.0/24 . To set the value manually, open the file and set the value of the cidr entry under networking.machineNetwork to a value that matches your intended Neutron subnet. 19.6.14.12. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run:
USD python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["compute"][0]["replicas"] = 0;
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 19.6.14.13. Modifying the network type By default, the installation program selects the OpenShiftSDN network type. To use Kuryr instead, change the value in the installation configuration file that the program generated. Prerequisites You have the file install-config.yaml that was generated by the OpenShift Container Platform installation program. Procedure In a command prompt, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run:
USD python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["networkType"] = "Kuryr";
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
To set the value manually, open the file and set networking.networkType to "Kuryr" .
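If you are applying all three changes (a custom machine network, zero compute replicas, and the Kuryr network type), you can combine them into a single pass. A sketch of the same approach as the scripts above; the CIDR shown is a placeholder for your own range:
USD python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["machineNetwork"] = [{"cidr": "192.0.2.0/24"}];
data["compute"][0]["replicas"] = 0;
data["networking"]["networkType"] = "Kuryr";
open(path, "w").write(yaml.dump(data, default_flow_style=False))'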
19.6.15. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:
USD ./openshift-install create manifests --dir <installation_directory> 1
1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets:
USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml
Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program:
USD ./openshift-install create ignition-configs --dir <installation_directory> 1
1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory.
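At this point, the installation directory typically contains output similar to the following. The exact listing can vary by release, so treat this as illustrative:
USD ls <installation_directory>
Example output
auth  bootstrap.ign  master.ign  metadata.json  worker.ign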
The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs:

import base64
import json
import os

with open('bootstrap.ign', 'r') as f:
    ignition = json.load(f)

files = ignition['storage'].get('files', [])

infra_id = os.environ.get('INFRA_ID', 'openshift').encode()
hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip()
files.append(
{
    'path': '/etc/hostname',
    'mode': 420,
    'contents': {
        'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64
    }
})

ca_cert_path = os.environ.get('OS_CACERT', '')
if ca_cert_path:
    with open(ca_cert_path, 'r') as f:
        ca_cert = f.read().encode()
    ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()

    files.append(
    {
        'path': '/opt/openshift/tls/cloud-ca-cert.pem',
        'mode': 420,
        'contents': {
            'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64
        }
    })

ignition['storage']['files'] = files

with open('bootstrap.ign', 'w') as f:
    json.dump(ignition, f)

Using the RHOSP CLI, create an image that uses the bootstrap Ignition file:

USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>

Get the image's details:

USD openstack image show <image_name>

Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address:

USD openstack catalog show image

Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID:

USD openstack token issue -c id -f value

Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values:

{
  "ignition": {
    "config": {
      "merge": [{
        "source": "<storage_url>", 1
        "httpHeaders": [{
          "name": "X-Auth-Token", 2
          "value": "<token_ID>" 3
        }]
      }]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [{
          "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4
        }]
      }
    },
    "version": "3.2.0"
  }
}

1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 19.6.17. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files".
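If you are resuming this procedure in a new shell session, one quick way to restore the variable is to read it back from the metadata file. The following guard is a sketch only; it assumes that metadata.json is in your current working directory and that the jq utility is installed:

USD test -n "USDINFRA_ID" || export INFRA_ID=USD(jq -r .infraID metadata.json)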
Procedure On a command line, run the following Python script:

USD for index in USD(seq 0 2); do
    MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n"
    python -c "import base64, json, sys;
ignition = json.load(sys.stdin);
storage = ignition.get('storage', {});
files = storage.get('files', []);
files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'});
storage['files'] = files;
ignition['storage'] = storage;
json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json"
done

You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 19.6.18. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml file:

Example external network value in the inventory.yaml Ansible inventory file

...
# The public network providing connectivity to the cluster. If not
# provided, the cluster external connectivity must be provided in another
# way.
# Required for os_api_fip, os_ingress_fip, os_bootstrap_fip.
os_external_network: 'external'
...

Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml file:

Example FIP values in the inventory.yaml Ansible inventory file

...
# OpenShift API floating IP address. If this value is non-empty, the
# corresponding floating IP will be attached to the Control Plane to
# serve the OpenShift API.
os_api_fip: '203.0.113.23'

# OpenShift Ingress floating IP address. If this value is non-empty, the
# corresponding floating IP will be attached to the worker nodes to serve
# the applications.
os_ingress_fip: '203.0.113.19'

# If this value is non-empty, the corresponding floating IP will be
# attached to the bootstrap machine. This is needed for collecting logs
# in case of install failure.
os_bootstrap_fip: '203.0.113.20'

Important If you do not define values for os_api_fip and os_ingress_fip , you must perform postinstallation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. On a command line, create security groups by running the security-groups.yaml playbook:

USD ansible-playbook -i inventory.yaml security-groups.yaml

On a command line, create a network, subnet, and router by running the network.yaml playbook:

USD ansible-playbook -i inventory.yaml network.yaml

Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command:

USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes"
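After the playbooks finish, it can be worth confirming that the expected resources exist before you continue. The following commands are a quick sketch of such a check; the subnet name assumes the default "USDINFRA_ID-nodes" naming that the DNS command above also uses:

USD openstack subnet show "USDINFRA_ID-nodes" -c id -c cidr
USD openstack router list

19.6.19.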
Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 19.6.20. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.24.0 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 19.6.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 19.6.22. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. 
If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook:

USD ansible-playbook -i inventory.yaml down-bootstrap.yaml

The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 19.6.23. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook:

USD ansible-playbook -i inventory.yaml compute-nodes.yaml

Next steps Approve the certificate signing requests for the machines. 19.6.24. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines:

USD oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.24.0
master-1   Ready    master   63m   v1.24.0
master-2   Ready    master   64m   v1.24.0

The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

USD oc get csr

Example output

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...

In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved automatically, approve them after all of the pending CSRs for the machines that you added are in the Pending status: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.
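On user-provisioned infrastructure such as this RHOSP deployment, some teams script the approval step rather than run it by hand. The loop below is a minimal sketch for disposable lab clusters only: it blanket-approves every pending CSR and performs none of the identity checks that the following note describes, so do not use it as-is in production:

USD while true; do
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
      | xargs --no-run-if-empty oc adm certificate approve
    sleep 60
done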
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR:

USD oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command:

USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

USD oc get csr

Example output

NAME        AGE     REQUESTOR                                                 CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal     Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal     Pending
...

If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR:

USD oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command:

USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

USD oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.24.0
master-1   Ready    master   73m   v1.24.0
master-2   Ready    master   74m   v1.24.0
worker-0   Ready    worker   11m   v1.24.0
worker-1   Ready    worker   11m   v1.24.0

Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 19.6.25. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ). Procedure On a command line, enter:

USD openshift-install --log-level debug wait-for install-complete

The program outputs the console URL, as well as the administrator's login information. 19.6.26. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console .
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 19.6.27. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 19.7. Installing a cluster on OpenStack in a restricted network In OpenShift Container Platform 4.11, you can install a cluster on Red Hat OpenStack Platform (RHOSP) in a restricted network by creating an internal mirror of the installation release content. 19.7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.11 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended host practices . You have the metadata service enabled in RHOSP. 19.7.2. About installations in restricted networks In OpenShift Container Platform 4.11, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 19.7.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
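The first of these limits is easy to observe directly on a running cluster. For example, the following command is a quick sketch of such a check; it assumes the oc CLI and access to the cluster:

USD oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type=="RetrievedUpdates")].message}{"\n"}'

19.7.3.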
Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:

Table 19.25. Recommended resources for a default OpenShift Container Platform cluster on RHOSP

Floating IP addresses: 3
Ports: 15
Routers: 1
Subnets: 1
RAM: 88 GB
vCPUs: 22
Volume storage: 275 GB
Instances: 7
Security groups: 3
Security group rules: 60
Server groups: 2, plus 1 for each additional availability zone in each machine pool

A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 19.7.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 19.7.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 19.7.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 19.7.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access.
Before you update the cluster, you update the content of the mirror registry. 19.7.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift:

USD openstack role add --user <user> --project <project> swiftoperator

Your RHOSP deployment can now use Swift for the image registry. 19.7.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . OpenShift Container Platform does not support application credentials. If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation.

clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: <username>
      password: <password>
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: <username>
      password: <password>
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'

If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"

Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run:

USD oc edit configmap -n openshift-config cloud-provider-config
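Before you continue, it can save a failed installation attempt to confirm that the file parses and lists the cloud that you intend to use. The following one-liner is a sketch only; it assumes Python with the PyYAML module and a clouds.yaml file in the current directory:

USD python -c 'import yaml; print(list(yaml.safe_load(open("clouds.yaml"))["clouds"]))'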
Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 19.7.7. Setting cloud provider options Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command:

USD openshift-install --dir <destination_directory> create manifests

In a text editor, open the cloud-provider configuration manifest file. For example:

USD vi openshift/manifests/cloud-provider-config.yaml

Modify the options based on the cloud configuration specification. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example:

#...
[LoadBalancer]
use-octavia=true 1
lb-provider = "amphora" 2
floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3
create-monitor = True 4
monitor-delay = 10s 5
monitor-timeout = 10s 6
monitor-max-retries = 1 7
#...

1 This property enables Octavia integration. 2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 4 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.1 and 16.2, this feature is only available for the Amphora provider. 5 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 7 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.1 and 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services.
There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run:

USD oc edit configmap -n openshift-config cloud-provider-config

After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 19.7.7.1. External load balancers that use pre-defined floating IP addresses Commonly, Red Hat OpenStack Platform (RHOSP) deployments disallow non-administrator users from creating specific floating IP addresses. If such a policy is in place and you use a floating IP address in your service specification, the cloud provider will fail to handle IP address assignment to load balancers. If you use an external cloud provider, you can avoid this problem by pre-creating a floating IP address and specifying it in your service specification. The in-tree cloud provider does not support this method. Alternatively, you can modify the RHOSP Networking service (Neutron) to allow non-administrator users to create specific floating IP addresses . Additional resources For more information about cloud provider configuration, see OpenStack cloud provider options . 19.7.8. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network Red Hat OpenStack Platform (RHOSP) environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.11 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) image. Decompress the image. Note You must decompress the image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter:

USD file <downloaded_image_file>

Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance. For example:

USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-<version>-openstack.x86_64.qcow2 <image_name>

Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 19.7.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.
Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible location. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command:

USD ./openshift-install create install-config --dir <installation_directory> 1

1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . In the install-config.yaml file, set the value of platform.openstack.clusterOSImage to the image location or name. For example:

platform:
  openstack:
    clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d

Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry:

pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}'

For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value.

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----

The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry.
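If you prefer to script this edit in the same style as the earlier install-config.yaml snippets, the following is a minimal sketch; the registry-ca.pem file name is a placeholder for your mirror registry certificate, and yaml.dump might emit the certificate as a quoted scalar rather than a block scalar, which is still valid YAML:

USD python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["additionalTrustBundle"] = open("registry-ca.pem").read();
open(path, "w").write(yaml.dump(data, default_flow_style=False))'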
Add the image content resources, which resemble the following YAML excerpt:

imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.redhat.io/ocp/release

For these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 19.7.9.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note Kuryr installations default to HTTP proxies. Prerequisites For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter:

USD ip route add <cluster_network_cidr> via <installer_subnet_gateway>

The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates. You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections.
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

USD ./openshift-install wait-for install-complete --log-level debug

Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 19.7.9.2. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 19.7.9.2.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 19.26. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer. platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
  "auths":{
    "cloud.openshift.com":{
      "auth":"b3Blb=",
      "email":"[email protected]"
    },
    "quay.io":{
      "auth":"b3Blb=",
      "email":"[email protected]"
    }
  }
}
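Taken together, the required parameters form a skeleton like the following. This is a sketch only, not a complete working file; every value is a placeholder, and a real file normally carries the platform and networking customizations that the next sections describe:

apiVersion: v1
baseDomain: example.com
metadata:
  name: dev
platform:
  openstack: {}
pullSecret: '{"auths": ...}'

19.7.9.2.2.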
Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 19.27. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 19.7.9.2.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 19.28. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components.
String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. 
alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 19.7.9.2.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 19.29. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. 
If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 19.7.9.2.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 19.30. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . 
controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. 
Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 19.7.9.3. Sample customized install-config.yaml file for restricted OpenStack installations This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 19.7.10. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. 
Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 19.7.11. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 19.7.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. 
This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 19.7.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 19.7.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 19.7.13. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 19.7.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 19.7.15. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. 
Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 19.7.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 19.7.17. steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 19.8. OpenStack cloud configuration reference guide A cloud provider configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). Use the following parameters in a cloud-provider configuration manifest file to configure your cluster. 19.8.1. OpenStack cloud provider options The cloud provider configuration, typically stored as a file named cloud.conf , controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). You can create a valid cloud.conf file by specifying the following options in it. 19.8.1.1. Global options The following options are used for RHOSP CCM authentication with the RHOSP Identity service, also known as Keystone. They are similiar to the global options that you can set by using the openstack CLI. Option Description auth-url The RHOSP Identity service URL. For example, http://128.110.154.166/identity . ca-file Optional. The CA certificate bundle file for communication with the RHOSP Identity service. If you use the HTTPS protocol with The Identity service URL, this option is required. domain-id The Identity service user domain ID. Leave this option unset if you are using Identity service application credentials. domain-name The Identity service user domain name. This option is not required if you set domain-id . tenant-id The Identity service project ID. Leave this option unset if you are using Identity service application credentials. In version 3 of the Identity API, which changed the identifier tenant to project , the value of tenant-id is automatically mapped to the project construct in the API. 
tenant-name The Identity service project name. username The Identity service user name. Leave this option unset if you are using Identity service application credentials. password The Identity service user password. Leave this option unset if you are using Identity service application credentials. region The Identity service region name. trust-id The Identity service trust ID. A trust represents the authorization of a user, or trustor, to delegate roles to another user, or trustee. Optionally, a trust authorizes the trustee to impersonate the trustor. You can find available trusts by querying the /v3/OS-TRUST/trusts endpoint of the Identity service API. 19.8.1.2. Load balancer options The cloud provider supports several load balancer options for deployments that use Octavia. Option Description use-octavia Whether or not to use Octavia for the LoadBalancer type of the service implementation rather than Neutron-LBaaS. The default value is true . floating-network-id Optional. The external network used to create floating IP addresses for load balancer virtual IP addresses (VIPs). If there are multiple external networks in the cloud, this option must be set or the user must specify loadbalancer.openstack.org/floating-network-id in the service annotation. lb-method The load balancing algorithm used to create the load balancer pool. For the Amphora provider the value can be ROUND_ROBIN , LEAST_CONNECTIONS , or SOURCE_IP . The default value is ROUND_ROBIN . For the OVN provider, only the SOURCE_IP_PORT algorithm is supported. For the Amphora provider, if using the LEAST_CONNECTIONS or SOURCE_IP methods, configure the create-monitor option as true in the cloud-provider-config config map on the openshift-config namespace and ETP:Local on the load-balancer type service to allow balancing algorithm enforcement in the client to service endpoint connections. lb-provider Optional. Used to specify the provider of the load balancer, for example, amphora or octavia . Only the Amphora and Octavia providers are supported. lb-version Optional. The load balancer API version. Only "v2" is supported. subnet-id The ID of the Networking service subnet on which load balancer VIPs are created. create-monitor Whether or not to create a health monitor for the service load balancer. A health monitor is required for services that declare externalTrafficPolicy: Local . The default value is false . This option is unsupported if you use RHOSP earlier than version 17 with the ovn provider. monitor-delay The interval in seconds by which probes are sent to members of the load balancer. The default value is 5 . monitor-max-retries The number of successful checks that are required to change the operating status of a load balancer member to ONLINE . The valid range is 1 to 10 , and the default value is 1 . monitor-timeout The time in seconds that a monitor waits to connect to the back end before it times out. The default value is 3 . 19.8.1.3. Metadata options Option Description search-order This configuration key affects the way that the provider retrieves metadata that relates to the instances in which it runs. The default value of configDrive,metadataService results in the provider retrieving instance metadata from the configuration drive first if available, and then the metadata service. Alternative values are: configDrive : Only retrieve instance metadata from the configuration drive. metadataService : Only retrieve instance metadata from the metadata service. 
metadataService,configDrive : Retrieve instance metadata from the metadata service first if available, and then retrieve instance metadata from the configuration drive. 19.9. Uninstalling a cluster on OpenStack You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP). 19.9.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure On the computer that you used to install the cluster, go to the directory that contains the installation program, and run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 19.10. Uninstalling a cluster on RHOSP from your own infrastructure You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP) on user-provisioned infrastructure. 19.10.1. Downloading playbook dependencies The Ansible playbooks that simplify the removal process on user-provisioned infrastructure require several Python modules. On the machine where you will run the process, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 19.10.2. Removing a cluster from RHOSP that uses your own infrastructure You can remove an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) that uses your own infrastructure. To complete the removal process quickly, run several Ansible playbooks. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies." You have the playbooks that you used to install the cluster. 
You modified the playbooks that are prefixed with down- to reflect any changes that you made to their corresponding installation playbooks. For example, changes to the bootstrap.yaml file are reflected in the down-bootstrap.yaml file. All of the playbooks are in a common directory. Procedure On a command line, run the playbooks that you downloaded: USD ansible-playbook -i inventory.yaml \ down-bootstrap.yaml \ down-control-plane.yaml \ down-compute-nodes.yaml \ down-load-balancers.yaml \ down-network.yaml \ down-security-groups.yaml Remove any DNS record changes you made for the OpenShift Container Platform installation. OpenShift Container Platform is removed from your infrastructure. | [
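As an optional final check (a minimal sketch, not part of the documented procedure), you can search for leftover resources that are tagged with the cluster's infrastructure ID. Assuming the INFRA_ID environment variable is still set from the installation workflow, or re-exported with USD export INFRA_ID=USD(jq -r .infraID metadata.json) , commands such as USD openstack server list | grep "USDINFRA_ID" and USD openstack port list | grep "USDINFRA_ID" should return no output once removal is complete.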
"#!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog=\"USD(mktemp)\" san=\"USD(mktemp)\" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface==\"public\") | [USDname, .interface, .url] | join(\" \")' | sort > \"USDcatalog\" while read -r name interface url; do # Ignore HTTP if [[ USD{url#\"http://\"} != \"USDurl\" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#\"https://\"} # If the schema was not HTTPS, error if [[ \"USDnoschema\" == \"USDurl\" ]]; then echo \"ERROR (unknown schema): USDname USDinterface USDurl\" exit 2 fi # Remove the path and only keep host and port noschema=\"USD{noschema%%/*}\" host=\"USD{noschema%%:*}\" port=\"USD{noschema##*:}\" # Add the port if was implicit if [[ \"USDport\" == \"USDhost\" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName > \"USDsan\" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ \"USD(grep -c \"Subject Alternative Name\" \"USDsan\" || true)\" -gt 0 ]]; then echo \"PASS: USDname USDinterface USDurl\" else invalid=USD((invalid+1)) echo \"INVALID: USDname USDinterface USDurl\" fi done < \"USDcatalog\" clean up temporary files rm \"USDcatalog\" \"USDsan\" if [[ USDinvalid -gt 0 ]]; then echo \"USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field.\" exit 1 else echo \"All HTTPS certificates for this cloud are valid.\" fi",
"x509: certificate relies on legacy Common Name field, use SANs instead",
"openstack catalog list",
"host=<host_name>",
"port=<port_number>",
"openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName",
"X509v3 Subject Alternative Name: DNS:your.host.example.net",
"x509: certificate relies on legacy Common Name field, use SANs instead",
"openstack network create radio --provider-physical-network radio --provider-network-type flat --external",
"openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external",
"openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio",
"openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink",
"openstack role add --user <user> --project <project> swiftoperator",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3",
"./openshift-install wait-for install-complete --log-level debug",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>",
"(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml",
"- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787",
"(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose",
"(undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml",
"openstack project show <project>",
"+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+",
"source stackrc # Undercloud credentials",
"openstack server list",
"+--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | ID | Name | Status | Networks | Image | Flavor | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | │ | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+",
"ssh [email protected]",
"List of project IDs that are allowed to have Load balancer security groups belonging to them. amp_secgroup_allowed_projects = PROJECT_ID",
"controller-0USD sudo docker restart octavia_worker",
"openstack loadbalancer provider list",
"+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+",
"openstack role add --user <user> --project <project> swiftoperator",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"ip route add <cluster_network_cidr> via <installer_subnet_gateway>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-network-03-config.yml 1",
"ls <installation_directory>/manifests/cluster-network-*",
"cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-containers.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.24.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0",
"openshift-install --log-level debug wait-for install-complete",
"sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>",
"(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml",
"- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787",
"(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose",
"(undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml",
"openstack project show <project>",
"+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+",
"source stackrc # Undercloud credentials",
"openstack server list",
"+--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | ID | Name | Status | Networks | Image | Flavor | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | │ | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+",
"ssh [email protected]",
"List of project IDs that are allowed to have Load balancer security groups belonging to them. amp_secgroup_allowed_projects = PROJECT_ID",
"controller-0USD sudo docker restart octavia_worker",
"openstack loadbalancer provider list",
"+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/openstack/down-containers.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIP: 192.0.2.13 ingressVIP: 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-network-03-config.yml 1",
"ls <installation_directory>/manifests/cluster-network-*",
"cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"networkType\"] = \"Kuryr\"; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.24.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0",
"openshift-install --log-level debug wait-for install-complete",
"openstack role add --user <user> --project <project> swiftoperator",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"file <name_of_downloaded_file>",
"openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"ip route add <cluster_network_cidr> via <installer_subnet_gateway>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk",
"sudo alternatives --set python /usr/bin/python3",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml down-control-plane.yaml down-compute-nodes.yaml down-load-balancers.yaml down-network.yaml down-security-groups.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/installing/installing-on-openstack |
3.2.3. Autotools Documentation | 3.2.3. Autotools Documentation Red Hat Enterprise Linux 6 includes man pages for autoconf , automake , autoscan and most tools included in the Autotools suite. In addition, the Autotools community provides extensive documentation on autoconf and automake on the following websites: http://www.gnu.org/software/autoconf/manual/autoconf.html http://www.gnu.org/software/autoconf/manual/automake.html The following is an online book describing the use of Autotools. Although the above online documentation is the recommended and most up to date information on Autotools, this book is a good alternative and introduction. http://sourceware.org/autobook/ For information on how to create Autotools input files, see: http://www.gnu.org/software/autoconf/manual/autoconf.html#Making-configure-Scripts http://www.gnu.org/software/autoconf/manual/automake.html#Invoking-Automake The following upstream example also illustrates the use of Autotools in a simple hello program: http://www.gnu.org/software/hello/manual/hello.html | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/cmd-autotools-upstreamdocs |
Chapter 9. Streams | Chapter 9. Streams You may want to process a subset or all data in the cache to produce a result. This may bring thoughts of Map Reduce. Data Grid allows the user to do something very similar but utilizes the standard JRE APIs to do so. Java 8 introduced the concept of a Stream which allows functional-style operations on collections rather than having to procedurally iterate over the data yourself. Stream operations can be implemented in a fashion very similar to MapReduce. Streams, just like MapReduce allow you to perform processing upon the entirety of your cache, possibly a very large data set, but in an efficient way. Note Streams are the preferred method when dealing with data that exists in the cache because streams automatically adjust to cluster topology changes. Also since we can control how the entries are iterated upon we can more efficiently perform the operations in a cache that is distributed if you want it to perform all of the operations across the cluster concurrently. A stream is retrieved from the entrySet , keySet or values collections returned from the Cache by invoking the stream or parallelStream methods. 9.1. Common stream operations This section highlights various options that are present irrespective of what type of underlying cache you are using. 9.2. Key filtering It is possible to filter the stream so that it only operates upon a given subset of keys. This can be done by invoking the filterKeys method on the CacheStream . This should always be used over a Predicate filter and will be faster if the predicate was holding all keys. If you are familiar with the AdvancedCache interface you may be wondering why you even use getAll over this keyFilter. There are some small benefits (mostly smaller payloads) to using getAll if you need the entries as is and need them all in memory in the local node. However if you need to do processing on these elements a stream is recommended since you will get both distributed and threaded parallelism for free. 9.3. Segment based filtering Note This is an advanced feature and should only be used with deep knowledge of Data Grid segment and hashing techniques. These segments based filtering can be useful if you need to segment data into separate invocations. This can be useful when integrating with other tools such as Apache Spark . This option is only supported for replicated and distributed caches. This allows the user to operate upon a subset of data at a time as determined by the KeyPartitioner . The segments can be filtered by invoking filterKeySegments method on the CacheStream . This is applied after the key filter but before any intermediate operations are performed. 9.4. Local/Invalidation A stream used with a local or invalidation cache can be used just the same way you would use a stream on a regular collection. Data Grid handles all of the translations if necessary behind the scenes and works with all of the more interesting options (ie. storeAsBinary and a cache loader). Only data local to the node where the stream operation is performed will be used, for example invalidation only uses local entries. 9.5. Example The code below takes a cache and returns a map with all the cache entries whose values contain the string "JBoss" Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains("JBoss")) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); 9.6. Distribution/Replication/Scattered This is where streams come into their stride. 
When a stream operation is performed it will send the various intermediate and terminal operations to each node that has pertinent data. This allows processing the intermediate values on the nodes owning the data, and only sending the final results back to the originating nodes, improving performance. 9.6.1. Rehash Aware Internally the data is segmented and each node only performs the operations upon the data it owns as a primary owner. This allows for data to be processed evenly, assuming segments are granular enough to provide for equal amounts of data on each node. When you are utilizing a distributed cache, the data can be reshuffled between nodes when a new node joins or leaves. Distributed Streams handle this reshuffling of data automatically so you don't have to worry about monitoring when nodes leave or join the cluster. Reshuffled entries may be processed a second time, and we keep track of the processed entries at the key level or at the segment level (depending on the terminal operation) to limit the amount of duplicate processing. It is possible but highly discouraged to disable rehash awareness on the stream. This should only be considered if your request can handle only seeing a subset of data if a rehash occurs. This can be done by invoking CacheStream.disableRehashAware() The performance gain for most operations when a rehash doesn't occur is completely negligible. The only exceptions are for iterator and forEach, which will use less memory, since they do not have to keep track of processed keys. Warning Please rethink disabling rehash awareness unless you really know what you are doing. 9.6.2. Serialization Since the operations are sent across to other nodes they must be serializable by Data Grid marshalling. This allows the operations to be sent to the other nodes. The simplest way is to use a CacheStream instance and use a lambda just as you would normally. Data Grid overrides all of the various Stream intermediate and terminal methods to take Serializable versions of the arguments (ie. SerializableFunction, SerializablePredicate... ) You can find these methods at CacheStream . This relies on the spec to pick the most specific method as defined here . In our example we used a Collector to collect all the results into a Map . Unfortunately the Collectors class doesn't produce Serializable instances. Thus if you need to use these, there are two ways to do so: One option would be to use the CacheCollectors class which allows for a Supplier<Collector> to be provided. This instance could then use the Collectors to supply a Collector which is not serialized. Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains("Jboss")) .collect(CacheCollectors.serializableCollector(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue))); Alternatively, you can avoid the use of CacheCollectors and instead use the overloaded collect methods that take Supplier<Collector> . These overloaded collect methods are only available via CacheStream interface. Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains("Jboss")) .collect(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); If however you are not able to use the Cache and CacheStream interfaces you cannot utilize Serializable arguments and you must instead cast the lambdas to be Serializable manually by casting the lambda to multiple interfaces. It is not a pretty sight but it gets the job done. 
Map<Object, String> jbossValues = map.entrySet().stream() .filter((Serializable & Predicate<Map.Entry<Object, String>>) e -> e.getValue().contains("Jboss")) .collect(CacheCollectors.serializableCollector(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue))); The recommended and most performant way is to use an AdvancedExternalizer as this provides the smallest payload. Unfortunately this means you cannot use lambdas as advanced externalizers require defining the class beforehand. You can use an advanced externalizer as shown below: Map<Object, String> jbossValues = cache.entrySet().stream() .filter(new ContainsFilter("Jboss")) .collect(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); class ContainsFilter implements Predicate<Map.Entry<Object, String>> { private final String target; ContainsFilter(String target) { this.target = target; } @Override public boolean test(Map.Entry<Object, String> e) { return e.getValue().contains(target); } } class JbossFilterExternalizer implements AdvancedExternalizer<ContainsFilter> { @Override public Set<Class<? extends ContainsFilter>> getTypeClasses() { return Util.asSet(ContainsFilter.class); } @Override public Integer getId() { return CUSTOM_ID; } @Override public void writeObject(ObjectOutput output, ContainsFilter object) throws IOException { output.writeUTF(object.target); } @Override public ContainsFilter readObject(ObjectInput input) throws IOException, ClassNotFoundException { return new ContainsFilter(input.readUTF()); } } You could also use an advanced externalizer for the collector supplier to reduce the payload size even further. Map<Object, String> map = (Map<Object, String>) cache.entrySet().stream() .filter(new ContainsFilter("Jboss")) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); class ToMapCollectorSupplier<K, U> implements Supplier<Collector<Map.Entry<K, U>, ?, Map<K, U>>> { static final ToMapCollectorSupplier INSTANCE = new ToMapCollectorSupplier(); private ToMapCollectorSupplier() { } @Override public Collector<Map.Entry<K, U>, ?, Map<K, U>> get() { return Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue); } } class ToMapCollectorSupplierExternalizer implements AdvancedExternalizer<ToMapCollectorSupplier> { @Override public Set<Class<? extends ToMapCollectorSupplier>> getTypeClasses() { return Util.asSet(ToMapCollectorSupplier.class); } @Override public Integer getId() { return CUSTOM_ID; } @Override public void writeObject(ObjectOutput output, ToMapCollectorSupplier object) throws IOException { } @Override public ToMapCollectorSupplier readObject(ObjectInput input) throws IOException, ClassNotFoundException { return ToMapCollectorSupplier.INSTANCE; } } 9.7. Parallel Computation Distributed streams by default try to parallelize as much as possible. It is possible for the end user to control this and actually they always have to control one of the options. There are 2 ways these streams are parallelized. Local to each node When a stream is created from the cache collection the end user can choose between invoking the stream or parallelStream method. Choosing the parallel stream enables multiple threads for each node locally. Note that some operations like a rehash aware iterator and forEach operations will always use a sequential stream locally. This could be enhanced at some point to allow for parallel streams locally.
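To make the local choice concrete, here is a short sketch that is not part of the original chapter; the cache reference and the filter string are assumptions used purely for illustration, and the collect overload is the Supplier-based one shown earlier in this chapter.
// Sequential locally: each node processes its owned segments on a single thread,
// but the work is still distributed across the cluster.
Map<Object, String> sequentialResult = cache.entrySet().stream()
      .filter(e -> e.getValue().contains("JBoss"))
      .collect(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
// Parallel locally: each node may additionally use multiple threads for its own portion of the data.
Map<Object, String> parallelResult = cache.entrySet().parallelStream()
      .filter(e -> e.getValue().contains("JBoss"))
      .collect(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
Either form is distributed across the cluster; only the threading used on each node differs.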
Users should be careful when using local parallelism, as it is only faster when there are a large number of entries or the operations are computationally expensive. Also it should be noted that if a user uses a parallel stream with forEach, the action should not block as this would be executed on the common pool, which is normally reserved for computation operations. Remote requests When there are multiple nodes it may be desirable to control whether the remote requests are all processed at the same time concurrently or one at a time. By default all terminal operations except the iterator perform concurrent requests. The iterator method, to reduce overall memory pressure on the local node, only performs sequential requests, which actually performs slightly better. If a user wishes to change this default however they can do so by invoking the sequentialDistribution or parallelDistribution methods on the CacheStream . 9.8. Task timeout It is possible to set a timeout value for the operation requests. This timeout is used only for remote requests timing out and it is on a per request basis. The former means the local execution will not timeout and the latter means if you have a failover scenario as described above the subsequent requests each have a new timeout. If no timeout is specified it uses the replication timeout as a default timeout. You can set the timeout in your task by doing the following: CacheStream<Map.Entry<Object, String>> stream = cache.entrySet().stream(); stream.timeout(1, TimeUnit.MINUTES); For more information about this, please check the timeout javadoc. 9.9. Injection The Stream has a terminal operation called forEach which allows for running some sort of side effect operation on the data. In this case it may be desirable to get a reference to the Cache that is backing this Stream. If your Consumer implements the CacheAware interface, the injectCache method will be invoked before the accept method from the Consumer interface. 9.10. Distributed Stream execution Distributed streams execution works in a fashion very similar to map reduce. Except in this case we are sending zero to many intermediate operations (map, filter etc.) and a single terminal operation to the various nodes. The operation basically comes down to the following: The desired segments are grouped by which node is the primary owner of the given segment A request is generated to send to each remote node that contains the intermediate and terminal operations including which segments it should process The terminal operation will be performed locally if necessary Each remote node will receive this request and run the operations and subsequently send the response back The local node will then gather the local response and remote responses together performing any kind of reduction required by the operations themselves. Final reduced response is then returned to the user In most cases all operations are fully distributed, as in the operations are all fully applied on each remote node and usually only the last operation or something related may be reapplied to reduce the results from multiple nodes. One important note is that intermediate values do not actually have to be serializable, it is the last value sent back that is the part desired (exceptions for various operations will be highlighted below). Terminal operator distributed result reductions The following paragraphs describe how the distributed reductions work for the various terminal operators.
Some of these are special in that an intermediate value may be required to be serializable instead of the final result. allMatch noneMatch anyMatch The allMatch operation is run on each node and then all the results are logically anded together locally to get the appropriate value. The noneMatch and anyMatch operations use a logical or instead. These methods also have early termination support, stopping remote and local operations once the final result is known. collect The collect method is interesting in that it can do a few extra steps. The remote node performs everything as normal except it doesn't perform the final finisher upon the result and instead sends back the fully combined results. The local thread then combines the remote and local result into a value which is then finally finished. The key here to remember is that the final value doesn't have to be serializable but rather the values produced from the supplier and combiner methods. count The count method just adds the numbers together from each node. findAny findFirst The findAny operation returns just the first value it finds, whether it was from a remote node or locally. Note this supports early termination in that once a value is found it will not process others. Note the findFirst method is special since it requires a sorted intermediate operation, which is detailed in the exceptions section. max min The max and min methods find the respective min or max value on each node then a final reduction is performed locally to ensure only the min or max across all nodes is returned. reduce The various reduce methods 1 , 2 , 3 will end up serializing the result as much as the accumulator can do. Then it will accumulate the local and remote results together locally, before combining if you have provided that. Note this means a value coming from the combiner doesn't have to be Serializable. 9.11. Key based rehash aware operators The iterator , spliterator and forEach are unlike the other terminal operators in that the rehash awareness has to keep track of what keys per segment have been processed instead of just segments. This is to guarantee an exactly once (iterator & spliterator) or at least once behavior (forEach) even under cluster membership changes. The iterator and spliterator operators when invoked on a remote node will return back batches of entries, where the batch is only sent back after the last has been fully consumed. This batching is done to limit how many entries are in memory at a given time. The user node will hold onto which keys it has processed and when a given segment is completed it will release those keys from memory. This is why sequential processing is preferred for the iterator method, so only a subset of segment keys are held in memory at once, instead of from all nodes. The forEach() method also returns batches, but it returns a batch of keys after it has finished processing at least a batch worth of keys. This way the originating node can know what keys have been processed already to reduce chances of processing the same entry again. Unfortunately this means it is possible to have an at least once behavior when a node goes down unexpectedly. In this case that node could have been processing a batch and not yet completed one and those entries that were processed but not in a completed batch will be run again when the rehash failure operation occurs. Note that adding a node will not cause this issue as the rehash failover doesn't occur until all responses are received.
The batch sizes of both of these operations are controlled by the same value, which can be configured by invoking the distributedBatchSize method on the CacheStream . This value will default to the chunkSize configured in state transfer. Unfortunately this value is a tradeoff between memory usage, performance, and at-least-once semantics, and your mileage may vary. Using iterator with replicated and distributed caches When a node is the primary or backup owner of all requested segments for a distributed stream, Data Grid performs the iterator or spliterator terminal operations locally, which optimizes performance as remote iterations are more resource intensive. This optimization applies to both replicated and distributed caches. However, Data Grid performs iterations remotely when using cache stores that are both shared and have write-behind enabled. In this case performing the iterations remotely ensures consistency. 9.12. Intermediate operation exceptions There are some intermediate operations that have special exceptions; these are skip , peek , sorted , and distinct . All of these methods have some sort of artificial iterator implanted in the stream processing to guarantee correctness, and they are documented below. Note this means these operations may cause possibly severe performance degradation. Skip An artificial iterator is implanted up to the intermediate skip operation. Then results are brought locally so it can skip the appropriate number of elements. Sorted WARNING: This operation requires having all entries in memory on the local node. An artificial iterator is implanted up to the intermediate sorted operation. All results are sorted locally. There are possible plans to have a distributed sort which returns batches of elements, but this is not yet implemented. Distinct WARNING: This operation requires having all or nearly all entries in memory on the local node. Distinct is performed on each remote node and then an artificial iterator returns those distinct values. Then finally all of those results have a distinct operation performed upon them. The rest of the intermediate operations are fully distributed as one would expect. 9.13. Examples Word Count Word count is a classic, if overused, example of the map/reduce paradigm. Assume we have a mapping of key to sentence stored on Data Grid nodes. Key is a String, each sentence is also a String, and we have to count the occurrence of all words in all sentences available.
The implementation of such a distributed task could be defined as follows: public class WordCountExample { /** * In this example replace c1 and c2 with * real Cache references * * @param args */ public static void main(String[] args) { Cache<String, String> c1 = ...; Cache<String, String> c2 = ...; c1.put("1", "Hello world here I am"); c2.put("2", "Infinispan rules the world"); c1.put("3", "JUDCon is in Boston"); c2.put("4", "JBoss World is in Boston as well"); c1.put("12","JBoss Application Server"); c2.put("15", "Hello world"); c1.put("14", "Infinispan community"); c2.put("15", "Hello world"); c1.put("111", "Infinispan open source"); c2.put("112", "Boston is close to Toronto"); c1.put("113", "Toronto is a capital of Ontario"); c2.put("114", "JUDCon is cool"); c1.put("211", "JBoss World is awesome"); c2.put("212", "JBoss rules"); c1.put("213", "JBoss division of RedHat "); c2.put("214", "RedHat community"); Map<String, Long> wordCountMap = c1.entrySet().parallelStream() .map(e -> e.getValue().split("\\s")) .flatMap(Arrays::stream) .collect(() -> Collectors.groupingBy(Function.identity(), Collectors.counting())); } } In this case it is pretty simple to do the word count from the example. However what if we want to find the most frequent word in the example? If you take a second to think about this case you will realize you need to have all words counted and available locally first. Thus we actually have a few options. We could use a finisher on the collector, which is invoked on the user thread after all the results have been collected. Some redundant lines have been removed from the example. public class WordCountExample { public static void main(String[] args) { // Lines removed String mostFrequentWord = c1.entrySet().parallelStream() .map(e -> e.getValue().split("\\s")) .flatMap(Arrays::stream) .collect(() -> Collectors.collectingAndThen( Collectors.groupingBy(Function.identity(), Collectors.counting()), wordCountMap -> { String mostFrequent = null; long maxCount = 0; for (Map.Entry<String, Long> e : wordCountMap.entrySet()) { int count = e.getValue().intValue(); if (count > maxCount) { maxCount = count; mostFrequent = e.getKey(); } } return mostFrequent; })); } Unfortunately the last step is only going to be ran in a single thread, which if we have a lot of words could be quite slow. Maybe there is another way to parallelize this with Streams. We mentioned before we are in the local node after processing, so we could actually use a stream on the map results. We can therefore use a parallel stream on the results. public class WordFrequencyExample { public static void main(String[] args) { // Lines removed Map<String, Long> wordCount = c1.entrySet().parallelStream() .map(e -> e.getValue().split("\\s")) .flatMap(Arrays::stream) .collect(() -> Collectors.groupingBy(Function.identity(), Collectors.counting())); Optional<Map.Entry<String, Long>> mostFrequent = wordCount.entrySet().parallelStream().reduce( (e1, e2) -> e1.getValue() > e2.getValue() ? e1 : e2); This way you can still utilize all of the cores locally when calculating the most frequent element. Remove specific entries Distributed streams can also be used as a way to modify data where it lives. For example you may want to remove all entries in your cache that contain a specific word. public class RemoveBadWords { public static void main(String[] args) { // Lines removed String word = .. 
c1.entrySet().parallelStream() .filter(e -> e.getValue().contains(word)) .forEach((c, e) -> c.remove(e.getKey())); If we carefully note what is serialized and what is not, we notice that only the word along with the operations is serialized across to other nodes as it is captured by the lambda. However the real saving is that the cache operation is performed on the primary owner thus reducing the amount of network traffic required to remove these values from the cache. The cache is not captured by the lambda as we provide a special BiConsumer method override that when invoked on each node passes the cache to the BiConsumer . One thing to keep in mind when using the forEach command in this manner is that the underlying stream obtains no locks. The cache remove operation will still obtain locks naturally, but the value could have changed from what the stream saw. That means that the entry could have been changed after the stream read it but the remove actually removed it. We have specifically added a new variant which is called LockedStream . Plenty of other examples The Streams API is a JRE tool and there are lots of examples for using it. Just remember that your operations need to be Serializable in some way. | [
"Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains(\"JBoss\")) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));",
"Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains(\"Jboss\")) .collect(CacheCollectors.serializableCollector(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)));",
"Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains(\"Jboss\")) .collect(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));",
"Map<Object, String> jbossValues = map.entrySet().stream() .filter((Serializable & Predicate<Map.Entry<Object, String>>) e -> e.getValue().contains(\"Jboss\")) .collect(CacheCollectors.serializableCollector(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)));",
"Map<Object, String> jbossValues = cache.entrySet().stream() .filter(new ContainsFilter(\"Jboss\")) .collect(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); class ContainsFilter implements Predicate<Map.Entry<Object, String>> { private final String target; ContainsFilter(String target) { this.target = target; } @Override public boolean test(Map.Entry<Object, String> e) { return e.getValue().contains(target); } } class JbossFilterExternalizer implements AdvancedExternalizer<ContainsFilter> { @Override public Set<Class<? extends ContainsFilter>> getTypeClasses() { return Util.asSet(ContainsFilter.class); } @Override public Integer getId() { return CUSTOM_ID; } @Override public void writeObject(ObjectOutput output, ContainsFilter object) throws IOException { output.writeUTF(object.target); } @Override public ContainsFilter readObject(ObjectInput input) throws IOException, ClassNotFoundException { return new ContainsFilter(input.readUTF()); } }",
"Map<Object, String> map = (Map<Object, String>) cache.entrySet().stream() .filter(new ContainsFilter(\"Jboss\")) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); class ToMapCollectorSupplier<K, U> implements Supplier<Collector<Map.Entry<K, U>, ?, Map<K, U>>> { static final ToMapCollectorSupplier INSTANCE = new ToMapCollectorSupplier(); private ToMapCollectorSupplier() { } @Override public Collector<Map.Entry<K, U>, ?, Map<K, U>> get() { return Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue); } } class ToMapCollectorSupplierExternalizer implements AdvancedExternalizer<ToMapCollectorSupplier> { @Override public Set<Class<? extends ToMapCollectorSupplier>> getTypeClasses() { return Util.asSet(ToMapCollectorSupplier.class); } @Override public Integer getId() { return CUSTOM_ID; } @Override public void writeObject(ObjectOutput output, ToMapCollectorSupplier object) throws IOException { } @Override public ToMapCollectorSupplier readObject(ObjectInput input) throws IOException, ClassNotFoundException { return ToMapCollectorSupplier.INSTANCE; } }",
"CacheStream<Map.Entry<Object, String>> stream = cache.entrySet().stream(); stream.timeout(1, TimeUnit.MINUTES);",
"public class WordCountExample { /** * In this example replace c1 and c2 with * real Cache references * * @param args */ public static void main(String[] args) { Cache<String, String> c1 = ...; Cache<String, String> c2 = ...; c1.put(\"1\", \"Hello world here I am\"); c2.put(\"2\", \"Infinispan rules the world\"); c1.put(\"3\", \"JUDCon is in Boston\"); c2.put(\"4\", \"JBoss World is in Boston as well\"); c1.put(\"12\",\"JBoss Application Server\"); c2.put(\"15\", \"Hello world\"); c1.put(\"14\", \"Infinispan community\"); c2.put(\"15\", \"Hello world\"); c1.put(\"111\", \"Infinispan open source\"); c2.put(\"112\", \"Boston is close to Toronto\"); c1.put(\"113\", \"Toronto is a capital of Ontario\"); c2.put(\"114\", \"JUDCon is cool\"); c1.put(\"211\", \"JBoss World is awesome\"); c2.put(\"212\", \"JBoss rules\"); c1.put(\"213\", \"JBoss division of RedHat \"); c2.put(\"214\", \"RedHat community\"); Map<String, Long> wordCountMap = c1.entrySet().parallelStream() .map(e -> e.getValue().split(\"\\\\s\")) .flatMap(Arrays::stream) .collect(() -> Collectors.groupingBy(Function.identity(), Collectors.counting())); } }",
"public class WordCountExample { public static void main(String[] args) { // Lines removed String mostFrequentWord = c1.entrySet().parallelStream() .map(e -> e.getValue().split(\"\\\\s\")) .flatMap(Arrays::stream) .collect(() -> Collectors.collectingAndThen( Collectors.groupingBy(Function.identity(), Collectors.counting()), wordCountMap -> { String mostFrequent = null; long maxCount = 0; for (Map.Entry<String, Long> e : wordCountMap.entrySet()) { int count = e.getValue().intValue(); if (count > maxCount) { maxCount = count; mostFrequent = e.getKey(); } } return mostFrequent; })); }",
"public class WordFrequencyExample { public static void main(String[] args) { // Lines removed Map<String, Long> wordCount = c1.entrySet().parallelStream() .map(e -> e.getValue().split(\"\\\\s\")) .flatMap(Arrays::stream) .collect(() -> Collectors.groupingBy(Function.identity(), Collectors.counting())); Optional<Map.Entry<String, Long>> mostFrequent = wordCount.entrySet().parallelStream().reduce( (e1, e2) -> e1.getValue() > e2.getValue() ? e1 : e2);",
"public class RemoveBadWords { public static void main(String[] args) { // Lines removed String word = .. c1.entrySet().parallelStream() .filter(e -> e.getValue().contains(word)) .forEach((c, e) -> c.remove(e.getKey()));"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/embedding_data_grid_in_java_applications/streams_streams |
Chapter 124. KafkaBridgeHttpCors schema reference | Chapter 124. KafkaBridgeHttpCors schema reference Used in: KafkaBridgeHttpConfig Property Description allowedOrigins List of allowed origins. Java regular expressions can be used. string array allowedMethods List of allowed HTTP methods. string array | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkabridgehttpcors-reference |
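A minimal sketch, not part of the schema reference above, of where these two properties might sit inside a KafkaBridge resource; the apiVersion, resource name, bootstrap address, port, replica count, and the specific origins and methods are illustrative assumptions only, with cors placed under http because the type is used in KafkaBridgeHttpConfig.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080
    cors:
      # Java regular expressions can be used in allowedOrigins
      allowedOrigins:
        - "https://app.example.com"
      allowedMethods:
        - GET
        - POST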
Authentication and authorization | Authentication and authorization OpenShift Container Platform 4.16 Configuring user authentication and access controls for users and services Red Hat OpenShift Documentation Team | [
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1",
"oc apply -f </path/to/file.yaml>",
"oc describe oauth.config.openshift.io/cluster",
"Spec: Token Config: Access Token Max Age Seconds: 172800",
"oc edit oauth cluster",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: spec: tokenConfig: accessTokenInactivityTimeout: 400s 1",
"oc get clusteroperators authentication",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 145m",
"oc get clusteroperators kube-apiserver",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.16.0 True False False 145m",
"error: You must be logged in to the server (Unauthorized)",
"oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1",
"oc edit ingress.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2",
"{ \"issuer\": \"https://<namespace_route>\", 1 \"authorization_endpoint\": \"https://<namespace_route>/oauth/authorize\", 2 \"token_endpoint\": \"https://<namespace_route>/oauth/token\", 3 \"scopes_supported\": [ 4 \"user:full\", \"user:info\", \"user:check-access\", \"user:list-scoped-projects\", \"user:list-projects\" ], \"response_types_supported\": [ 5 \"code\", \"token\" ], \"grant_types_supported\": [ 6 \"authorization_code\", \"implicit\" ], \"code_challenge_methods_supported\": [ 7 \"plain\", \"S256\" ] }",
"oc get events | grep ServiceAccount",
"1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>",
"oc describe sa/proxy | grep -A5 Events",
"Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>",
"Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>",
"Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io \"<name>\" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]",
"Reason Message NoSAOAuthRedirectURIs [no kind \"<name>\" is registered for version \"v1\", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]",
"Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no tokens",
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host",
"oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: demo 1 secret: \"...\" 2 redirectURIs: - \"http://www.example.com/\" 3 grantMethod: prompt 4 ')",
"oc edit oauthclient <oauth_client> 1",
"apiVersion: oauth.openshift.io/v1 grantMethod: auto kind: OAuthClient metadata: accessTokenInactivityTimeoutSeconds: 600 1",
"oc get useroauthaccesstokens",
"NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full",
"oc get useroauthaccesstokens --field-selector=clientName=\"console\"",
"NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full",
"oc describe useroauthaccesstokens <token_name>",
"Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none>",
"oc delete useroauthaccesstokens <token_name>",
"useroauthaccesstoken.oauth.openshift.io \"<token_name>\" deleted",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated",
"oc apply -f add-<cluster_role>.yaml",
"oc delete secrets kubeadmin -n kube-system",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3",
"oc create user <username>",
"oc create identity <identity_provider>:<identity_provider_user_id>",
"oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username>",
"htpasswd -c -B -b </path/to/users.htpasswd> <username> <password>",
"htpasswd -c -B -b users.htpasswd <username> <password>",
"Adding password for user user1",
"htpasswd -B -b </path/to/users.htpasswd> <user_name> <password>",
"> htpasswd.exe -c -B -b <\\path\\to\\users.htpasswd> <username> <password>",
"> htpasswd.exe -c -B -b users.htpasswd <username> <password>",
"Adding password for user user1",
"> htpasswd.exe -b <\\path\\to\\users.htpasswd> <username> <password>",
"oc create secret generic htpass-secret --from-file=htpasswd=<path_to_users.htpasswd> -n openshift-config 1",
"apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_htpasswd_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3",
"oc apply -f </path/to/CR>",
"oc login -u <username>",
"oc whoami",
"oc get secret htpass-secret -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > users.htpasswd",
"htpasswd -bB users.htpasswd <username> <password>",
"Adding password for user <username>",
"htpasswd -D users.htpasswd <username>",
"Deleting password for user <username>",
"oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd --dry-run=client -o yaml -n openshift-config | oc replace -f -",
"apiVersion: v1 kind: Secret metadata: name: htpass-secret namespace: openshift-config type: Opaque data: htpasswd: <base64_encoded_htpasswd_file_contents>",
"oc delete user <username>",
"user.user.openshift.io \"<username>\" deleted",
"oc delete identity my_htpasswd_provider:<username>",
"identity.user.openshift.io \"my_htpasswd_provider:<username>\" deleted",
"oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key>",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: keystoneidp 1 mappingMethod: claim 2 type: Keystone keystone: domainName: default 3 url: https://keystone.example.com:5000 4 ca: 5 name: ca-config-map tlsClientCert: 6 name: client-cert-secret tlsClientKey: 7 name: client-key-secret",
"oc apply -f </path/to/CR>",
"oc login -u <username>",
"oc whoami",
"ldap://host:port/basedn?attribute?scope?filter",
"(&(<filter>)(<attribute>=<username>))",
"ldap://ldap.example.com/o=Acme?cn?sub?(enabled=true)",
"oc create secret generic ldap-secret --from-literal=bindPassword=<secret> -n openshift-config 1",
"apiVersion: v1 kind: Secret metadata: name: ldap-secret namespace: openshift-config type: Opaque data: bindPassword: <base64_encoded_bind_password>",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: ldapidp 1 mappingMethod: claim 2 type: LDAP ldap: attributes: id: 3 - dn email: 4 - mail name: 5 - cn preferredUsername: 6 - uid bindDN: \"\" 7 bindPassword: 8 name: ldap-secret ca: 9 name: ca-config-map insecure: false 10 url: \"ldaps://ldaps.example.com/ou=users,dc=acme,dc=com?uid\" 11",
"oc apply -f </path/to/CR>",
"oc login -u <username>",
"oc whoami",
"{\"error\":\"Error message\"}",
"{\"sub\":\"userid\"} 1",
"{\"sub\":\"userid\", \"name\": \"User Name\", ...}",
"{\"sub\":\"userid\", \"email\":\"[email protected]\", ...}",
"{\"sub\":\"014fbff9a07c\", \"preferred_username\":\"bob\", ...}",
"oc create secret tls <secret_name> --key=key.pem --cert=cert.pem -n openshift-config",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: kubernetes.io/tls data: tls.crt: <base64_encoded_cert> tls.key: <base64_encoded_key>",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: basicidp 1 mappingMethod: claim 2 type: BasicAuth basicAuth: url: https://www.example.com/remote-idp 3 ca: 4 name: ca-config-map tlsClientCert: 5 name: client-cert-secret tlsClientKey: 6 name: client-key-secret",
"oc apply -f </path/to/CR>",
"oc login -u <username>",
"oc whoami",
"<VirtualHost *:443> # CGI Scripts in here DocumentRoot /var/www/cgi-bin # SSL Directives SSLEngine on SSLCipherSuite PROFILE=SYSTEM SSLProxyCipherSuite PROFILE=SYSTEM SSLCertificateFile /etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /etc/pki/tls/private/localhost.key # Configure HTTPD to execute scripts ScriptAlias /basic /var/www/cgi-bin # Handles a failed login attempt ErrorDocument 401 /basic/fail.cgi # Handles authentication <Location /basic/login.cgi> AuthType Basic AuthName \"Please Log In\" AuthBasicProvider file AuthUserFile /etc/httpd/conf/passwords Require valid-user </Location> </VirtualHost>",
"#!/bin/bash echo \"Content-Type: application/json\" echo \"\" echo '{\"sub\":\"userid\", \"name\":\"'USDREMOTE_USER'\"}' exit 0",
"#!/bin/bash echo \"Content-Type: application/json\" echo \"\" echo '{\"error\": \"Login failure\"}' exit 0",
"curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key -u <user>:<password> -v https://www.example.com/remote-idp",
"{\"sub\":\"userid\"}",
"{\"sub\":\"userid\", \"name\": \"User Name\", ...}",
"{\"sub\":\"userid\", \"email\":\"[email protected]\", ...}",
"{\"sub\":\"014fbff9a07c\", \"preferred_username\":\"bob\", ...}",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: requestheaderidp 1 mappingMethod: claim 2 type: RequestHeader requestHeader: challengeURL: \"https://www.example.com/challenging-proxy/oauth/authorize?USD{query}\" 3 loginURL: \"https://www.example.com/login-proxy/oauth/authorize?USD{query}\" 4 ca: 5 name: ca-config-map clientCommonNames: 6 - my-auth-proxy headers: 7 - X-Remote-User - SSO-User emailHeaders: 8 - X-Remote-User-Email nameHeaders: 9 - X-Remote-User-Display-Name preferredUsernameHeaders: 10 - X-Remote-User-Login",
"oc apply -f </path/to/CR>",
"oc login -u <username>",
"oc whoami",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config 1",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"LoadModule request_module modules/mod_request.so LoadModule auth_gssapi_module modules/mod_auth_gssapi.so Some Apache configurations might require these modules. LoadModule auth_form_module modules/mod_auth_form.so LoadModule session_module modules/mod_session.so Nothing needs to be served over HTTP. This virtual host simply redirects to HTTPS. <VirtualHost *:80> DocumentRoot /var/www/html RewriteEngine On RewriteRule ^(.*)USD https://%{HTTP_HOST}USD1 [R,L] </VirtualHost> <VirtualHost *:443> # This needs to match the certificates you generated. See the CN and X509v3 # Subject Alternative Name in the output of: # openssl x509 -text -in /etc/pki/tls/certs/localhost.crt ServerName www.example.com DocumentRoot /var/www/html SSLEngine on SSLCertificateFile /etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /etc/pki/tls/private/localhost.key SSLCACertificateFile /etc/pki/CA/certs/ca.crt SSLProxyEngine on SSLProxyCACertificateFile /etc/pki/CA/certs/ca.crt # It is critical to enforce client certificates. Otherwise, requests can # spoof the X-Remote-User header by accessing the /oauth/authorize endpoint # directly. SSLProxyMachineCertificateFile /etc/pki/tls/certs/authproxy.pem # To use the challenging-proxy, an X-Csrf-Token must be present. RewriteCond %{REQUEST_URI} ^/challenging-proxy RewriteCond %{HTTP:X-Csrf-Token} ^USD [NC] RewriteRule ^.* - [F,L] <Location /challenging-proxy/oauth/authorize> # Insert your backend server name/ip here. ProxyPass https://<namespace_route>/oauth/authorize AuthName \"SSO Login\" # For Kerberos AuthType GSSAPI Require valid-user RequestHeader set X-Remote-User %{REMOTE_USER}s GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab # Enable the following if you want to allow users to fallback # to password based authentication when they do not have a client # configured to perform kerberos authentication. GssapiBasicAuth On # For ldap: # AuthBasicProvider ldap # AuthLDAPURL \"ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)\" </Location> <Location /login-proxy/oauth/authorize> # Insert your backend server name/ip here. ProxyPass https://<namespace_route>/oauth/authorize AuthName \"SSO Login\" AuthType GSSAPI Require valid-user RequestHeader set X-Remote-User %{REMOTE_USER}s env=REMOTE_USER GssapiCredStore keytab:/etc/httpd/protected/auth-proxy.keytab # Enable the following if you want to allow users to fallback # to password based authentication when they do not have a client # configured to perform kerberos authentication. GssapiBasicAuth On ErrorDocument 401 /login.html </Location> </VirtualHost> RequestHeader unset X-Remote-User",
"identityProviders: - name: requestheaderidp type: RequestHeader requestHeader: challengeURL: \"https://<namespace_route>/challenging-proxy/oauth/authorize?USD{query}\" loginURL: \"https://<namespace_route>/login-proxy/oauth/authorize?USD{query}\" ca: name: ca-config-map clientCommonNames: - my-auth-proxy headers: - X-Remote-User",
"curl -L -k -H \"X-Remote-User: joe\" --cert /etc/pki/tls/certs/authproxy.pem https://<namespace_route>/oauth/token/request",
"curl -L -k -H \"X-Remote-User: joe\" https://<namespace_route>/oauth/token/request",
"curl -k -v -H 'X-Csrf-Token: 1' https://<namespace_route>/oauth/authorize?client_id=openshift-challenging-client&response_type=token",
"curl -k -v -H 'X-Csrf-Token: 1' <challengeURL_redirect + query>",
"kdestroy -c cache_name 1",
"oc login -u <username>",
"oc logout",
"kinit",
"oc login",
"https://oauth-openshift.apps.<cluster-name>.<cluster-domain>/oauth2callback/<idp-provider-name>",
"https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/github",
"oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>",
"oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: githubidp 1 mappingMethod: claim 2 type: GitHub github: ca: 3 name: ca-config-map clientID: {...} 4 clientSecret: 5 name: github-secret hostname: ... 6 organizations: 7 - myorganization1 - myorganization2 teams: 8 - myorganization1/team-a - myorganization2/team-b",
"oc apply -f </path/to/CR>",
"oc login --token=<token>",
"oc whoami",
"oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>",
"oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: gitlabidp 1 mappingMethod: claim 2 type: GitLab gitlab: clientID: {...} 3 clientSecret: 4 name: gitlab-secret url: https://gitlab.com 5 ca: 6 name: ca-config-map",
"oc apply -f </path/to/CR>",
"oc login -u <username>",
"oc whoami",
"oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>",
"oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: googleidp 1 mappingMethod: claim 2 type: Google google: clientID: {...} 3 clientSecret: 4 name: google-secret hostedDomain: \"example.com\" 5",
"oc apply -f </path/to/CR>",
"oc login --token=<token>",
"oc whoami",
"oc create secret generic <secret_name> --from-literal=clientSecret=<secret> -n openshift-config",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-config type: Opaque data: clientSecret: <base64_encoded_client_secret>",
"oc create secret generic <secret_name> --from-file=<path_to_file> -n openshift-config",
"oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: ca-config-map namespace: openshift-config data: ca.crt: | <CA_certificate_PEM>",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp 1 mappingMethod: claim 2 type: OpenID openID: clientID: ... 3 clientSecret: 4 name: idp-secret claims: 5 preferredUsername: - preferred_username name: - name email: - email groups: - groups issuer: https://www.idp-issuer.com 6",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: oidcidp mappingMethod: claim type: OpenID openID: clientID: clientSecret: name: idp-secret ca: 1 name: ca-config-map extraScopes: 2 - email - profile extraAuthorizeParameters: 3 include_granted_scopes: \"true\" claims: preferredUsername: 4 - preferred_username - email name: 5 - nickname - given_name - name email: 6 - custom_email_claim - email groups: 7 - groups issuer: https://www.idp-issuer.com",
"oc apply -f </path/to/CR>",
"oc login --token=<token>",
"oc login -u <identity_provider_username> --server=<api_server_url_and_port>",
"oc whoami",
"oc describe clusterrole.rbac",
"Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch 
update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] 
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list 
watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]",
"oc describe clusterrolebinding.rbac",
"Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api",
"oc describe rolebinding.rbac",
"oc describe rolebinding.rbac -n joe-project",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project",
"oc adm policy add-role-to-user <role> <user> -n <project>",
"oc adm policy add-role-to-user admin alice -n joe",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice",
"oc describe rolebinding.rbac -n <project>",
"oc describe rolebinding.rbac -n joe",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe",
"oc create role <name> --verb=<verb> --resource=<resource> -n <project>",
"oc create role podview --verb=get --resource=pod -n blue",
"oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue",
"oc create clusterrole <name> --verb=<verb> --resource=<resource>",
"oc create clusterrole podviewonly --verb=get --resource=pod",
"oc adm policy add-cluster-role-to-user cluster-admin <user>",
"INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>",
"oc delete secrets kubeadmin -n kube-system",
"system:serviceaccount:<project>:<name>",
"oc get sa",
"NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d",
"oc create sa <service_account_name> 1",
"serviceaccount \"robot\" created",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>",
"oc describe sa robot",
"Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none>",
"oc policy add-role-to-user view system:serviceaccount:top-secret:robot",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret",
"oc policy add-role-to-user <role_name> -z <service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>",
"oc policy add-role-to-group view system:serviceaccounts -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts",
"oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers",
"system:serviceaccount:<project>:<name>",
"oc get sa",
"NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d",
"oc create sa <service_account_name> 1",
"serviceaccount \"robot\" created",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>",
"oc describe sa robot",
"Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none>",
"oc sa get-token <service_account_name>",
"serviceaccounts.openshift.io/oauth-redirecturi.<name>",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"",
"\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }",
"{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated",
"oc apply -f add-<cluster_role>.yaml",
"oc edit authentications cluster",
"spec: serviceAccountIssuer: https://test.default.svc 1",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 12 1",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"apiVersion: v1 kind: Pod metadata: name: nginx spec: securityContext: runAsNonRoot: true 1 seccompProfile: type: RuntimeDefault 2 containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] serviceAccountName: build-robot 3 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 4 expirationSeconds: 7200 5 audience: vault 6",
"oc create -f pod-projected-svc-token.yaml",
"oc create token build-robot",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRWmFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6Ly9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGVzdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4LTMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCfojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJBXdDHNww0E5XOypmffYkfkadli8lN5QQD-MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihnt_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_M0-faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHLNnHghhU1LaRpoFzH7OUarqX9SGQ",
"runAsUser: type: MustRunAs uid: <id>",
"runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue>",
"runAsUser: type: MustRunAsNonRoot",
"runAsUser: type: RunAsAny",
"allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null 5 runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: 10 - '*'",
"apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0",
"apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0",
"kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group",
"requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT",
"oc create -f scc-admin.yaml",
"securitycontextconstraints \"scc-admin\" created",
"oc get scc scc-admin",
"NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere]",
"apiVersion: config.openshift.io/v1 kind: Deployment apiVersion: apps/v1 spec: template: metadata: annotations: openshift.io/required-scc: \"my-scc\" 1",
"oc create -f deployment.yaml",
"oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\\.io\\/scc}{\"\\n\"}' 1",
"my-scc",
"oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: role-name 1 namespace: namespace 2 rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use",
"oc get scc",
"NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostmount-anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"nfs\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] nonroot-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] privileged true [\"*\"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] restricted-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]",
"oc describe scc restricted",
"Name: restricted Priority: <none> Access: Users: <none> 1 Groups: <none> 2 Settings: Allow Privileged: false Allow Privilege Escalation: true Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none>",
"oc edit scc <scc_name>",
"oc delete scc <scc_name>",
"oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false",
"oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true",
"oc label namespace <namespace> \\ 1 pod-security.kubernetes.io/<mode>=<profile> \\ 2 --overwrite",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"zgrep -h pod-security.kubernetes.io/audit-violations must-gather.local.<archive_id>/<image_digest_id>/audit_logs/kube-apiserver/*log.gz | jq -r 'select((.annotations[\"pod-security.kubernetes.io/audit-violations\"] != null) and (.objectRef.resource==\"pods\")) | .objectRef.namespace + \" \" + .objectRef.name' | sort | uniq -c",
"1 test-namespace my-pod",
"oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username>",
"oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> --as-group=<group1> --as-group=<group2>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated",
"oc apply -f add-<cluster_role>.yaml",
"url: ldap://10.0.0.0:389 1 bindDN: cn=admin,dc=example,dc=com 2 bindPassword: <password> 3 insecure: false 4 ca: my-ldap-ca-bundle.crt 5",
"baseDN: ou=users,dc=example,dc=com 1 scope: sub 2 derefAliases: never 3 timeout: 0 4 filter: (objectClass=person) 5 pageSize: 0 6",
"groupUIDNameMapping: \"cn=group1,ou=groups,dc=example,dc=com\": firstgroup \"cn=group2,ou=groups,dc=example,dc=com\": secondgroup \"cn=group3,ou=groups,dc=example,dc=com\": thirdgroup",
"kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 1 insecure: false 2 bindDN: cn=admin,dc=example,dc=com bindPassword: file: \"/etc/secrets/bindPassword\" rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 3 groupNameAttributes: [ cn ] 4 groupMembershipAttributes: [ member ] 5 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 6 userNameAttributes: [ mail ] 7 tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false",
"kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 activeDirectory: usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 1 groupMembershipAttributes: [ memberOf ] 2",
"kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 1 groupNameAttributes: [ cn ] 2 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 3 groupMembershipAttributes: [ memberOf ] 4",
"oc adm groups sync --sync-config=config.yaml --confirm",
"oc adm groups sync --type=openshift --sync-config=config.yaml --confirm",
"oc adm groups sync --whitelist=<whitelist_file> --sync-config=config.yaml --confirm",
"oc adm groups sync --blacklist=<blacklist_file> --sync-config=config.yaml --confirm",
"oc adm groups sync <group_unique_identifier> --sync-config=config.yaml --confirm",
"oc adm groups sync <group_unique_identifier> --whitelist=<whitelist_file> --blacklist=<blacklist_file> --sync-config=config.yaml --confirm",
"oc adm groups sync --type=openshift --whitelist=<whitelist_file> --sync-config=config.yaml --confirm",
"oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm",
"oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm",
"oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm",
"oc new-project ldap-sync 1",
"kind: ServiceAccount apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync",
"oc create -f ldap-sync-service-account.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: ldap-group-syncer rules: - apiGroups: - user.openshift.io resources: - groups verbs: - get - list - create - update",
"oc create -f ldap-sync-cluster-role.yaml",
"kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: ldap-group-syncer subjects: - kind: ServiceAccount name: ldap-group-syncer 1 namespace: ldap-sync roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ldap-group-syncer 2",
"oc create -f ldap-sync-cluster-role-binding.yaml",
"kind: ConfigMap apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync data: sync.yaml: | 1 kind: LDAPSyncConfig apiVersion: v1 url: ldaps://10.0.0.0:389 2 insecure: false bindDN: cn=admin,dc=example,dc=com 3 bindPassword: file: \"/etc/secrets/bindPassword\" ca: /etc/ldap-ca/ca.crt rfc2307: 4 groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" 5 scope: sub filter: \"(objectClass=groupOfMembers)\" derefAliases: never pageSize: 0 groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" 6 scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn userNameAttributes: [ uid ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false",
"oc create -f ldap-sync-config-map.yaml",
"kind: CronJob apiVersion: batch/v1 metadata: name: ldap-group-syncer namespace: ldap-sync spec: 1 schedule: \"*/30 * * * *\" 2 concurrencyPolicy: Forbid jobTemplate: spec: backoffLimit: 0 ttlSecondsAfterFinished: 1800 3 template: spec: containers: - name: ldap-group-sync image: \"registry.redhat.io/openshift4/ose-cli:latest\" command: - \"/bin/bash\" - \"-c\" - \"oc adm groups sync --sync-config=/etc/config/sync.yaml --confirm\" 4 volumeMounts: - mountPath: \"/etc/config\" name: \"ldap-sync-volume\" - mountPath: \"/etc/secrets\" name: \"ldap-bind-password\" - mountPath: \"/etc/ldap-ca\" name: \"ldap-ca\" volumes: - name: \"ldap-sync-volume\" configMap: name: \"ldap-group-syncer\" - name: \"ldap-bind-password\" secret: secretName: \"ldap-secret\" 5 - name: \"ldap-ca\" configMap: name: \"ca-config-map\" 6 restartPolicy: \"Never\" terminationGracePeriodSeconds: 30 activeDeadlineSeconds: 500 dnsPolicy: \"ClusterFirst\" serviceAccountName: \"ldap-group-syncer\"",
"oc create -f ldap-sync-cron-job.yaml",
"dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 1 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com 2 member: cn=Jim,ou=users,dc=example,dc=com",
"oc adm groups sync --sync-config=rfc2307_config.yaml --confirm",
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]",
"kind: LDAPSyncConfig apiVersion: v1 groupUIDNameMapping: \"cn=admins,ou=groups,dc=example,dc=com\": Administrators 1 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 4 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false",
"oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm",
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: Administrators 1 users: - [email protected] - [email protected]",
"Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with dn=\"<user-dn>\" would search outside of the base dn specified (dn=\"<base-dn>\")\".",
"Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" refers to a non-existent entry\". Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" and filter \"<filter>\" did not return any results\".",
"dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com member: cn=INVALID,ou=users,dc=example,dc=com 1 member: cn=Jim,ou=OUTOFSCOPE,dc=example,dc=com 2",
"kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never userUIDAttribute: dn 1 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: true 2 tolerateMemberOutOfScopeErrors: true 3",
"oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm",
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: admins users: 1 - [email protected] - [email protected]",
"dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: admins 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: admins",
"oc adm groups sync --sync-config=active_directory_config.yaml --confirm",
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: admins 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]",
"dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 2 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com",
"oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm",
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]",
"dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=otheradmins,ou=groups,dc=example,dc=com 2 dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 3 objectClass: group cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=otheradmins,ou=groups,dc=example,dc=com dn: cn=otheradmins,ou=groups,dc=example,dc=com 4 objectClass: group cn: otheradmins owner: cn=admin,dc=example,dc=com description: Other System Administrators memberOf: cn=admins,ou=groups,dc=example,dc=com 5 6 member: cn=Jim,ou=users,dc=example,dc=com",
"kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: 1 derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 4 groupMembershipAttributes: [ \"memberOf:1.2.840.113556.1.4.1941:\" ] 5",
"oc adm groups sync 'cn=admins,ou=groups,dc=example,dc=com' --sync-config=augmented_active_directory_config_nested.yaml --confirm",
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]",
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"oc get secret <secret_name> -n kube-system -o jsonpath --template '{ .metadata.annotations }'",
"oc get secret <secret_name> -n=kube-system",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>",
"oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'",
"{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }",
"oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2",
"oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region>",
"cat .openshift_install_state.json | jq '.\"*installconfig.ClusterID\".InfraID' -r",
"mycluster-2mpcn",
"azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password>",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge",
"oc get co kube-controller-manager",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: aws_access_key_id: <base64_encoded_access_key_id> aws_secret_access_key: <base64_encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 stringData: credentials: |- [default] sts_regional_endpoints = regional role_name: <operator_role_name> 3 web_identity_token_file: <path_to_token> 4",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: service_account.json: <service_account> 3",
"{ \"type\": \"service_account\", 1 \"project_id\": \"<project_id>\", \"private_key_id\": \"<private_key_id>\", \"private_key\": \"<private_key>\", 2 \"client_email\": \"<client_email_address>\", \"client_id\": \"<client_id>\", \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\", \"token_uri\": \"https://oauth2.googleapis.com/token\", \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\", \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/<client_email_address>\" }",
"{ \"type\": \"external_account\", 1 \"audience\": \"//iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/test-pool/providers/test-provider\", 2 \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<client_email_address>:generateAccessToken\", 3 \"credential_source\": { \"file\": \"<path_to_token>\", 4 \"format\": { \"type\": \"text\" } } }",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_client_secret: <client_secret> 4 azure_region: <region> azure_resource_prefix: <resource_group_prefix> 5 azure_resourcegroup: <resource_group_prefix>-rg 6 azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque",
"apiVersion: v1 kind: Secret metadata: namespace: <target_namespace> 1 name: <target_secret_name> 2 data: azure_client_id: <client_id> 3 azure_federated_token_file: <path_to_token_file> 4 azure_region: <region> azure_subscription_id: <subscription_id> azure_tenant_id: <tenant_id> type: Opaque"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/authentication_and_authorization/index |
Chapter 7. Migrating the Red Hat Ceph Storage cluster In the context of data plane adoption, where the Red Hat OpenStack Platform (RHOSP) services are redeployed in Red Hat OpenShift Container Platform (RHOCP), you migrate a director-deployed Red Hat Ceph Storage cluster by using a process called "externalizing" the Red Hat Ceph Storage cluster. There are two deployment topologies that include an internal Red Hat Ceph Storage cluster: RHOSP includes dedicated Red Hat Ceph Storage nodes to host object storage daemons (OSDs) Hyperconverged Infrastructure (HCI), where Compute and Storage services are colocated on hyperconverged nodes In either scenario, there are some Red Hat Ceph Storage processes that are deployed on RHOSP Controller nodes: Red Hat Ceph Storage monitors, Ceph Object Gateway (RGW), Rados Block Device (RBD), Ceph Metadata Server (MDS), Ceph Dashboard, and NFS Ganesha. To migrate your Red Hat Ceph Storage cluster, you must decommission the Controller nodes and move the Red Hat Ceph Storage daemons to a set of target nodes that are already part of the Red Hat Ceph Storage cluster. Prerequisites Complete the tasks in your Red Hat OpenStack Platform 17.1 environment. For more information, see Red Hat Ceph Storage prerequisites. 7.1. Red Hat Ceph Storage daemon cardinality Red Hat Ceph Storage 7 and later applies strict constraints on how daemons can be colocated within the same node. For more information, see the Red Hat Knowledgebase article Red Hat Ceph Storage: Supported configurations. Your topology depends on the available hardware and the number of Red Hat Ceph Storage services on the Controller nodes that you retire. The number of services that you can migrate depends on the number of available nodes in the cluster. The following diagrams show the distribution of Red Hat Ceph Storage daemons on Red Hat Ceph Storage nodes, where at least 3 nodes are required. The following scenario includes only RGW and RBD, without the Red Hat Ceph Storage dashboard: With the Red Hat Ceph Storage dashboard, but without the Shared File Systems service (manila), at least 4 nodes are required. The Red Hat Ceph Storage dashboard has no failover: With the Red Hat Ceph Storage dashboard and the Shared File Systems service, a minimum of 5 nodes are required, and the Red Hat Ceph Storage dashboard has no failover: 7.2. Migrating the monitoring stack component to new nodes within an existing Red Hat Ceph Storage cluster The Red Hat Ceph Storage Dashboard module adds web-based monitoring and administration to the Ceph Manager. With director-deployed Red Hat Ceph Storage, the Red Hat Ceph Storage Dashboard is enabled as part of the overcloud deployment and is composed of the following components: Ceph Manager module Grafana Prometheus Alertmanager Node exporter The Red Hat Ceph Storage Dashboard containers are included through tripleo-container-image-prepare parameters, and high availability (HA) relies on HAProxy and Pacemaker, which are deployed in the Red Hat OpenStack Platform (RHOSP) environment. For an external Red Hat Ceph Storage cluster, HA is not supported. In this procedure, you migrate and relocate the Ceph monitoring components in order to free the Controller nodes. Prerequisites Complete the tasks in your Red Hat OpenStack Platform 17.1 environment. For more information, see Red Hat Ceph Storage prerequisites.
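Before you relocate any daemons, it can help to review where they currently run. The following read-only sketch assumes that you run it from a host with the _admin label and that jq is installed on that host; it only inspects the cluster state:
# List hosts and their cephadm labels
sudo cephadm shell -- ceph orch host ls
# Count running daemons by host and type to check colocation against the cardinality constraints
sudo cephadm shell -- ceph orch ps --format json | jq -r '.[] | "\(.hostname) \(.daemon_type)"' | sort | uniq -c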
7.2.1. Migrating the monitoring stack to the target nodes To migrate the monitoring stack to the target nodes, you add the monitoring label to your existing nodes and update the configuration of each daemon. You do not need to migrate node exporters. These daemons are deployed across the nodes that are part of the Red Hat Ceph Storage cluster (the placement is '*'). Prerequisites Confirm that the firewall rules are in place and the ports are open for a given monitoring stack service. Note Depending on the target nodes and the number of deployed or active daemons, you can either relocate the existing containers to the target nodes, or select a subset of nodes that host the monitoring stack daemons. High availability (HA) is not supported. Reducing the placement with count: 1 allows you to migrate the existing daemons in a Hyperconverged Infrastructure (HCI) or hardware-limited scenario without impacting other services. 7.2.1.1. Migrating the existing daemons to the target nodes The following procedure is an example of an environment with 3 Red Hat Ceph Storage nodes or ComputeHCI nodes. This scenario extends the monitoring labels to all the Red Hat Ceph Storage or ComputeHCI nodes that are part of the cluster. This means that you keep 3 placements for the target nodes. Procedure Add the monitoring label to all the Red Hat Ceph Storage or ComputeHCI nodes in the cluster: Verify that all the hosts on the target nodes have the monitoring label: Remove the labels from the Controller nodes: Dump the current monitoring stack spec: For each daemon, edit the current spec and replace the placement.hosts: section with the placement.label: section, for example: service_type: grafana service_name: grafana placement: label: monitoring networks: - 172.17.3.0/24 spec: port: 3100 This step also applies to the Prometheus and Alertmanager specs. Apply the new monitoring spec to relocate the monitoring stack daemons: Verify that the daemons are deployed on the expected nodes: Note After you migrate the monitoring stack, you lose high availability: the monitoring stack daemons no longer have a virtual IP address or HAProxy in front of them. Node exporters are still running on all the nodes. Review the Red Hat Ceph Storage configuration to ensure that it aligns with the configuration on the target nodes. In particular, focus on the following configuration entries: Verify that the API_HOST/URL settings of the grafana, alertmanager, and prometheus services point to the IP addresses on the storage network of the node where each daemon is relocated: Note The Ceph Dashboard, as the service provided by the Ceph mgr, is not impacted by the relocation. You might experience an impact when the active mgr daemon is migrated or is force-failed. However, you can define 3 replicas in the Ceph Manager configuration to redirect requests to a different instance.
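If any of the dashboard keys still reference a retired Controller node, you can point them at the relocated daemons. The following is a sketch, assuming that the Ceph Dashboard module is enabled; the addresses are the storage-network examples from this section, not values to copy verbatim:
# Repoint the dashboard at the relocated monitoring stack endpoints
sudo cephadm shell -- ceph dashboard set-grafana-api-url https://172.17.3.144:3100
sudo cephadm shell -- ceph dashboard set-alertmanager-api-host http://172.17.3.83:9093
sudo cephadm shell -- ceph dashboard set-prometheus-api-host http://172.17.3.83:9092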
7.3. Migrating Red Hat Ceph Storage MDS to new nodes within the existing cluster You can migrate the MDS daemon when the Shared File Systems service (manila), deployed with either a cephfs-native or ceph-nfs back end, is part of the overcloud deployment. The MDS migration is performed by cephadm, and you move the daemon placement from a hosts-based approach to a label-based approach. This ensures that you can visualize the status of the cluster and where daemons are placed by using the ceph orch host command. You can also have a general view of how the daemons are co-located within a given host, as described in the Red Hat Knowledgebase article Red Hat Ceph Storage: Supported configurations. Prerequisites Complete the tasks in your Red Hat OpenStack Platform 17.1 environment. For more information, see Red Hat Ceph Storage prerequisites. Procedure Verify that the Red Hat Ceph Storage cluster is healthy and check the MDS status: Retrieve more detailed information on the Ceph File System (CephFS) MDS status: Check the OSD blocklist and clean up the client list: Note When a file system client is unresponsive or misbehaving, access to the file system might be forcibly terminated. This process is called eviction. Evicting a CephFS client prevents it from communicating further with MDS daemons and OSD daemons. Ordinarily, a blocklisted client cannot reconnect to the servers; you must unmount and then remount the client. However, permitting a client that was evicted to attempt to reconnect can be useful. Because CephFS uses the RADOS OSD blocklist to control client eviction, you can permit CephFS clients to reconnect by removing them from the blocklist. Get the hosts that are currently part of the Red Hat Ceph Storage cluster: Apply the MDS labels to the target nodes: Verify that all the hosts have the MDS label: Dump the current MDS spec: Edit the retrieved spec and replace the placement.hosts section with placement.label: Use the ceph orchestrator to apply the new MDS spec: This results in an increased number of MDS daemons. Check the new standby daemons that are temporarily added to the CephFS: To migrate MDS to the target nodes, set the MDS affinity that manages the MDS failover: Note It is possible to elect a dedicated MDS as "active" for a particular file system. To configure this preference, CephFS provides a configuration option for MDS called mds_join_fs, which enforces this affinity. When failing over MDS daemons, cluster monitors prefer standby daemons with mds_join_fs equal to the file system name with the failed rank. If no standby exists with mds_join_fs equal to the file system name, it chooses an unqualified standby as a replacement. Replace mds.mds.cephstorage-0.fqcshx with the daemon deployed on cephstorage-0 that you retrieved in the previous step. Remove the labels from the Controller nodes and force the MDS failover to the target node: The switch to the target node happens in the background. The new active MDS is the one that you set by using the mds_join_fs configuration option. Check the result of the failover and the newly deployed daemons:
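If the active MDS does not fail over on its own after you remove the labels, you can trigger the failover explicitly. The following is a sketch, assuming the file system is named cephfs and that rank 0 is the active rank; the monitors then promote a standby whose mds_join_fs matches the file system name:
# Fail the currently active rank so that the affine standby takes over
sudo cephadm shell -- ceph mds fail cephfs:0
# Confirm which daemon is now active
sudo cephadm shell -- ceph fs status cephfs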
7.4. Migrating Red Hat Ceph Storage RGW to external RHEL nodes For Hyperconverged Infrastructure (HCI) or dedicated Storage nodes, you must migrate the Ceph Object Gateway (RGW) daemons that are included in the Red Hat OpenStack Platform Controller nodes into the existing external Red Hat Enterprise Linux (RHEL) nodes. The external RHEL nodes typically include the Compute nodes for an HCI environment or Red Hat Ceph Storage nodes. Your environment must have Red Hat Ceph Storage 7 or later and be managed by cephadm or Ceph Orchestrator. Prerequisites Complete the tasks in your Red Hat OpenStack Platform 17.1 environment. For more information, see Red Hat Ceph Storage prerequisites. 7.4.1. Migrating the Red Hat Ceph Storage RGW back ends You must migrate your Ceph Object Gateway (RGW) back ends from your Controller nodes to your Red Hat Ceph Storage nodes. To ensure that you distribute the correct number of services to your available nodes, you use cephadm labels to refer to a group of nodes where a given daemon type is deployed. For more information about the cardinality diagram, see Red Hat Ceph Storage daemon cardinality. The following procedure assumes that you have three target nodes, cephstorage-0, cephstorage-1, cephstorage-2. Procedure Add the RGW label to the Red Hat Ceph Storage nodes that you want to migrate your RGW back ends to: Locate the RGW spec and dump it into the spec directory: This example assumes that 172.17.3.0/24 is the storage network. In the placement section, ensure that the label and rgw_frontend_port values are set: 1 Add the storage network where the RGW back ends are deployed. 2 Replace the list of Controller nodes with the label: rgw entry. 3 Change the rgw_frontend_port value to 8090 to avoid conflicts with the Ceph ingress daemon. 4 Optional: if TLS is enabled, add the SSL certificate and key concatenation as described in Configuring RGW with TLS for an external Red Hat Ceph Storage cluster in Configuring persistent storage. Apply the new RGW spec by using the orchestrator CLI: This command triggers the redeployment, for example: Ensure that the new RGW back ends are reachable on the new ports, so that you can enable an ingress daemon on port 8080 later. Log in to each Red Hat Ceph Storage node that includes RGW and add the iptables rule to allow connections to ports 8080 and 8090 on the Red Hat Ceph Storage nodes: If nftables is used in the existing deployment, edit /etc/nftables/tripleo-rules.nft and add the following content: Save the file. Restart the nftables service: Verify that the rules are applied: From a Controller node, such as controller-0, try to reach the RGW back ends: You should observe the following output: Repeat the verification for each node where an RGW daemon is deployed. If you migrated RGW back ends to the Red Hat Ceph Storage nodes, there is no internalAPI network, except in the case of HCI nodes. You must reconfigure the RGW keystone endpoint to point to the external network that you propagated: Replace <keystone_endpoint> with the Identity service (keystone) internal endpoint of the service that is deployed in the OpenStackControlPlane CR when you adopt the Identity service. For more information, see Adopting the Identity service.
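A quick check can confirm both the updated keystone URL and that every relocated back end answers on the new port. The following is a sketch, assuming the .storage DNS suffix and the three example nodes used in this procedure; adjust the host list to your environment:
sudo cephadm shell -- ceph config dump | grep rgw_keystone_url
for h in cephstorage-0 cephstorage-1 cephstorage-2; do
  # Expect HTTP 200 from each RGW back end on port 8090
  curl -s -o /dev/null -w "$h: %{http_code}\n" "http://$h.storage:8090"
done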
7.4.2. Deploying a Red Hat Ceph Storage ingress daemon To deploy the Ceph ingress daemon, you perform the following actions: Remove the existing ceph_rgw configuration. Clean up the configuration created by director. Redeploy the Object Storage service (swift). When you deploy the ingress daemon, two new containers are created: HAProxy, which you use to reach the back ends. Keepalived, which you use to own the virtual IP address. You use the rgw label to distribute the ingress daemon to only the number of nodes that host Ceph Object Gateway (RGW) daemons. For more information about distributing daemons among your nodes, see Red Hat Ceph Storage daemon cardinality. After you complete this procedure, you can reach the RGW back end from the ingress daemon and use RGW through the Object Storage service CLI. Procedure Log in to each Controller node and remove the following configuration from the /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg file: Restart haproxy-bundle and confirm that it is started: Confirm that no process is connected to port 8080: You can expect the Object Storage service (swift) CLI to fail to establish the connection: Set the required images for both HAProxy and Keepalived: Create a file called rgw_ingress on controller-0: Paste the following content into the rgw_ingress file: --- service_type: ingress service_id: rgw.rgw placement: label: rgw spec: backend_service: rgw.rgw virtual_ip: 10.0.0.89/24 frontend_port: 8080 monitor_port: 8898 virtual_interface_networks: - <external_network> ssl_cert: ... Replace <external_network> with your external network, for example, 10.0.0.0/24. For more information, see Completing prerequisites for migrating Red Hat Ceph Storage RGW. If TLS is enabled, add the SSL certificate and key concatenation as described in Configuring RGW with TLS for an external Red Hat Ceph Storage cluster in Configuring persistent storage. Apply the rgw_ingress spec by using the Ceph orchestrator CLI: Wait until the ingress is deployed and query the resulting endpoint: 7.4.3. Updating the Object Storage service endpoints You must update the Object Storage service (swift) endpoints to point to the new virtual IP address (VIP) that you reserved on the same network that you used to deploy the RGW ingress. Procedure List the current endpoints: Update the endpoints to point to the ingress VIP: Repeat this step for both internal and admin endpoints. Test the migrated service: 7.5. Migrating Red Hat Ceph Storage RBD to external RHEL nodes For Hyperconverged Infrastructure (HCI) or dedicated Storage nodes that are running Red Hat Ceph Storage 7 or later, you must migrate the daemons that are included in the Red Hat OpenStack Platform control plane into the existing external Red Hat Enterprise Linux (RHEL) nodes. The external RHEL nodes typically include the Compute nodes for an HCI environment or dedicated storage nodes. Prerequisites Complete the tasks in your Red Hat OpenStack Platform 17.1 environment. For more information, see Red Hat Ceph Storage prerequisites. 7.5.1. Migrating Ceph Manager daemons to Red Hat Ceph Storage nodes You must migrate your Ceph Manager daemons from the Red Hat OpenStack Platform (RHOSP) Controller nodes to a set of target nodes. Target nodes are either existing Red Hat Ceph Storage nodes, or RHOSP Compute nodes if Red Hat Ceph Storage is deployed by director with a Hyperconverged Infrastructure (HCI) topology. Note The following procedure uses cephadm and the Ceph Orchestrator to drive the Ceph Manager migration, and the Ceph spec to modify the placement and reschedule the Ceph Manager daemons. Ceph Manager runs in an active/passive model. It also provides many modules, including the Ceph Orchestrator. Every potential module, such as the Ceph Dashboard, that is provided by ceph-mgr is implicitly migrated with Ceph Manager. Procedure SSH into the target node and enable the firewall rules that are required to reach a Ceph Manager service: Replace <target_node> with the hostname of each host that is listed in the Red Hat Ceph Storage environment. Run ceph orch host ls to see the list of hosts. Repeat this step for each target node.
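Rather than repeating the rule by hand on every node, a small loop can apply it to each target. The following is a sketch, assuming the heat-admin user and the three example hosts from this procedure:
dports="6800:7300"
for node in cephstorage-0 cephstorage-1 cephstorage-2; do
  # Open the Ceph Manager port range on each target node
  ssh heat-admin@"$node" sudo iptables -I INPUT -p tcp --match multiport --dports "$dports" -j ACCEPT
done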
Check that the rules are properly applied to the target node and persist them: If nftables is used in the existing deployment, edit /etc/nftables/tripleo-rules.nft and add the following content: Save the file. Restart the nftables service: Verify that the rules are applied: Prepare the target node to host the new Ceph Manager daemon, and add the mgr label to the target node: Repeat steps 1-7 for each target node that hosts a Ceph Manager daemon. Get the Ceph Manager spec: $ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"} $ mkdir -p ${SPEC_DIR} $ sudo cephadm shell -- ceph orch ls --export mgr > ${SPEC_DIR}/mgr Edit the retrieved spec and add the label: mgr section to the placement section: service_type: mgr service_id: mgr placement: label: mgr Save the spec. Apply the spec with cephadm by using the Ceph Orchestrator: Verification Verify that the new Ceph Manager daemons are created on the target nodes: The Ceph Manager daemon count should match the number of hosts where the mgr label is added. Note The migration does not shrink the Ceph Manager daemons. The count grows by the number of target nodes, and migrating Ceph Monitor daemons to Red Hat Ceph Storage nodes decommissions the standby Ceph Manager instances. For more information, see Migrating Ceph Monitor daemons to Red Hat Ceph Storage nodes. 7.5.2. Migrating Ceph Monitor daemons to Red Hat Ceph Storage nodes You must move Ceph Monitor daemons from the Red Hat OpenStack Platform (RHOSP) Controller nodes to a set of target nodes. Target nodes are either existing Red Hat Ceph Storage nodes, or RHOSP Compute nodes if Red Hat Ceph Storage is deployed by director with a Hyperconverged Infrastructure (HCI) topology. Additional Ceph Monitors are deployed to the target nodes, and they are promoted as _admin nodes that you can use to manage the Red Hat Ceph Storage cluster and perform day 2 operations. To migrate the Ceph Monitor daemons, you must perform the following high-level steps: Configure the target nodes for Ceph Monitor migration. Drain the source node. Migrate your Ceph Monitor IP addresses to the target nodes. Redeploy the Ceph Monitor on the target node. Verify that the Red Hat Ceph Storage cluster is healthy. Repeat these steps for any additional Controller node that hosts a Ceph Monitor until you migrate all the Ceph Monitor daemons to the target nodes. 7.5.2.1. Configuring target nodes for Ceph Monitor migration Prepare the target Red Hat Ceph Storage nodes for the Ceph Monitor migration by performing the following actions: Enable firewall rules on a target node and persist them. Create a spec that is based on labels and apply it by using cephadm. Ensure that the Ceph Monitor quorum is maintained during the migration process. Procedure SSH into the target node and enable the firewall rules that are required to reach a Ceph Monitor service: Replace <target_node> with the hostname of the node that hosts the new Ceph Monitor. Check that the rules are properly applied to the target node and persist them: If nftables is used in the existing deployment, edit /etc/nftables/tripleo-rules.nft and add the following content: Save the file. Restart the nftables service: Verify that the rules are applied: To migrate the existing Ceph Monitors to the target Red Hat Ceph Storage nodes, retrieve the Red Hat Ceph Storage mon spec from the first Ceph Monitor, or the first Controller node: Add the label:mon section to the placement section: Save the spec.
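Before you apply it, you can preview what the orchestrator would schedule. The following is a sketch that uses the dry-run mode of ceph orch apply, assuming the spec was saved under the ceph_specs directory used earlier:
SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
# Show the scheduling result without changing the cluster
sudo cephadm shell -m ${SPEC_DIR}/mon -- ceph orch apply -i /mnt/mon --dry-run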
Apply the spec with cephadm by using the Ceph Orchestrator: Extend the mon label to the remaining Red Hat Ceph Storage target nodes to ensure that quorum is maintained during the migration process: Note Applying the mon spec allows the existing strategy to use labels instead of hosts. As a result, any node with the mon label can host a Ceph Monitor daemon. Perform this step only once to avoid multiple iterations when multiple Ceph Monitors are migrated. Check the status of Red Hat Ceph Storage and the Ceph Orchestrator daemon list. Ensure that the Ceph Monitors are in quorum and listed by the ceph orch command: Set up a Ceph client on the first Controller node; it is used during the rest of the procedure to interact with Red Hat Ceph Storage. Set up an additional IP address on the storage network that is used to interact with Red Hat Ceph Storage when the first Controller node is decommissioned: Back up the content of /etc/ceph to the ceph_client_backup directory. Edit /etc/os-net-config/config.yaml and add - ip_netmask: 172.17.3.200 after the IP address on the VLAN that belongs to the storage network. Replace 172.17.3.200 with any other available IP address on the storage network that can be statically assigned to controller-0. Save the file and refresh the controller-0 network configuration: Verify that the IP address is present on the Controller node: Ping the IP address and confirm that it is reachable: Verify that you can interact with the Red Hat Ceph Storage cluster: Next steps Proceed to the next step, Draining the source node. 7.5.2.2. Draining the source node Drain the source node and remove the source node host from the Red Hat Ceph Storage cluster. Procedure On the source node, back up the /etc/ceph/ directory, then run cephadm to get a shell for the Red Hat Ceph Storage cluster from the source node: Identify the active ceph-mgr instance: Fail the ceph-mgr if it is active on the source node: Replace <mgr_instance> with the Ceph Manager daemon to fail. From the cephadm shell, remove the labels on the source node: Replace <source_node> with the hostname of the source node. Optional: Ensure that you remove the Ceph Monitor daemon from the source node if it is still running: Drain the source node to remove any leftover daemons: Remove the source node host from the Red Hat Ceph Storage cluster: Note The source node is no longer part of the cluster, and should not appear in the Red Hat Ceph Storage host list when you run sudo cephadm shell -- ceph orch host ls. However, if you run sudo podman ps on the source node, the list might show that both Ceph Monitors and Ceph Managers are still running. To clean up the existing containers and remove the cephadm data from the source node, contact Red Hat Support. Confirm that the mons are still in quorum: Next steps Proceed to the next step, Migrating the Ceph Monitor IP address.
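Before you proceed, the quorum can also be confirmed in a scriptable way from the client node. The following is a sketch, assuming jq is installed on the host:
# Compact summary of the monitor map and quorum
sudo cephadm shell -- ceph mon stat
# List only the names of the monitors currently in quorum
sudo cephadm shell -- ceph quorum_status -f json | jq -r '.quorum_names[]'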
7.5.2.3. Migrating the Ceph Monitor IP address You must migrate your Ceph Monitor IP addresses to the target Red Hat Ceph Storage nodes. The IP address migration assumes that the target nodes are originally deployed by director and that the network configuration is managed by os-net-config. Procedure Get the original Ceph Monitor IP addresses from the $HOME/ceph_client_backup/ceph.conf file on the mon_host line, for example: Match the IP address retrieved in the previous step with the storage network IP addresses on the source node, and find the Ceph Monitor IP address: Confirm that the Ceph Monitor IP address is present in the os-net-config configuration that is located in the /etc/os-net-config directory on the source node: Edit the /etc/os-net-config/config.yaml file and remove the ip_netmask line. Save the file and refresh the node network configuration: Verify that the IP address is no longer present on the source node, for example: SSH into the target node, for example cephstorage-0, and add the IP address for the new Ceph Monitor. On the target node, edit /etc/os-net-config/config.yaml and add the - ip_netmask: 172.17.3.60 line that you removed from the source node. Save the file and refresh the node network configuration: Verify that the IP address is present on the target node. From the Ceph client node, controller-0, ping the IP address that is migrated to the target node and confirm that it is still reachable: Next steps Proceed to the next step, Redeploying a Ceph Monitor on the target node. 7.5.2.4. Redeploying a Ceph Monitor on the target node You use the IP address that you migrated to the target node to redeploy the Ceph Monitor on the target node. Procedure From the Ceph client node, for example controller-0, get the Ceph mon spec: Edit the retrieved spec and add the unmanaged: true keyword: service_type: mon service_id: mon placement: label: mon unmanaged: true Save the spec. Apply the spec with cephadm by using the Ceph Orchestrator: The Ceph Monitor daemons are marked as unmanaged, and you can now redeploy the existing daemon and bind it to the migrated IP address. Delete the existing Ceph Monitor on the target node: Replace <target_node> with the hostname of the target node that is included in the Red Hat Ceph Storage cluster. Redeploy the new Ceph Monitor on the target node by using the migrated IP address: Replace <ip_address> with the migrated IP address. Get the Ceph Monitor spec: Edit the retrieved spec and set the unmanaged keyword to false: service_type: mon service_id: mon placement: label: mon unmanaged: false Save the spec. Apply the spec with cephadm by using the Ceph Orchestrator: The new Ceph Monitor runs on the target node with the original IP address. Identify the running mgr: Refresh the Ceph Manager information by force-failing it: Refresh the OSD information: Next steps Repeat the procedure starting from the step Draining the source node for each node that you want to decommission. Then proceed to the next step, Verifying the Red Hat Ceph Storage cluster after Ceph Monitor migration. 7.5.2.5. Verifying the Red Hat Ceph Storage cluster after Ceph Monitor migration After you finish migrating your Ceph Monitor daemons to the target nodes, verify that the Red Hat Ceph Storage cluster is healthy. Procedure Verify that the Red Hat Ceph Storage cluster is healthy: Verify that the Red Hat Ceph Storage mons are running with the original IP addresses. SSH into the target nodes and verify that the Ceph Monitor daemons are bound to the expected IP and port:
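You can run this check from the client node over SSH. The following is a sketch, assuming the heat-admin user and the migrated example address on cephstorage-0; the monitor should listen on ports 3300 (msgr2) and 6789 (msgr1):
ssh heat-admin@cephstorage-0 sudo ss -lntp | grep -E ':3300|:6789'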
7.6. Updating the Red Hat Ceph Storage cluster Ceph Dashboard configuration If the Ceph Dashboard is part of the enabled Ceph Manager modules, you need to reconfigure the failover settings. Procedure Regenerate the following Red Hat Ceph Storage configuration keys to point to the correct mgr container: For each retrieved mgr daemon, update the corresponding entry in the Red Hat Ceph Storage configuration:
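For example, the following sketch lists the per-daemon keys and resets one of them; the daemon name cephstorage-0.xxyyzz and the address 172.17.3.83 are placeholders that follow the output format shown earlier, not values to copy verbatim:
# List the current per-mgr dashboard addresses
sudo cephadm shell -- ceph config dump | grep mgr/dashboard
# Point a relocated mgr daemon at its storage-network address
sudo cephadm shell -- ceph config set mgr mgr/dashboard/cephstorage-0.xxyyzz/server_addr 172.17.3.83
| [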
"| | | | |----|---------------------|-------------| | osd | mon/mgr/crash | rgw/ingress | | osd | mon/mgr/crash | rgw/ingress | | osd | mon/mgr/crash | rgw/ingress |",
"| | | | |-----|---------------------|-------------| | osd | mon/mgr/crash | rgw/ingress | | osd | mon/mgr/crash | rgw/ingress | | osd | mon/mgr/crash | dashboard/grafana | | osd | rgw/ingress | (free) |",
"| | | | |-----|---------------------|-------------------------| | osd | mon/mgr/crash | rgw/ingress | | osd | mon/mgr/crash | rgw/ingress | | osd | mon/mgr/crash | mds/ganesha/ingress | | osd | rgw/ingress | mds/ganesha/ingress | | osd | mds/ganesha/ingress | dashboard/grafana |",
"for item in USD(sudo cephadm shell -- ceph orch host ls --format json | jq -r '.[].hostname'); do sudo cephadm shell -- ceph orch host label add USDitem monitoring; done",
"[tripleo-admin@controller-0 ~]USD sudo cephadm shell -- ceph orch host ls HOST ADDR LABELS cephstorage-0.redhat.local 192.168.24.11 osd monitoring cephstorage-1.redhat.local 192.168.24.12 osd monitoring cephstorage-2.redhat.local 192.168.24.47 osd monitoring controller-0.redhat.local 192.168.24.35 _admin mon mgr monitoring controller-1.redhat.local 192.168.24.53 mon _admin mgr monitoring controller-2.redhat.local 192.168.24.10 mon _admin mgr monitoring",
"for i in 0 1 2; do sudo cephadm shell -- ceph orch host label rm \"controller-USDi.redhat.local\" monitoring; done Removed label monitoring from host controller-0.redhat.local Removed label monitoring from host controller-1.redhat.local Removed label monitoring from host controller-2.redhat.local",
"function export_spec { local component=\"USD1\" local target_dir=\"USD2\" sudo cephadm shell -- ceph orch ls --export \"USDcomponent\" > \"USDtarget_dir/USDcomponent\" } SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} mkdir -p USD{SPEC_DIR} for m in grafana prometheus alertmanager; do export_spec \"USDm\" \"USDSPEC_DIR\" done",
"service_type: grafana service_name: grafana placement: label: monitoring networks: - 172.17.3.0/24 spec: port: 3100",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} function migrate_daemon { local component=\"USD1\" local target_dir=\"USD2\" sudo cephadm shell -m \"USDtarget_dir\" -- ceph orch apply -i /mnt/ceph_specs/USDcomponent } for m in grafana prometheus alertmanager; do migrate_daemon \"USDm\" \"USDSPEC_DIR\" done",
"ceph orch ps | grep -iE \"(prome|alert|grafa)\" alertmanager.cephstorage-2 cephstorage-2.redhat.local 172.17.3.144:9093,9094 grafana.cephstorage-0 cephstorage-0.redhat.local 172.17.3.83:3100 prometheus.cephstorage-1 cephstorage-1.redhat.local 172.17.3.53:9092",
"ceph config dump | grep -i dashboard mgr advanced mgr/dashboard/ALERTMANAGER_API_HOST http://172.17.3.83:9093 mgr advanced mgr/dashboard/GRAFANA_API_URL https://172.17.3.144:3100 mgr advanced mgr/dashboard/PROMETHEUS_API_HOST http://172.17.3.83:9092 mgr advanced mgr/dashboard/controller-0.ycokob/server_addr 172.17.3.33 mgr advanced mgr/dashboard/controller-1.lmzpuc/server_addr 172.17.3.147 mgr advanced mgr/dashboard/controller-2.xpdgfl/server_addr 172.17.3.138",
"ceph orch ps | grep -iE \"(prome|alert|grafa)\" alertmanager.cephstorage-0 cephstorage-0.redhat.local 172.17.3.83:9093,9094 alertmanager.cephstorage-1 cephstorage-1.redhat.local 172.17.3.53:9093,9094 alertmanager.cephstorage-2 cephstorage-2.redhat.local 172.17.3.144:9093,9094 grafana.cephstorage-0 cephstorage-0.redhat.local 172.17.3.83:3100 grafana.cephstorage-1 cephstorage-1.redhat.local 172.17.3.53:3100 grafana.cephstorage-2 cephstorage-2.redhat.local 172.17.3.144:3100 prometheus.cephstorage-0 cephstorage-0.redhat.local 172.17.3.83:9092 prometheus.cephstorage-1 cephstorage-1.redhat.local 172.17.3.53:9092 prometheus.cephstorage-2 cephstorage-2.redhat.local 172.17.3.144:9092",
"ceph config dump mgr advanced mgr/dashboard/ALERTMANAGER_API_HOST http://172.17.3.83:9093 mgr advanced mgr/dashboard/PROMETHEUS_API_HOST http://172.17.3.83:9092 mgr advanced mgr/dashboard/GRAFANA_API_URL https://172.17.3.144:3100",
"sudo cephadm shell -- ceph fs ls name: cephfs, metadata pool: manila_metadata, data pools: [manila_data ] sudo cephadm shell -- ceph mds stat cephfs:1 {0=mds.controller-2.oebubl=up:active} 2 up:standby sudo cephadm shell -- ceph fs status cephfs cephfs - 0 clients ====== RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active mds.controller-2.oebubl Reqs: 0 /s 696 196 173 0 POOL TYPE USED AVAIL manila_metadata metadata 152M 141G manila_data data 3072M 141G STANDBY MDS mds.controller-0.anwiwd mds.controller-1.cwzhog",
"sudo cephadm shell -- ceph fs dump e8 enable_multiple, ever_enabled_multiple: 1,1 default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} legacy client fscid: 1 Filesystem 'cephfs' (1) fs_name cephfs epoch 5 flags 12 joinable allow_snaps allow_multimds_snaps created 2024-01-18T19:04:01.633820+0000 modified 2024-01-18T19:04:05.393046+0000 tableserver 0 root 0 session_timeout 60 session_autoclose 300 max_file_size 1099511627776 required_client_features {} last_failure 0 last_failure_osd_epoch 0 compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2} max_mds 1 in 0 up {0=24553} failed damaged stopped data_pools [7] metadata_pool 9 inline_data disabled balancer standby_count_wanted 1 [mds.mds.controller-2.oebubl{0:24553} state up:active seq 2 addr [v2:172.17.3.114:6800/680266012,v1:172.17.3.114:6801/680266012] compat {c=[1],r=[1],i=[7ff]}] Standby daemons: [mds.mds.controller-0.anwiwd{-1:14715} state up:standby seq 1 addr [v2:172.17.3.20:6802/3969145800,v1:172.17.3.20:6803/3969145800] compat {c=[1],r=[1],i=[7ff]}] [mds.mds.controller-1.cwzhog{-1:24566} state up:standby seq 1 addr [v2:172.17.3.43:6800/2227381308,v1:172.17.3.43:6801/2227381308] compat {c=[1],r=[1],i=[7ff]}] dumped fsmap epoch 8",
"sudo cephadm shell -- ceph osd blocklist ls .. .. for item in USD(sudo cephadm shell -- ceph osd blocklist ls | awk '{print USD1}'); do sudo cephadm shell -- ceph osd blocklist rm USDitem; done",
"ceph orch host ls HOST ADDR LABELS STATUS cephstorage-0.redhat.local 192.168.24.25 osd cephstorage-1.redhat.local 192.168.24.50 osd cephstorage-2.redhat.local 192.168.24.47 osd controller-0.redhat.local 192.168.24.24 _admin mgr mon controller-1.redhat.local 192.168.24.42 mgr _admin mon controller-2.redhat.local 192.168.24.37 mgr _admin mon 6 hosts in cluster",
"for item in USD(sudo cephadm shell -- ceph orch host ls --format json | jq -r '.[].hostname'); do sudo cephadm shell -- ceph orch host label add USDitem mds; done",
"sudo cephadm shell -- ceph orch host ls HOST ADDR LABELS cephstorage-0.redhat.local 192.168.24.11 osd mds cephstorage-1.redhat.local 192.168.24.12 osd mds cephstorage-2.redhat.local 192.168.24.47 osd mds controller-0.redhat.local 192.168.24.35 _admin mon mgr mds controller-1.redhat.local 192.168.24.53 mon _admin mgr mds controller-2.redhat.local 192.168.24.10 mon _admin mgr mds",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} mkdir -p USD{SPEC_DIR} sudo cephadm shell -- ceph orch ls --export mds > USD{SPEC_DIR}/mds",
"service_type: mds service_id: mds service_name: mds.mds placement: label: mds",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} sudo cephadm shell -m USD{SPEC_DIR}/mds -- ceph orch apply -i /mnt/mds Scheduling new mds deployment",
"sudo cephadm shell -- ceph fs dump Active standby_count_wanted 1 [mds.mds.controller-0.awzplm{0:463158} state up:active seq 307 join_fscid=1 addr [v2:172.17.3.20:6802/51565420,v1:172.17.3.20:6803/51565420] compat {c=[1],r=[1],i=[7ff]}] Standby daemons: [mds.mds.cephstorage-1.jkvomp{-1:463800} state up:standby seq 1 join_fscid=1 addr [v2:172.17.3.135:6820/2075903648,v1:172.17.3.135:6821/2075903648] compat {c=[1],r=[1],i=[7ff]}] [mds.mds.controller-2.gfrqvc{-1:475945} state up:standby seq 1 addr [v2:172.17.3.114:6800/2452517189,v1:172.17.3.114:6801/2452517189] compat {c=[1],r=[1],i=[7ff]}] [mds.mds.cephstorage-0.fqcshx{-1:476503} state up:standby seq 1 join_fscid=1 addr [v2:172.17.3.92:6820/4120523799,v1:172.17.3.92:6821/4120523799] compat {c=[1],r=[1],i=[7ff]}] [mds.mds.cephstorage-2.gnfhfe{-1:499067} state up:standby seq 1 addr [v2:172.17.3.79:6820/2448613348,v1:172.17.3.79:6821/2448613348] compat {c=[1],r=[1],i=[7ff]}] [mds.mds.controller-1.tyiziq{-1:499136} state up:standby seq 1 addr [v2:172.17.3.43:6800/3615018301,v1:172.17.3.43:6801/3615018301] compat {c=[1],r=[1],i=[7ff]}]",
"sudo cephadm shell -- ceph config set mds.mds.cephstorage-0.fqcshx mds_join_fs cephfs",
"for i in 0 1 2; do sudo cephadm shell -- ceph orch host label rm \"controller-USDi.redhat.local\" mds; done Removed label mds from host controller-0.redhat.local Removed label mds from host controller-1.redhat.local Removed label mds from host controller-2.redhat.local",
"sudo cephadm shell -- ceph fs dump ... ... standby_count_wanted 1 [mds.mds.cephstorage-0.fqcshx{0:476503} state up:active seq 168 join_fscid=1 addr [v2:172.17.3.92:6820/4120523799,v1:172.17.3.92:6821/4120523799] compat {c=[1],r=[1],i=[7ff]}] Standby daemons: [mds.mds.cephstorage-2.gnfhfe{-1:499067} state up:standby seq 1 addr [v2:172.17.3.79:6820/2448613348,v1:172.17.3.79:6821/2448613348] compat {c=[1],r=[1],i=[7ff]}] [mds.mds.cephstorage-1.jkvomp{-1:499760} state up:standby seq 1 join_fscid=1 addr [v2:172.17.3.135:6820/452139733,v1:172.17.3.135:6821/452139733] compat {c=[1],r=[1],i=[7ff]}] sudo cephadm shell -- ceph orch ls NAME PORTS RUNNING REFRESHED AGE PLACEMENT crash 6/6 10m ago 10d * mds.mds 3/3 10m ago 32m label:mds sudo cephadm shell -- ceph orch ps | grep mds mds.mds.cephstorage-0.fqcshx cephstorage-0.redhat.local running (79m) 3m ago 79m 27.2M - 17.2.6-100.el9cp 1af7b794f353 2a2dc5ba6d57 mds.mds.cephstorage-1.jkvomp cephstorage-1.redhat.local running (79m) 3m ago 79m 21.5M - 17.2.6-100.el9cp 1af7b794f353 7198b87104c8 mds.mds.cephstorage-2.gnfhfe cephstorage-2.redhat.local running (79m) 3m ago 79m 24.2M - 17.2.6-100.el9cp 1af7b794f353 f3cb859e2a15",
"sudo cephadm shell -- ceph orch host label add cephstorage-0 rgw; sudo cephadm shell -- ceph orch host label add cephstorage-1 rgw; sudo cephadm shell -- ceph orch host label add cephstorage-2 rgw; Added label rgw to host cephstorage-0 Added label rgw to host cephstorage-1 Added label rgw to host cephstorage-2 sudo cephadm shell -- ceph orch host ls HOST ADDR LABELS STATUS cephstorage-0 192.168.24.54 osd rgw cephstorage-1 192.168.24.44 osd rgw cephstorage-2 192.168.24.30 osd rgw controller-0 192.168.24.45 _admin mon mgr controller-1 192.168.24.11 _admin mon mgr controller-2 192.168.24.38 _admin mon mgr 6 hosts in cluster",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} mkdir -p USD{SPEC_DIR} sudo cephadm shell -- ceph orch ls --export rgw > USD{SPEC_DIR}/rgw cat USD{SPEC_DIR}/rgw networks: - 172.17.3.0/24 placement: hosts: - controller-0 - controller-1 - controller-2 service_id: rgw service_name: rgw.rgw service_type: rgw spec: rgw_frontend_port: 8080 rgw_realm: default rgw_zone: default",
"--- networks: - 172.17.3.0/24 1 placement: label: rgw 2 service_id: rgw service_name: rgw.rgw service_type: rgw spec: rgw_frontend_port: 8090 3 rgw_realm: default rgw_zone: default rgw_frontend_ssl_certificate: ... 4 ssl: true",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} sudo cephadm shell -m USD{SPEC_DIR}/rgw -- ceph orch apply -i /mnt/rgw",
"osd.9 cephstorage-2 rgw.rgw.cephstorage-0.wsjlgx cephstorage-0 172.17.3.23:8090 starting rgw.rgw.cephstorage-1.qynkan cephstorage-1 172.17.3.26:8090 starting rgw.rgw.cephstorage-2.krycit cephstorage-2 172.17.3.81:8090 starting rgw.rgw.controller-1.eyvrzw controller-1 172.17.3.146:8080 running (5h) rgw.rgw.controller-2.navbxa controller-2 172.17.3.66:8080 running (5h) osd.9 cephstorage-2 rgw.rgw.cephstorage-0.wsjlgx cephstorage-0 172.17.3.23:8090 running (19s) rgw.rgw.cephstorage-1.qynkan cephstorage-1 172.17.3.26:8090 running (16s) rgw.rgw.cephstorage-2.krycit cephstorage-2 172.17.3.81:8090 running (13s)",
"iptables -I INPUT -p tcp -m tcp --dport 8080 -m conntrack --ctstate NEW -m comment --comment \"ceph rgw ingress\" -j ACCEPT iptables -I INPUT -p tcp -m tcp --dport 8090 -m conntrack --ctstate NEW -m comment --comment \"ceph rgw backends\" -j ACCEPT sudo iptables-save sudo systemctl restart iptables",
"100 ceph_rgw {'dport': ['8080','8090']} add rule inet filter TRIPLEO_INPUT tcp dport { 8080,8090 } ct state new counter accept comment \"100 ceph_rgw\"",
"sudo systemctl restart nftables",
"sudo nft list ruleset | grep ceph_rgw",
"curl http://cephstorage-0.storage:8090;",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?><ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>",
"ceph config dump | grep keystone global basic rgw_keystone_url http://172.16.1.111:5000 ceph config set global rgw_keystone_url http://<keystone_endpoint>:5000",
"listen ceph_rgw bind 10.0.0.103:8080 transparent mode http balance leastconn http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } http-request set-header X-Forwarded-Port %[dst_port] option httpchk GET /swift/healthcheck option httplog option forwardfor server controller-0.storage.redhat.local 172.17.3.73:8080 check fall 5 inter 2000 rise 2 server controller-1.storage.redhat.local 172.17.3.146:8080 check fall 5 inter 2000 rise 2 server controller-2.storage.redhat.local 172.17.3.156:8080 check fall 5 inter 2000 rise 2",
"sudo pcs resource restart haproxy-bundle haproxy-bundle successfully restarted sudo pcs status | grep haproxy * Container bundle set: haproxy-bundle [undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-haproxy:pcmklatest]: * haproxy-bundle-podman-0 (ocf:heartbeat:podman): Started controller-0 * haproxy-bundle-podman-1 (ocf:heartbeat:podman): Started controller-1 * haproxy-bundle-podman-2 (ocf:heartbeat:podman): Started controller-2",
"ss -antop | grep 8080",
"(overcloud) [root@cephstorage-0 ~]# swift list HTTPConnectionPool(host='10.0.0.103', port=8080): Max retries exceeded with url: /swift/v1/AUTH_852f24425bb54fa896476af48cbe35d3?format=json (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc41beb0430>: Failed to establish a new connection: [Errno 111] Connection refused'))",
"ceph config set mgr mgr/cephadm/container_image_haproxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest ceph config set mgr mgr/cephadm/container_image_keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} vim USD{SPEC_DIR}/rgw_ingress",
"--- service_type: ingress service_id: rgw.rgw placement: label: rgw spec: backend_service: rgw.rgw virtual_ip: 10.0.0.89/24 frontend_port: 8080 monitor_port: 8898 virtual_interface_networks: - <external_network> ssl_cert:",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} cephadm shell -m USD{SPEC_DIR}/rgw_ingress -- ceph orch apply -i /mnt/rgw_ingress",
"sudo cephadm shell -- ceph orch ls NAME PORTS RUNNING REFRESHED AGE PLACEMENT crash 6/6 6m ago 3d * ingress.rgw.rgw 10.0.0.89:8080,8898 6/6 37s ago 60s label:rgw mds.mds 3/3 6m ago 3d controller-0;controller-1;controller-2 mgr 3/3 6m ago 3d controller-0;controller-1;controller-2 mon 3/3 6m ago 3d controller-0;controller-1;controller-2 osd.default_drive_group 15 37s ago 3d cephstorage-0;cephstorage-1;cephstorage-2 rgw.rgw ?:8090 3/3 37s ago 4m label:rgw",
"curl 10.0.0.89:8080 --- <?xml version=\"1.0\" encoding=\"UTF-8\"?><ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>[ceph: root@controller-0 /]# -",
"(overcloud) [stack@undercloud-0 ~]USD openstack endpoint list | grep object | 1326241fb6b6494282a86768311f48d1 | regionOne | swift | object-store | True | internal | http://172.17.3.68:8080/swift/v1/AUTH_%(project_id)s | | 8a34817a9d3443e2af55e108d63bb02b | regionOne | swift | object-store | True | public | http://10.0.0.103:8080/swift/v1/AUTH_%(project_id)s | | fa72f8b8b24e448a8d4d1caaeaa7ac58 | regionOne | swift | object-store | True | admin | http://172.17.3.68:8080/swift/v1/AUTH_%(project_id)s |",
"(overcloud) [stack@undercloud-0 ~]USD openstack endpoint set --url \"http://10.0.0.89:8080/swift/v1/AUTH_%(project_id)s\" 95596a2d92c74c15b83325a11a4f07a3 (overcloud) [stack@undercloud-0 ~]USD openstack endpoint list | grep object-store | 6c7244cc8928448d88ebfad864fdd5ca | regionOne | swift | object-store | True | internal | http://172.17.3.79:8080/swift/v1/AUTH_%(project_id)s | | 95596a2d92c74c15b83325a11a4f07a3 | regionOne | swift | object-store | True | public | http://10.0.0.89:8080/swift/v1/AUTH_%(project_id)s | | e6d0599c5bf24a0fb1ddf6ecac00de2d | regionOne | swift | object-store | True | admin | http://172.17.3.79:8080/swift/v1/AUTH_%(project_id)s |",
"(overcloud) [stack@undercloud-0 ~]USD swift list --debug DEBUG:swiftclient:Versionless auth_url - using http://10.0.0.115:5000/v3 as endpoint DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request to http://10.0.0.115:5000/v3/auth/tokens DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 10.0.0.115:5000 DEBUG:urllib3.connectionpool:http://10.0.0.115:5000 \"POST /v3/auth/tokens HTTP/1.1\" 201 7795 DEBUG:keystoneclient.auth.identity.v3.base:{\"token\": {\"methods\": [\"password\"], \"user\": {\"domain\": {\"id\": \"default\", \"name\": \"Default\"}, \"id\": \"6f87c7ffdddf463bbc633980cfd02bb3\", \"name\": \"admin\", \"password_expires_at\": null}, DEBUG:swiftclient:REQ: curl -i http://10.0.0.89:8080/swift/v1/AUTH_852f24425bb54fa896476af48cbe35d3?format=json -X GET -H \"X-Auth-Token: gAAAAABj7KHdjZ95syP4c8v5a2zfXckPwxFQZYg0pgWR42JnUs83CcKhYGY6PFNF5Cg5g2WuiYwMIXHm8xftyWf08zwTycJLLMeEwoxLkcByXPZr7kT92ApT-36wTfpi-zbYXd1tI5R00xtAzDjO3RH1kmeLXDgIQEVp0jMRAxoVH4zb-DVHUos\" -H \"Accept-Encoding: gzip\" DEBUG:swiftclient:RESP STATUS: 200 OK DEBUG:swiftclient:RESP HEADERS: {'content-length': '2', 'x-timestamp': '1676452317.72866', 'x-account-container-count': '0', 'x-account-object-count': '0', 'x-account-bytes-used': '0', 'x-account-bytes-used-actual': '0', 'x-account-storage-policy-default-placement-container-count': '0', 'x-account-storage-policy-default-placement-object-count': '0', 'x-account-storage-policy-default-placement-bytes-used': '0', 'x-account-storage-policy-default-placement-bytes-used-actual': '0', 'x-trans-id': 'tx00000765c4b04f1130018-0063eca1dd-1dcba-default', 'x-openstack-request-id': 'tx00000765c4b04f1130018-0063eca1dd-1dcba-default', 'accept-ranges': 'bytes', 'content-type': 'application/json; charset=utf-8', 'date': 'Wed, 15 Feb 2023 09:11:57 GMT'} DEBUG:swiftclient:RESP BODY: b'[]'",
"dports=\"6800:7300\" ssh heat-admin@<target_node> sudo iptables -I INPUT -p tcp --match multiport --dports USDdports -j ACCEPT;",
"sudo iptables-save sudo systemctl restart iptables",
"113 ceph_mgr {'dport': ['6800-7300', 8444]} add rule inet filter TRIPLEO_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment \"113 ceph_mgr\"",
"sudo systemctl restart nftables",
"sudo nft list ruleset | grep ceph_mgr",
"sudo cephadm shell -- ceph orch host label add <target_node> mgr",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} mkdir -p USD{SPEC_DIR} sudo cephadm shell -- ceph orch ls --export mgr > USD{SPEC_DIR}/mgr",
"service_type: mgr service_id: mgr placement: label: mgr",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} sudo cephadm shell -m USD{SPEC_DIR}/mgr -- ceph orch apply -i /mnt/mgr",
"sudo cephadm shell -- ceph orch ps | grep -i mgr sudo cephadm shell -- ceph -s",
"for port in 3300 6789; { ssh heat-admin@<target_node> sudo iptables -I INPUT -p tcp -m tcp --dport USDport -m conntrack --ctstate NEW -j ACCEPT; }",
"sudo iptables-save sudo systemctl restart iptables",
"110 ceph_mon {'dport': [6789, 3300, '9100']} add rule inet filter TRIPLEO_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment \"110 ceph_mon\"",
"sudo systemctl restart nftables",
"sudo nft list ruleset | grep ceph_mon",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} mkdir -p USD{SPEC_DIR} sudo cephadm shell -- ceph orch ls --export mon > USD{SPEC_DIR}/mon",
"service_type: mon service_id: mon placement: label: mon",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} sudo cephadm shell -m USD{SPEC_DIR}/mon -- ceph orch apply -i /mnt/mon",
"for item in USD(sudo cephadm shell -- ceph orch host ls --format json | jq -r '.[].hostname'); do sudo cephadm shell -- ceph orch host label add USDitem mon; sudo cephadm shell -- ceph orch host label add USDitem _admin; done",
"sudo cephadm shell -- ceph -s cluster: id: f6ec3ebe-26f7-56c8-985d-eb974e8e08e3 health: HEALTH_OK services: mon: 6 daemons, quorum controller-0,controller-1,controller-2,ceph-0,ceph-1,ceph-2 (age 19m) mgr: controller-0.xzgtvo(active, since 32m), standbys: controller-1.mtxohd, controller-2.ahrgsk osd: 8 osds: 8 up (since 12m), 8 in (since 18m); 1 remapped pgs data: pools: 1 pools, 1 pgs objects: 0 objects, 0 B usage: 43 MiB used, 400 GiB / 400 GiB avail pgs: 1 active+clean",
"sudo cephadm shell -- ceph orch host ls HOST ADDR LABELS STATUS ceph-0 192.168.24.14 osd mon mgr _admin ceph-1 192.168.24.7 osd mon mgr _admin ceph-2 192.168.24.8 osd mon mgr _admin controller-0 192.168.24.15 _admin mgr mon controller-1 192.168.24.23 _admin mgr mon controller-2 192.168.24.13 _admin mgr mon",
"mkdir -p USDHOME/ceph_client_backup sudo cp -R /etc/ceph/* USDHOME/ceph_client_backup",
"sudo os-net-config -c /etc/os-net-config/config.yaml",
"ip -o a | grep 172.17.3.200",
"ping -c 3 172.17.3.200",
"sudo cephadm shell -c USDHOME/ceph_client_backup/ceph.conf -k USDHOME/ceph_client_backup/ceph.client.admin.keyring -- ceph -s",
"mkdir -p USDHOME/ceph_client_backup sudo cp -R /etc/ceph USDHOME/ceph_client_backup",
"sudo cephadm shell -- ceph mgr stat",
"sudo cephadm shell -- ceph mgr fail <mgr_instance>",
"for label in mon mgr _admin; do sudo cephadm shell -- ceph orch host label rm <source_node> USDlabel; done",
"sudo cephadm shell -- ceph orch daemon rm mon.<source_node> --force",
"sudo cephadm shell -- ceph orch host drain <source_node>",
"sudo cephadm shell -- ceph orch host rm <source_node> --force",
"sudo podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 5c1ad36472bc registry.redhat.io/ceph/rhceph@sha256:320c364dcc8fc8120e2a42f54eb39ecdba12401a2546763b7bef15b02ce93bc4 -n mon.contro... 35 minutes ago Up 35 minutes ago ceph-f6ec3ebe-26f7-56c8-985d-eb974e8e08e3-mon-controller-1 3b14cc7bf4dd registry.redhat.io/ceph/rhceph@sha256:320c364dcc8fc8120e2a42f54eb39ecdba12401a2546763b7bef15b02ce93bc4 -n mgr.contro... 35 minutes ago Up 35 minutes ago ceph-f6ec3ebe-26f7-56c8-985d-eb974e8e08e3-mgr-controller-1-mtxohd",
"sudo cephadm shell -- ceph -s sudo cephadm shell -- ceph orch ps | grep -i mon",
"mon_host = [v2:172.17.3.60:3300/0,v1:172.17.3.60:6789/0] [v2:172.17.3.29:3300/0,v1:172.17.3.29:6789/0] [v2:172.17.3.53:3300/0,v1:172.17.3.53:6789/0]",
"[tripleo-admin@controller-0 ~]USD ip -o -4 a | grep 172.17.3 9: vlan30 inet 172.17.3.60/24 brd 172.17.3.255 scope global vlan30\\ valid_lft forever preferred_lft forever 9: vlan30 inet 172.17.3.13/32 brd 172.17.3.255 scope global vlan30\\ valid_lft forever preferred_lft forever",
"[tripleo-admin@controller-0 ~]USD grep \"172.17.3.60\" /etc/os-net-config/config.yaml - ip_netmask: 172.17.3.60/24",
"sudo os-net-config -c /etc/os-net-config/config.yaml",
"[controller-0]USD ip -o a | grep 172.17.3.60",
"sudo os-net-config -c /etc/os-net-config/config.yaml",
"ip -o a | grep 172.17.3.60",
"[controller-0]USD ping -c 3 172.17.3.60",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} sudo cephadm shell -- ceph orch ls --export mon > USD{SPEC_DIR}/mon",
"service_type: mon service_id: mon placement: label: mon unmanaged: true",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} sudo cephadm shell -m USD{SPEC_DIR}/mon -- ceph orch apply -i /mnt/mon",
"sudo cephadm shell -- ceph orch daemon rm mon.<target_node> --force",
"sudo cephadm shell -- ceph orch daemon add mon <target_node>:<ip_address>",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} sudo cephadm shell -- ceph orch ls --export mon > USD{SPEC_DIR}/mon",
"service_type: mon service_id: mon placement: label: mon unmanaged: false",
"SPEC_DIR=USD{SPEC_DIR:-\"USDPWD/ceph_specs\"} sudo cephadm shell -m USD{SPEC_DIR}/mon -- ceph orch apply -i /mnt/mon",
"sudo cephadm shell -- ceph mgr stat",
"sudo cephadm shell -- ceph mgr fail",
"sudo cephadm shell -- ceph orch reconfig osd.default_drive_group",
"ceph -s cluster: id: f6ec3ebe-26f7-56c8-985d-eb974e8e08e3 health: HEALTH_OK",
"netstat -tulpn | grep 3300",
"mgr advanced mgr/dashboard/controller-0.ycokob/server_addr 172.17.3.33 mgr advanced mgr/dashboard/controller-1.lmzpuc/server_addr 172.17.3.147 mgr advanced mgr/dashboard/controller-2.xpdgfl/server_addr 172.17.3.138",
"sudo cephadm shell ceph orch ps | awk '/mgr./ {print USD1}'",
"ceph config set mgr mgr/dashboard/<>/server_addr/<ip addr>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/adopting_a_red_hat_openstack_platform_17.1_deployment/ceph-migration_adopt-control-plane |
A.2. Explanation of settings in the Run Once window | A.2. Explanation of settings in the Run Once window The Run Once window defines one-off boot options for a virtual machine. For persistent boot options, use the Boot Options tab in the New Virtual Machine window. The Run Once window contains multiple sections that can be configured. The standalone Rollback this configuration during reboots check box specifies whether reboots (initiated by the Manager, or from within the guest) will be warm (soft) or cold (hard). Select this check box to configure a cold reboot that restarts the virtual machine with regular (non- Run Once ) configuration. Clear this check box to configure a warm reboot that retains the virtual machine's Run Once configuration. The Boot Options section defines the virtual machine's boot sequence, running options, and source images for installing the operating system and required drivers. Note The following tables do not include information on whether a power cycle is required because these one-off boot options apply only when you reboot the virtual machine. Table A.13. Boot Options Section Field Name Description Attach CD Attaches an ISO image to the virtual machine. Use this option to install the virtual machine's operating system and applications. The CD image must reside in the ISO domain. Attach Windows guest tools CD Attaches a secondary virtual CD-ROM to the virtual machine with the virtio-win ISO image. Use this option to install Windows drivers. For information on installing the image, see Uploading the VirtIO Image Files to a Storage Domain in the Administration Guide . Enable menu to select boot device Enables a menu to select the boot device. After the virtual machine starts and connects to the console, but before the virtual machine starts booting, a menu displays that allows you to select the boot device. This option should be enabled before the initial boot to allow you to select the required installation media. Start in Pause Mode Starts and then pauses the virtual machine to enable connection to the console. Suitable for virtual machines in remote locations. Predefined Boot Sequence Determines the order in which the boot devices are used to boot the virtual machine. Select Hard Disk , CD-ROM , or Network (PXE) , and use Up and Down to move the option up or down in the list. Run Stateless Deletes all data and configuration changes to the virtual machine upon shutdown. This option is only available if a virtual disk is attached to the virtual machine. The Linux Boot Options section contains fields to boot a Linux kernel directly instead of through the BIOS bootloader. Table A.14. Linux Boot Options Section Field Name Description kernel path A fully qualified path to a kernel image to boot the virtual machine. The kernel image must be stored on either the ISO domain (path name in the format of iso://path-to-image ) or on the host's local storage domain (path name in the format of /data/images ). initrd path A fully qualified path to a ramdisk image to be used with the previously specified kernel. The ramdisk image must be stored on the ISO domain (path name in the format of iso://path-to-image ) or on the host's local storage domain (path name in the format of /data/images ). kernel parameters Kernel command line parameter strings to be used with the defined kernel on boot. The Initial Run section is used to specify whether to use Cloud-Init or Sysprep to initialize the virtual machine. 
For Linux-based virtual machines, you must select the Use Cloud-Init check box in the Initial Run tab to view the available options. For Windows-based virtual machines, you must attach the [sysprep] floppy by selecting the Attach Floppy check box in the Boot Options tab and selecting the floppy from the list. The options that are available in the Initial Run section differ depending on the operating system that the virtual machine is based on. Table A.15. Initial Run Section (Linux-based Virtual Machines) Field Name Description VM Hostname The host name of the virtual machine. Configure Time Zone The time zone for the virtual machine. Select this check box and select a time zone from the Time Zone list. Authentication The authentication details for the virtual machine. Click the disclosure arrow to display the settings for this option. Authentication User Name Creates a new user account on the virtual machine. If this field is not filled in, the default user is root . Authentication Use already configured password This check box is automatically selected after you specify an initial root password. You must clear this check box to enable the Password and Verify Password fields and specify a new password. Authentication Password The root password for the virtual machine. Enter the password in this text field and the Verify Password text field to verify the password. Authentication SSH Authorized Keys SSH keys to be added to the authorized keys file of the virtual machine. Authentication Regenerate SSH Keys Regenerates SSH keys for the virtual machine. Networks Network-related settings for the virtual machine. Click the disclosure arrow to display the settings for this option. Networks DNS Servers The DNS servers to be used by the virtual machine. Networks DNS Search Domains The DNS search domains to be used by the virtual machine. Networks Network Configures network interfaces for the virtual machine. Select this check box and click + or - to add or remove network interfaces to or from the virtual machine. When you click + , a set of fields becomes visible that can specify whether to use DHCP, and configure an IP address, netmask, and gateway, and specify whether the network interface will start on boot. Custom Script Custom scripts that will be run on the virtual machine when it starts. The scripts entered in this field are custom YAML sections that are added to those produced by the Manager, and allow you to automate tasks such as creating users and files, configuring yum repositories and running commands. For more information on the format of scripts that can be entered in this field, see the Custom Script documentation. Table A.16. Initial Run Section (Windows-based Virtual Machines) Field Name Description VM Hostname The host name of the virtual machine. Domain The Active Directory domain to which the virtual machine belongs. Organization Name The name of the organization to which the virtual machine belongs. This option corresponds to the text field for setting the organization name displayed when a machine running Windows is started for the first time. Active Directory OU The organizational unit in the Active Directory domain to which the virtual machine belongs. The distinguished name must be provided. For example CN=Users,DC=lab,DC=local Configure Time Zone The time zone for the virtual machine. Select this check box and select a time zone from the Time Zone list. Admin Password The administrative user password for the virtual machine. 
Click the disclosure arrow to display the settings for this option. Admin Password Use already configured password This check box is automatically selected after you specify an initial administrative user password. You must clear this check box to enable the Admin Password and Verify Admin Password fields and specify a new password. Admin Password Admin Password The administrative user password for the virtual machine. Enter the password in this text field and the Verify Admin Password text field to verify the password. Custom Locale Locales must be in a format such as en-US . Click the disclosure arrow to display the settings for this option. Custom Locale Input Locale The locale for user input. Custom Locale UI Language The language used for user interface elements such as buttons and menus. Custom Locale System Locale The locale for the overall system. Custom Locale User Locale The locale for users. Sysprep A custom Sysprep definition. The definition must be in the format of a complete unattended installation answer file. You can copy and paste the default answer files in the /usr/share/ovirt-engine/conf/sysprep/ directory on the machine on which the Red Hat Virtualization Manager is installed and alter the fields as required. The definition will overwrite any values entered in the Initial Run fields. See Templates for more information. Domain The Active Directory domain to which the virtual machine belongs. If left blank, the value of the Domain field is used. Alternate Credentials Selecting this check box allows you to set a User Name and Password as alternative credentials. The System section enables you to define the supported machine type or CPU type. Table A.17. System Section Field Name Description Custom Emulated Machine This option allows you to specify the machine type. If changed, the virtual machine will only run on hosts that support this machine type. Defaults to the cluster's default machine type. Custom CPU Type This option allows you to specify a CPU type. If changed, the virtual machine will only run on hosts that support this CPU type. Defaults to the cluster's default CPU type. The Host section is used to define the virtual machine's host. Table A.18. Host Section Field Name Description Any host in cluster Allocates the virtual machine to any available host. Specific Host(s) Specifies a user-defined host for the virtual machine. The Console section defines the protocol to connect to virtual machines. Table A.19. Console Section Field Name Description Headless Mode Select this option if you do not require a graphical console when running the machine for the first time. See Configuring Headless Machines for more information. VNC Requires a VNC client to connect to a virtual machine using VNC. Optionally, specify VNC Keyboard Layout from the drop-down list. SPICE Recommended protocol for Linux and Windows virtual machines. Using SPICE protocol with QXLDOD drivers is supported for Windows 10 and Windows Server 2016 and later virtual machines. Enable SPICE file transfer Determines whether you can drag and drop files from an external host into the virtual machine's SPICE console. This option is only available for virtual machines using the SPICE protocol. This check box is selected by default. Enable SPICE clipboard copy and paste Defines whether you can copy and paste content from an external host into the virtual machine's SPICE console. This option is only available for virtual machines using the SPICE protocol. This check box is selected by default. 
The Custom Properties section contains additional VDSM options for running virtual machines. See New VMs Custom Properties for details. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/virtual_machine_run_once_settings_explained |
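For illustration, a minimal cloud-init snippet of the kind accepted by the Custom Script field might look like the following sketch. The user name, file path, and command are placeholder assumptions rather than values from this guide:

#cloud-config
# Hypothetical Run Once custom script: creates a user, writes a file,
# and runs a command on first boot. All names here are assumptions.
users:
  - name: exampleuser
    sudo: ALL=(ALL) NOPASSWD:ALL
write_files:
  - path: /etc/motd
    content: |
      Provisioned by a Run Once custom script.
runcmd:
  - [ systemctl, enable, --now, chronyd ]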
Chapter 4. ResourceAccessReview [authorization.openshift.io/v1] | Chapter 4. ResourceAccessReview [authorization.openshift.io/v1] Description ResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" verb string Verb is one of: get, list, watch, create, update, delete 4.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/resourceaccessreviews POST : create a ResourceAccessReview 4.2.1. /apis/authorization.openshift.io/v1/resourceaccessreviews Table 4.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a ResourceAccessReview Table 4.2. Body parameters Parameter Type Description body ResourceAccessReview schema Table 4.3. HTTP responses HTTP code Response body 200 - OK ResourceAccessReview schema 201 - Created ResourceAccessReview schema 202 - Accepted ResourceAccessReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authorization_apis/resourceaccessreview-authorization-openshift-io-v1 |
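For illustration, a request to the endpoint documented above might look like the following sketch. The API server address, the token handling, and the field values (namespace, verb, resource) are placeholder assumptions:

# Hypothetical POST of a ResourceAccessReview; every value below is an example.
TOKEN=$(oc whoami -t)   # assumes an already authenticated oc session
curl -sk -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  "https://<api_server>:6443/apis/authorization.openshift.io/v1/resourceaccessreviews" \
  -d '{
    "apiVersion": "authorization.openshift.io/v1",
    "kind": "ResourceAccessReview",
    "namespace": "default",
    "verb": "delete",
    "resource": "pods",
    "resourceAPIGroup": "",
    "resourceAPIVersion": "v1",
    "resourceName": "",
    "path": "",
    "isNonResourceURL": false
  }'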
Preface | Preface Enterprise Contract is a policy-driven workflow tool for maintaining software supply chain security by defining and enforcing policies for building and testing container images. A secure CI/CD workflow should include artifact verification to detect problems early. It's the job of Enterprise Contract to validate that a container image is signed and attested by a known and trusted build system. | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/managing_compliance_with_enterprise_contract/pr01 |
Chapter 22. container | Chapter 22. container This chapter describes the commands under the container command. 22.1. container create Create new container Usage: Table 22.1. Positional arguments Value Summary <container-name> New container name(s) Table 22.2. Command arguments Value Summary -h, --help Show this help message and exit Table 22.3. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 22.4. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 22.5. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 22.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 22.2. container delete Delete container Usage: Table 22.7. Positional arguments Value Summary <container> Container(s) to delete Table 22.8. Command arguments Value Summary -h, --help Show this help message and exit --recursive, -r Recursively delete objects and container 22.3. container list List containers Usage: Table 22.9. Command arguments Value Summary -h, --help Show this help message and exit --prefix <prefix> Filter list using <prefix> --marker <marker> Anchor for paging --end-marker <end-marker> End anchor for paging --limit <num-containers> Limit the number of containers returned --long List additional fields in output --all List all containers (default is 10000) Table 22.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 22.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 22.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 22.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 22.4. container save Save container contents locally Usage: Table 22.14. Positional arguments Value Summary <container> Container to save Table 22.15. Command arguments Value Summary -h, --help Show this help message and exit 22.5. container set Set container properties Usage: Table 22.16. Positional arguments Value Summary <container> Container to modify Table 22.17.
Command arguments Value Summary -h, --help Show this help message and exit --property <key=value> Set a property on this container (repeat option to set multiple properties) 22.6. container show Display container details Usage: Table 22.18. Positional arguments Value Summary <container> Container to display Table 22.19. Command arguments Value Summary -h, --help Show this help message and exit Table 22.20. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 22.21. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 22.22. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 22.23. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 22.7. container unset Unset container properties Usage: Table 22.24. Positional arguments Value Summary <container> Container to modify Table 22.25. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Property to remove from container (repeat option to remove multiple properties) | [
"openstack container create [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] <container-name> [<container-name> ...]",
"openstack container delete [-h] [--recursive] <container> [<container> ...]",
"openstack container list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--prefix <prefix>] [--marker <marker>] [--end-marker <end-marker>] [--limit <num-containers>] [--long] [--all]",
"openstack container save [-h] <container>",
"openstack container set [-h] --property <key=value> <container>",
"openstack container show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <container>",
"openstack container unset [-h] --property <key> <container>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/container |
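For illustration, the subcommands documented above combine into a short session such as the following sketch; the container name and property are placeholder assumptions:

# Hypothetical end-to-end use of the documented subcommands.
openstack container create demo-container
openstack container set demo-container --property tier=archive
openstack container show demo-container -f json
openstack container list --prefix demo --long
openstack container save demo-container
openstack container delete demo-container --recursive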
Chapter 6. Integrating with QRadar | Chapter 6. Integrating with QRadar You can configure Red Hat Advanced Cluster Security for Kubernetes to send events to QRadar by configuring a generic webhook integration in RHACS. The following steps represent a high-level workflow for integrating RHACS with QRadar: In RHACS: Configure the generic webhook. Note When configuring the integration in RHACS, in the Endpoint field, use the following example as a guide: <URL to QRadar Box>:<Port of Integration> . Identify policies for which you want to send notifications, and update the notification settings for those policies. If QRadar does not automatically detect the log source, add an RHACS log source on the QRadar Console. For more information on configuring QRadar and RHACS, see the Red Hat Advanced Cluster Security for Kubernetes IBM resource. 6.1. Configuring integrations by using webhooks Create a new integration in Red Hat Advanced Cluster Security for Kubernetes by using the webhook URL. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Generic Webhook . Click New integration . Enter a name for Integration name . Enter the webhook URL in the Endpoint field. If your webhook receiver uses an untrusted certificate, enter a CA certificate in the CA certificate field. Otherwise, leave it blank. Note The server certificate used by the webhook receiver must be valid for the endpoint DNS name. You can click Skip TLS verification to ignore this validation. Red Hat does not suggest turning off TLS verification. Without TLS verification, data could be intercepted by an unintended recipient. Optional: Click Enable audit logging to receive alerts about all the changes made in Red Hat Advanced Cluster Security for Kubernetes. Note Red Hat suggests using separate webhooks for alerts and audit logs to handle these messages differently. To authenticate with the webhook receiver, enter details for one of the following: Username and Password for basic HTTP authentication Custom Header , for example: Authorization: Bearer <access_token> Use Extra fields to include additional key-value pairs in the JSON object that Red Hat Advanced Cluster Security for Kubernetes sends. For example, if your webhook receiver accepts objects from multiple sources, you can add "source": "rhacs" as an extra field and filter on this value to identify all alerts from Red Hat Advanced Cluster Security for Kubernetes. Select Test to send a test message to verify that the integration with your generic webhook is working. Select Save to create the configuration. 6.2. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the webhook notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. 
Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/integrating/integrate-with-qradar |
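For illustration, if you add "source": "rhacs" as an extra field as described above, a downstream consumer could separate RHACS events from other webhook traffic with a filter along these lines (the file name, and the assumption that each POST body is stored as a JSON document, are illustrative):

# Hypothetical: select RHACS-originated events from stored webhook bodies.
jq 'select(.source == "rhacs")' received-events.json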
Chapter 6. GNOME Shell Extensions | Chapter 6. GNOME Shell Extensions GNOME Shell in Red Hat Enterprise Linux 7 does not support applets, which were used to customize the default GNOME 2 interface in Red Hat Enterprise Linux 5 and 6. GNOME 3 replaces applets with GNOME Shell extensions . Extensions can modify the default GNOME Shell interface and its parts, such as window management and application launching. 6.1. Replacement for the Clock Applet GNOME 2 in Red Hat Enterprise Linux 5 and 6 featured the Clock applet, which provided access to the date, time, and calendar from the GNOME 2 Panel. In Red Hat Enterprise Linux 7, that applet is replaced by the Clocks application, which is provided by the gnome-clocks package. The user can access that application by clicking the calendar on GNOME Shell's top bar and selecting Open Clocks . Figure 6.1. Open Clocks Getting More Information See Section 11.1, "What Are GNOME Shell Extensions?" for more detailed information on what GNOME Shell extensions are and how to configure and manage them. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/migrating-gnome-shell-extensions |