title | content | commands | url
---|---|---|---|
Chapter 5. Validating the deployment | Chapter 5. Validating the deployment After you successfully run the playbook, the JBoss Web Server collection automatically installs Red Hat JBoss Web Server on your target hosts. If you have added customized tasks to the playbook, Ansible also automatically deploys any JBoss Web Server applications on your target hosts, as appropriate. You can optionally check the status of JBoss Web Server by using the systemctl command on the target host or by using the curl command on a remote host. Prerequisites You have run the playbook. Procedure Optional: On the JBoss Web Server host, enter the following command as the root user: In the preceding command, replace <service_name> with the correct service name for your JBoss Web Server installation. The default service name is tomcat. For more information about setting up a service name, see Automating the integration of JBoss Web Server with systemd. Note This step requires that JBoss Web Server is integrated with systemd. Optional: On a remote host, enter the following command as the root user: In the preceding command, replace <target_host> with the IP address or host name of the JBoss Web Server host that you want to access. The preceding command assumes that JBoss Web Server is accessible through the default port 8080 and that the target firewall and network allow remote access to the port. Note The JBoss Web Server collection also includes a validate.yml playbook in the playbooks folder. You can run the validate.yml playbook if you want the JBoss Web Server collection to perform automated validation checks. For more information about the validate.yml playbook, refer to the information page for the jws_validation role in Ansible automation hub. Additional resources Controlling the JBoss Web Server with systemd | [
"systemctl status <service_name>",
"curl http:// <target_host> :8080/"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/installing_jboss_web_server_by_using_the_red_hat_ansible_certified_content_collection/validate_deployment |
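For reference, the automated checks mentioned in the note above can be run directly from the control node. The following is a minimal sketch, assuming the collection is installed under the redhat.jws namespace and that your inventory file is named inventory.yml; both paths are illustrative rather than fixed by the product:

# Run the collection's bundled validation playbook against your inventory (paths are illustrative)
ansible-playbook -i inventory.yml ~/.ansible/collections/ansible_collections/redhat/jws/playbooks/validate.yml

The playbook wraps the jws_validation role referenced above and reports any checks that fail on the target hosts.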
8.10. NIC Offloads | 8.10. NIC Offloads The default Ethernet maximum transmission unit (MTU) is 1500 bytes, which is the largest frame size that can usually be transmitted. This can cause system resources to be underutilized; for example, transmitting 3200 bytes of data requires the generation of three smaller packets. There are several options, called offloads, which allow the relevant protocol stack to transmit packets that are larger than the normal MTU. Packets as large as the maximum allowable 64 KiB can be created, with options for both transmitting (Tx) and receiving (Rx). When sending or receiving large amounts of data, this can mean handling one large packet as opposed to multiple smaller ones for every 64 KiB of data sent or received. This means fewer interrupt requests are generated, less processing overhead is spent on splitting or combining traffic, and more opportunities for transmission are available, leading to an overall increase in throughput.
Offload Types
TCP Segmentation Offload (TSO) Uses the TCP protocol to send large packets. Uses the NIC to handle segmentation, and then adds the TCP, IP and data link layer protocol headers to each segment.
UDP Fragmentation Offload (UFO) Uses the UDP protocol to send large packets. Uses the NIC to handle IP fragmentation into MTU-sized packets for large UDP datagrams.
Generic Segmentation Offload (GSO) Uses the TCP or UDP protocol to send large packets. If the NIC cannot handle segmentation/fragmentation, GSO performs the same operations, bypassing the NIC hardware. This is achieved by delaying segmentation until as late as possible, for example, when the packet is processed by the device driver.
Large Receive Offload (LRO) Uses the TCP protocol. All incoming packets are re-segmented as they are received, reducing the number of segments the system has to process. They can be merged either in the driver or by using the NIC. A problem with LRO is that it tends to resegment all incoming packets, often ignoring differences in headers and other information, which can cause errors. It is generally not possible to use LRO when IP forwarding is enabled, because LRO in combination with IP forwarding can lead to checksum errors. Forwarding is enabled if /proc/sys/net/ipv4/ip_forward is set to 1.
Generic Receive Offload (GRO) Uses either the TCP or UDP protocols. GRO is more rigorous than LRO when resegmenting packets. For example, it checks the MAC headers of each packet, which must match; only a limited number of TCP or IP headers can be different; and the TCP timestamps must match. Resegmenting can be handled by either the NIC or the GSO code.
8.10.1. Using NIC Offloads Offloads should be used on high-speed systems that transmit or receive large amounts of data and favor throughput over latency. Because using offloads greatly increases the capacity of the driver queue, latency can become an issue. An example of this would be a system transferring large amounts of data using large packet sizes, but that is also running lots of interactive applications. Because interactive applications send small packets at timed intervals, there is a very real risk that those packets become 'trapped' in the buffer while larger packets in front of them are processed, causing unacceptable latency. To check current offload settings, use the ethtool command. Some device settings may be listed as fixed, meaning they cannot be changed. Command syntax: ethtool -k ethernet_device_name Example 8.1. Check Current Offload Settings | [
"ethtool -k em1",
"Features for em1: rx-checksumming: on tx-checksumming: on tx-checksum-ipv4: off [fixed] tx-checksum-ip-generic: on tx-checksum-ipv6: off [fixed] tx-checksum-fcoe-crc: off [fixed] tx-checksum-sctp: off [fixed] scatter-gather: on tx-scatter-gather: on tx-scatter-gather-fraglist: off [fixed] tcp-segmentation-offload: on tx-tcp-segmentation: on tx-tcp-ecn-segmentation: off [fixed] tx-tcp6-segmentation: on udp-fragmentation-offload: off [fixed] generic-segmentation-offload: on generic-receive-offload: on large-receive-offload: off [fixed] rx-vlan-offload: on tx-vlan-offload: on ntuple-filters: off [fixed] receive-hashing: on highdma: on [fixed] rx-vlan-filter: off [fixed] vlan-challenged: off [fixed] tx-lockless: off [fixed] netns-local: off [fixed] tx-gso-robust: off [fixed] tx-fcoe-segmentation: off [fixed] tx-gre-segmentation: off [fixed] tx-ipip-segmentation: off [fixed] tx-sit-segmentation: off [fixed] tx-udp_tnl-segmentation: off [fixed] tx-mpls-segmentation: off [fixed] fcoe-mtu: off [fixed] tx-nocache-copy: off loopback: off [fixed] rx-fcs: off rx-all: off tx-vlan-stag-hw-insert: off [fixed] rx-vlan-stag-hw-parse: off [fixed] rx-vlan-stag-filter: off [fixed] l2-fwd-offload: off [fixed] busy-poll: off [fixed]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/network-nic-offloads |
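Offload settings that are not marked as fixed can be toggled at run time with the uppercase -K option of ethtool. The following sketch assumes an interface named em1, and the features chosen (gro, tso) are illustrative:

# Disable generic receive offload and enable TCP segmentation offload on em1
ethtool -K em1 gro off
ethtool -K em1 tso on
# Confirm the new state of the two features
ethtool -k em1 | grep -E 'generic-receive-offload|tcp-segmentation-offload'

Note that changes made this way do not persist across reboots; how to persist them is distribution-specific, for example through an interface configuration file.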
3.2. libvirt and libvirt Tools | 3.2. libvirt and libvirt Tools The libvirt package provides a hypervisor-independent virtualization API that can interact with the virtualization capabilities of a range of operating systems. It includes: A virtualization layer to securely manage virtual machines on a host. An interface for managing local and networked hosts. The APIs required to provision, create, modify, monitor, control, migrate, and stop virtual machines. Although multiple hosts may be accessed with libvirt simultaneously, the APIs are limited to single-node operations. Note Only operations supported by the hypervisor can be performed using libvirt. libvirt focuses on managing single hosts and provides APIs to enumerate, monitor and use the resources available on the managed node, including CPUs, memory, storage, networking and Non-Uniform Memory Access (NUMA) partitions. The management tools do not need to be on the same physical machine as the hosts they manage. In such a scenario, the machine on which the management tools run communicates with the managed hosts using secure protocols. Red Hat Enterprise Linux 7 supports libvirt and includes libvirt-based tools as its default method for virtualization management (as in Red Hat Virtualization Management). The libvirt package is available as free software under the GNU Lesser General Public License. The libvirt project aims to provide a long-term stable C API to virtualization management tools, running on top of varying hypervisor technologies. The libvirt package supports Xen on Red Hat Enterprise Linux 5, and KVM on Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6, and Red Hat Enterprise Linux 7. Notably, libvirt also provides the two primary tools for controlling virtualization on Red Hat Enterprise Linux 7: virsh and virt-manager. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/sec-virtualization_getting_started-products-libvirt-libvirt-tools |
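As a brief illustration of the virsh tool mentioned above, the following shell session queries and controls guests from the command line; the domain name demo-vm is a placeholder:

# List all defined guests, whether running or shut off
virsh list --all
# Show basic information about one guest (demo-vm is a placeholder name)
virsh dominfo demo-vm
# Start the guest, then request a graceful shutdown
virsh start demo-vm
virsh shutdown demo-vm

Each virsh subcommand maps onto the underlying libvirt API, so the same operations are also available programmatically through the C library or its language bindings.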
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/replacing_nodes/making-open-source-more-inclusive |
Chapter 6. Installing on Azure | Chapter 6. Installing on Azure
6.1. Preparing to install on Azure
6.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users.
6.1.2. Requirements for installing OpenShift Container Platform on Azure Before installing OpenShift Container Platform on Microsoft Azure, you must configure an Azure account. See Configuring an Azure account for details about account configuration, account limits, public DNS zone configuration, required roles, creating service principals, and supported Azure regions. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating IAM for Azure for other options.
6.1.3. Choosing a method to install OpenShift Container Platform on Azure You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes.
6.1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on Azure: You can install OpenShift Container Platform on Azure infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options. Installing a customized cluster on Azure: You can install a customized cluster on Azure infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation. Installing a cluster on Azure with network customizations: You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on Azure into an existing VNet: You can install OpenShift Container Platform on an existing Azure Virtual Network (VNet) on Azure. You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on Azure: You can install a private cluster into an existing Azure Virtual Network (VNet) on Azure. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet.
Installing a cluster on Azure into a government region: OpenShift Container Platform can be deployed into Microsoft Azure Government (MAG) regions that are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure.
6.1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure infrastructure that you provision, by using the following method: Installing a cluster on Azure using ARM templates: You can install OpenShift Container Platform on Azure by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation.
6.1.4. Next steps Configuring an Azure account
6.2. Configuring an Azure account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
6.2.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and, if necessary, increase quota limits for your account before you install a default cluster on Azure. The following list summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters:
vCPU (required by default: 40; default Azure limit: 20 per region): A default cluster requires 40 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: one bootstrap machine, which is removed after installation, three control plane machines, and three compute machines. Because the bootstrap machine uses a Standard_D4s_v3 machine, which uses 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. The bootstrap node VM, which uses 4 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require.
OS Disk (required by default: 7): Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section.
VNet (required by default: 1; default Azure limit: 1000 per region): Each default cluster requires one Virtual Network (VNet), which contains two subnets.
Network interfaces (required by default: 7; default Azure limit: 65,536 per region): Each default cluster requires seven network interfaces.
If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces.
Network security groups (required by default: 2; default Azure limit: 5000): Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane allows the control plane machines to be reached on port 6443 from anywhere; node allows worker nodes to be reached from the internet on ports 80 and 443.
Network load balancers (required by default: 3; default Azure limit: 1000 per region): Each cluster creates the following load balancers: default, a public IP address that load balances requests to ports 80 and 443 across worker machines; internal, a private IP address that load balances requests to ports 6443 and 22623 across control plane machines; external, a public IP address that load balances requests to port 6443 across control plane machines. If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers.
Public IP addresses (required by default: 3): Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation.
Private IP addresses (required by default: 7): The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address.
Spot VM vCPUs (optional; required: 0, or two spot VM vCPUs for every compute node if you configure spot VMs; default Azure limit: 20 per region): This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended.
Additional resources Optimizing storage.
6.2.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com. If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain.
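If you prefer the Azure CLI to the portal for this step, the equivalent zone setup looks roughly like the following sketch; the resource group os-dns-rg is an assumption, and the domain name reuses the example above:

# Create a public DNS zone for the cluster subdomain (names are placeholders)
az network dns zone create -g os-dns-rg -n clusters.openshiftcorp.com
# Print the authoritative name servers to set in the registrar or parent zone delegation
az network dns zone show -g os-dns-rg -n clusters.openshiftcorp.com --query nameServers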
6.2.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas). From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click Next: Solutions. On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click Next: Review + create and then click Create.
6.2.4. Required Azure roles OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, review the following information: Your Azure account subscription must have the following roles: User Access Administrator Contributor Your Azure Active Directory (AD) must have the following permission: "microsoft.directory/servicePrincipals/createAsOwner" To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation.
6.2.5. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI. Your Azure account has the required roles for the subscription that you use. Procedure Log in to the Azure CLI:
$ az login
If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster:
$ az account list --refresh
Example output [ { "cloudName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ]
View your active account details and confirm that the tenantId value matches the subscription you want to use:
$ az account show
Example output { "environmentName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } }
1 Ensure that the value of the tenantId parameter is the correct subscription ID.
If you are not using the right subscription, change the active subscription:
$ az account set -s <subscription_id> 1
1 Specify the subscription ID.
Verify the subscription ID update:
$ az account show
Example output { "environmentName": "AzureCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } }
Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account:
$ az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1
    --scopes /subscriptions/<subscription_id> 2
    --years <years> 3
1 Specify the service principal name.
2 Specify the subscription ID.
3 Specify the number of years. By default, a service principal expires in one year. By using the --years option, you can extend the validity of your service principal.
Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" }
Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Assign the User Access Administrator role by running the following command:
$ az role assignment create --role "User Access Administrator" \
    --assignee-object-id $(az ad sp show --id <appId> --query id -o tsv) 1
1 Specify the appId parameter value for your service principal.
Additional resources For more information about CCO modes, see About the Cloud Credential Operator.
6.2.6. Supported Azure Marketplace regions Installing a cluster using the Azure Marketplace image is available to customers who purchase the offer in North America and EMEA. While the offer must be purchased in North America or EMEA, you can deploy the cluster to any of the Azure public partitions that OpenShift Container Platform supports. Note Deploying a cluster using the Azure Marketplace image is not supported for the Azure Government regions.
6.2.7. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation. Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested.
6.2.8. Next steps Install an OpenShift Container Platform cluster on Azure. You can install a customized cluster or quickly install a cluster with default options.
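Before moving on, you can optionally confirm that the service principal created in the procedure above holds both required roles. This is a hedged sketch; <appId> is the value you recorded from the create-for-rbac output:

# List the role assignments held by the service principal; expect Contributor and User Access Administrator
az role assignment list --assignee <appId> --output table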
6.3. Manually creating IAM for Azure In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster.
6.3.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. If you prefer not to store an administrator-level credential secret in the cluster kube-system project, you can set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Additional resources For a detailed description of all available CCO credential modes and their supported platforms, see About the Cloud Credential Operator.
6.3.2. Manually create IAM The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Change to the directory that contains the installation program and create the install-config.yaml file by running the following command:
$ openshift-install create install-config --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files. Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual.
Example install-config.yaml configuration file
apiVersion: v1
baseDomain: cluster1.example.com
credentialsMode: Manual 1
compute:
- architecture: amd64
  hyperthreading: Enabled
...
1 This line is added to set the credentialsMode parameter to Manual.
Generate the manifests by running the following command from the directory that contains the installation program:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use by running the following command:
$ openshift-install version
Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64
Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on by running the following command:
$ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 \
    --credentials-requests \
    --cloud=azure
This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component-credentials-request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.
Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component-credentials-request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
  secretRef:
    name: <component-secret>
    namespace: <component-namespace>
  ...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
  name: <component-secret>
  namespace: <component-namespace>
data:
  azure_subscription_id: <base64_encoded_azure_subscription_id>
  azure_client_id: <base64_encoded_azure_client_id>
  azure_client_secret: <base64_encoded_azure_client_secret>
  azure_tenant_id: <base64_encoded_azure_tenant_id>
  azure_resource_prefix: <base64_encoded_azure_resource_prefix>
  azure_resourcegroup: <base64_encoded_azure_resourcegroup>
  azure_region: <base64_encoded_azure_region>
Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-gate: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects. To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command:
$ grep "release.openshift.io/feature-gate" *
Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade
From the directory that contains the installation program, proceed with your cluster creation:
$ openshift-install create cluster --dir <installation_directory>
Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. Additional resources Updating a cluster using the web console Updating a cluster using the CLI
6.3.3. Next steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure with default options on installer-provisioned infrastructure Install a cluster with cloud customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure
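The data fields in the Sample Secret object above must be base64-encoded. One way to produce each value, assuming a POSIX shell (the subscription ID shown reuses the placeholder from the earlier example output):

# Base64-encode a credential value for use in a Secret manifest (-n avoids encoding a trailing newline)
echo -n '9bab1460-96d5-40b3-a78e-17b15e978a80' | base64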
6.4. Enabling user-managed encryption for Azure In OpenShift Container Platform version 4.11, you can install a cluster with a user-managed encryption key in Azure. To enable this feature, you can prepare an Azure DiskEncryptionSet before installation, modify the install-config.yaml file, and then perform post-installation steps.
6.4.1. Preparing an Azure Disk Encryption Set The OpenShift Container Platform installer can use an existing Disk Encryption Set with a user-managed key. To enable this feature, you can create a Disk Encryption Set in Azure and provide the key to the installer. Procedure Set the following environment variables for the Azure resource group by running the following command:
$ export RESOURCEGROUP="<resource_group>" \ 1
    LOCATION="<location>" 2
1 Specifies the name of the Azure resource group where you will create the Disk Encryption Set and encryption key. To avoid losing access to your keys after destroying the cluster, you should create the Disk Encryption Set in a different resource group than the resource group where you install the cluster.
2 Specifies the Azure location where you will create the resource group.
Set the following environment variables for the Azure Key Vault and Disk Encryption Set by running the following command:
$ export KEYVAULT_NAME="<keyvault_name>" \ 1
    KEYVAULT_KEY_NAME="<keyvault_key_name>" \ 2
    DISK_ENCRYPTION_SET_NAME="<disk_encryption_set_name>" 3
1 Specifies the name of the Azure Key Vault you will create.
2 Specifies the name of the encryption key you will create.
3 Specifies the name of the disk encryption set you will create.
Set the environment variable for the ID of your Azure Service Principal by running the following command:
$ export CLUSTER_SP_ID="<service_principal_id>" 1
1 Specifies the ID of the service principal you will use for this installation.
Enable host-level encryption in Azure by running the following commands:
$ az feature register --namespace "Microsoft.Compute" --name "EncryptionAtHost"
$ az feature show --namespace Microsoft.Compute --name EncryptionAtHost
$ az provider register -n Microsoft.Compute
Create an Azure Resource Group to hold the disk encryption set and associated resources by running the following command:
$ az group create --name $RESOURCEGROUP --location $LOCATION
Create an Azure key vault by running the following command:
$ az keyvault create -n $KEYVAULT_NAME -g $RESOURCEGROUP -l $LOCATION \
    --enable-purge-protection true
Create an encryption key in the key vault by running the following command:
$ az keyvault key create --vault-name $KEYVAULT_NAME -n $KEYVAULT_KEY_NAME \
    --protection software
Capture the ID of the key vault by running the following command:
$ KEYVAULT_ID=$(az keyvault show --name $KEYVAULT_NAME --query "[id]" -o tsv)
Capture the key URL in the key vault by running the following command:
$ KEYVAULT_KEY_URL=$(az keyvault key show --vault-name $KEYVAULT_NAME --name \
    $KEYVAULT_KEY_NAME --query "[key.kid]" -o tsv)
Create a disk encryption set by running the following command:
$ az disk-encryption-set create -n $DISK_ENCRYPTION_SET_NAME -l $LOCATION -g \
    $RESOURCEGROUP --source-vault $KEYVAULT_ID --key-url $KEYVAULT_KEY_URL
Grant the DiskEncryptionSet resource access to the key vault by running the following commands:
$ DES_IDENTITY=$(az disk-encryption-set show -n $DISK_ENCRYPTION_SET_NAME -g \
    $RESOURCEGROUP --query "[identity.principalId]" -o tsv)
$ az keyvault set-policy -n $KEYVAULT_NAME -g $RESOURCEGROUP --object-id \
    $DES_IDENTITY --key-permissions wrapkey unwrapkey get
Grant the Azure Service Principal permission to read the DiskEncryptionSet by running the following commands:
$ DES_RESOURCE_ID=$(az disk-encryption-set show -n $DISK_ENCRYPTION_SET_NAME -g \
    $RESOURCEGROUP --query "[id]" -o tsv)
$ az role assignment create --assignee $CLUSTER_SP_ID --role "<reader_role>" \ 1
    --scope $DES_RESOURCE_ID -o jsonc
1 Specifies an Azure role with read permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions.
6.4.2. Next steps Install an OpenShift Container Platform cluster: Install a cluster with customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure Install a cluster into an existing VNet on installer-provisioned infrastructure Install a private cluster on installer-provisioned infrastructure Install a cluster into a government region on installer-provisioned infrastructure
6.5. Installing a cluster quickly on Azure In OpenShift Container Platform version 4.11, you can install a cluster on Microsoft Azure that uses the default configuration options.
6.5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
6.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
6.5.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required.
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output Agent pid 31874
Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program.
6.5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
6.5.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory>, specify the directory name to store the files that the installation program creates.
2 To view different installation details, specify warn, debug, or error instead of info.
When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file, which contains Microsoft Azure profile information, in the ~/.azure/ directory on your computer, the installer prompts you to specify the following Azure parameter values for your subscription and service principal. azure subscription id: The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id: The tenant ID. Specify the tenantId value in your account output. azure service principal client id: The value of the appId parameter for the service principal. azure service principal client secret: The value of the password parameter for the service principal. Important After you enter values for the previously listed parameters, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. These actions ensure that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster.
Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager. Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log. Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
... INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
6.5.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc.
Installing the OpenShift CLI on Linux You can install the OpenShift CLI (oc) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:
$ echo $PATH
Verification After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
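For instance, one common way to complete the "place on your PATH" step above is the following; the destination /usr/local/bin is a conventional choice rather than a requirement:

# Move the unpacked client binary somewhere on PATH, then confirm it runs
sudo mv oc /usr/local/bin/
oc version --client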
Installing the OpenShift CLI on Windows You can install the OpenShift CLI (oc) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:
C:\> path
Verification After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS You can install the OpenShift CLI (oc) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:
$ echo $PATH
Verification After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
6.5.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output system:admin
Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
6.5.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console. After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service
6.5.9. Next steps Customize your cluster. If necessary, you can opt out of remote health reporting.
6.6. Installing a cluster on Azure with customizations In OpenShift Container Platform version 4.11, you can install a customized cluster on infrastructure that the installation program provisions on Microsoft Azure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 6.6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.6.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.6.4. Selecting an Azure Marketplace image If you are deploying an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you are going to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. 
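Before querying the Marketplace, you can optionally confirm that your Azure CLI session is active and targets the expected subscription (a minimal sketch; the --query filter shown is illustrative and not part of the documented procedure): USD az login USD az account show --query '{name:name, id:id}' --output table If the wrong subscription is listed, you can switch to the correct one with az account set --subscription <subscription_id> before continuing.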
Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 Note Regardless of the version of OpenShift Container Platform you are installing, the correct version of the Azure Marketplace image to use is 4.8.x. If required, as part of the installation process, your VMs are automatically upgraded. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer. You must update the compute section in the install-config.yaml file with values for publisher , offer , sku , and version before deploying the cluster. Sample install-config.yaml file with the Azure Marketplace worker nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: azure: type: Standard_D4s_v5 osImage: publisher: redhat offer: rh-ocp-worker sku: rh-ocp-worker version: 4.8.2021122100 replicas: 3 6.6.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster.
Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.6.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 
Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 6.6.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.6.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"you@example.com" }, "quay.io":{ "auth":"b3Blb=", "email":"you@example.com" } } } 6.6.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation.
networking.networkType The Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.6.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster.
The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. 
The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 6.6.6.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 6.4. Additional Azure parameters Parameter Description Values compute.platform.azure.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . compute.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . 
compute.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . compute.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . compute.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . compute.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If the instance type of the compute machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Accelerated or Basic . compute.platform.azure.type Defines the Azure instance type for compute machines. String compute.platform.azure.zones The availability zones where the installation program creates compute machines. String list controlPlane.platform.azure.type Defines the Azure instance type for control plane machines. String controlPlane.platform.azure.zones The availability zones where the installation program creates control plane machines. String list platform.azure.defaultMachinePlatform.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk.
premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.defaultMachinePlatform.zones The availability zones where the installation program creates compute and control plane machines. String list. controlPlane.platform.azure.encryptionAtHost Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . controlPlane.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . controlPlane.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . controlPlane.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . controlPlane.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If the instance type of the control plane machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Accelerated or Basic . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group .
platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.defaultMachinePlatform.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. String. platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. String. platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . platform.azure.defaultMachinePlatform.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If the instance type of the control plane and compute machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 6.6.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.6.6.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 6.1. Machine types c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 6.6.6.4. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 14 fips: false 15 sshKey: ssh-ed25519 AAAA... 16 1 10 12 14 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 Specify the name of the resource group that contains the DNS zone for your base domain. 13 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.6.6.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 6.6.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.6.8. Finalizing user-managed encryption after installation If you installed OpenShift Container Platform using a user-managed encryption key, you can complete the installation by creating a new storage class and granting write permissions to the Azure cluster resource group. Procedure Obtain the identity of the cluster resource group used by the installer: If you specified an existing resource group in install-config.yaml , obtain its Azure identity by running the following command: USD az identity list --resource-group "<existing_resource_group>" If you did not specify an existing resource group in install-config.yaml , locate the resource group that the installer created, and then obtain its Azure identity by running the following commands: USD az group list USD az identity list --resource-group "<installer_created_resource_group>" Grant a role assignment to the cluster resource group so that it can write to the Disk Encryption Set by running the following command: USD az role assignment create --role "<privileged_role>" \ 1 --assignee "<resource_group_identity>" 2 1 Specifies an Azure role that has read/write permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions. 2 Specifies the identity of the cluster resource group. Obtain the id of the disk encryption set you created prior to installation by running the following command: USD az disk-encryption-set show -n <disk_encryption_set_name> \ 1 --resource-group <resource_group_name> 2 1 Specifies the name of the disk encryption set. 2 Specifies the resource group that contains the disk encryption set. The id is in the format of "/subscriptions/... /resourceGroups/... /providers/Microsoft.Compute/diskEncryptionSets/... " . Obtain the identity of the cluster service principal by running the following command: USD az identity show -g <cluster_resource_group> \ 1 -n <cluster_service_principal_name> \ 2 --query principalId --out tsv 1 Specifies the name of the cluster resource group created by the installation program. 2 Specifies the name of the cluster service principal created by the installation program. The identity is in the format of 12345678-1234-1234-1234-1234567890 .
Create a role assignment that grants the cluster service principal Contributor privileges to the disk encryption set by running the following command: USD az role assignment create --assignee <cluster_service_principal_id> \ 1 --role 'Contributor' \ --scope <disk_encryption_set_id> 2 1 Specifies the ID of the cluster service principal obtained in the previous step. 2 Specifies the ID of the disk encryption set. Create a storage class that uses the user-managed disk encryption set: Save the following storage class definition to a file, for example storage-class-definition.yaml : kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: "<disk_encryption_set_ID>" 1 resourceGroup: "<resource_group_name>" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer 1 Specifies the ID of the disk encryption set that you created in the prerequisite steps, for example "/subscriptions/xxxxxx-xxxxx-xxxxx/resourceGroups/test-encryption/providers/Microsoft.Compute/diskEncryptionSets/disk-encryption-set-xxxxxx" . 2 Specifies the name of the resource group used by the installer. This is the same resource group from the first step. Create the storage class managed-premium from the file you created by running the following command: USD oc create -f storage-class-definition.yaml Select the managed-premium storage class when you create persistent volumes to use encrypted storage. 6.6.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure.
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 6.6.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 6.6.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 6.6.12. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 6.7. Installing a cluster on Azure with network customizations In OpenShift Container Platform version 4.11, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 6.7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . Manual mode can also be used in environments where the cloud IAM APIs are not reachable. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 6.7.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.7.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.7.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.7.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command:
USD ./openshift-install create install-config --dir <installation_directory> 1
1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 6.7.5.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file.
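Before working through the parameter tables, it can help to see the overall shape of the file. The following skeleton is a minimal sketch, not a working configuration: it shows only the required top-level keys from the table below, with placeholder values borrowed from the sample file later in this section:
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  azure:
    region: centralus
pullSecret: '{"auths": ...}'
6.7.5.1.1.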
Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.6. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.7.5.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.7. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. 
For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.7.5.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.8. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . 
The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . 
Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 6.7.5.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 6.9. Additional Azure parameters Parameter Description Values compute.platform.azure.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . compute.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . compute.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different than the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . compute.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . compute.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . compute.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. 
If the instance type of the compute machines supports accelerated networking, the installer enables accelerated networking by default; otherwise, the default networking type is Basic . Accelerated or Basic . compute.platform.azure.type Defines the Azure instance type for compute machines. String compute.platform.azure.zones The availability zones where the installation program creates compute machines. String list controlPlane.platform.azure.type Defines the Azure instance type for control plane machines. String controlPlane.platform.azure.zones The availability zones where the installation program creates control plane machines. String list platform.azure.defaultMachinePlatform.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.defaultMachinePlatform.zones The availability zones where the installation program creates compute and control plane machines. String list. controlPlane.platform.azure.encryptionAtHost Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . controlPlane.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different than the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption.
String, for example production_encryption_resource_group . controlPlane.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . controlPlane.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . controlPlane.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If the instance type of the control plane machines supports accelerated networking, the installer enables accelerated networking by default; otherwise, the default networking type is Basic . Accelerated or Basic . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.defaultMachinePlatform.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName .
String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. String. platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. String. platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . platform.azure.defaultMachinePlatform.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If the instance type of the control plane and compute machines supports accelerated networking, the installer enables accelerated networking by default; otherwise, the default networking type is Basic . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 6.7.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.10. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. For example, a machine with two threads per core, four cores, and one socket provides (2 x 4) x 1 = 8 vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 6.7.5.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 6.2. Machine types c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 6.7.5.4. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only.
You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      encryptionAtHost: true
      ultraSSDCapability: Enabled
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7
  name: worker
  platform:
    azure:
      ultraSSDCapability: Enabled
      type: Standard_D2s_v3
      encryptionAtHost: true
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking: 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    defaultMachinePlatform:
      ultraSSDCapability: Enabled
    baseDomainResourceGroupName: resource_group 12
    region: centralus 13
    resourceGroupName: existing_resource_group 14
    outboundType: LoadBalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17
1 10 13 15 Required. The installation program prompts you for this value. 2 6 11 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 12 Specify the name of the resource group that contains the DNS zone for your base domain. 14 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 16 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode.
For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 17 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.7.5.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
USD ./openshift-install wait-for install-complete --log-level debug
Save the file and reference it when installing OpenShift Container Platform.
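To further illustrate the noProxy syntax described above, the following fragment, with placeholder values only, would bypass the proxy for every subdomain of y.com and for a 10.0.0.0/16 machine network:
proxy:
  noProxy: .y.com,10.0.0.0/16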
The installation program creates a cluster-wide proxy that is named cluster and that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.7.6. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Important The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 6.7.7. Specifying advanced network configuration You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests:
USD ./openshift-install create manifests --dir <installation_directory> 1
1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    openshiftSDNConfig:
      vxlanPort: 4800
Enable IPsec for the OVN-Kubernetes network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig: {}
Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files.
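If you later want to confirm what the Cluster Network Operator derived from these manifests, one option after the cluster is running, assuming cluster-admin access, is to print the configuration object whose fields the next section describes:
USD oc get network.operator.openshift.io cluster -o yaml
6.7.8.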
Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 6.7.8.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 6.11. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 6.12. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 6.13. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. 
This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration
defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789
Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 6.14. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 6.15. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB.
destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 6.16. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPsec enabled
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig: {}
kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 6.17. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is:
kubeProxyConfig:
  proxyArguments:
    iptables-min-sync-period:
    - 0s
6.7.9. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster. Important You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests:
USD ./openshift-install create manifests --dir <installation_directory>
where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster.
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
EOF
where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example: Specify a hybrid networking configuration
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: 1
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898 2
1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 6.7.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment:
USD ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates.
Both are required to delete the cluster. Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.7.11. Finalizing user-managed encryption after installation If you installed OpenShift Container Platform using a user-managed encryption key, you can complete the installation by creating a new storage class and granting write permissions to the Azure cluster resource group. Procedure Obtain the identity of the cluster resource group used by the installer: If you specified an existing resource group in install-config.yaml , obtain its Azure identity by running the following command:
USD az identity list --resource-group "<existing_resource_group>"
If you did not specify an existing resource group in install-config.yaml , locate the resource group that the installer created, and then obtain its Azure identity by running the following commands:
USD az group list
USD az identity list --resource-group "<installer_created_resource_group>"
Grant a role assignment to the cluster resource group so that it can write to the Disk Encryption Set by running the following command:
USD az role assignment create --role "<privileged_role>" \ 1
    --assignee "<resource_group_identity>" 2
1 Specifies an Azure role that has read/write permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions. 2 Specifies the identity of the cluster resource group. Obtain the id of the disk encryption set you created prior to installation by running the following command:
USD az disk-encryption-set show -n <disk_encryption_set_name> \ 1
    --resource-group <resource_group_name> 2
1 Specifies the name of the disk encryption set. 2 Specifies the resource group that contains the disk encryption set. The id is in the format of "/subscriptions/... /resourceGroups/... /providers/Microsoft.Compute/diskEncryptionSets/... " . Obtain the identity of the cluster service principal by running the following command:
USD az identity show -g <cluster_resource_group> \ 1
    -n <cluster_service_principal_name> \ 2
    --query principalId --out tsv
1 Specifies the name of the cluster resource group created by the installation program. 2 Specifies the name of the cluster service principal created by the installation program.
The identity is in the format of 12345678-1234-1234-1234-1234567890 . Create a role assignment that grants the cluster service principal Contributor privileges to the disk encryption set by running the following command: USD az role assignment create --assignee <cluster_service_principal_id> \ 1 --role 'Contributor' \ --scope <disk_encryption_set_id> 2 1 Specifies the ID of the cluster service principal obtained in the previous step. 2 Specifies the ID of the disk encryption set. Create a storage class that uses the user-managed disk encryption set: Save the following storage class definition to a file, for example storage-class-definition.yaml : kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: "<disk_encryption_set_ID>" 1 resourceGroup: "<resource_group_name>" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer 1 Specifies the ID of the disk encryption set that you created in the prerequisite steps, for example "/subscriptions/xxxxxx-xxxxx-xxxxx/resourceGroups/test-encryption/providers/Microsoft.Compute/diskEncryptionSets/disk-encryption-set-xxxxxx" . 2 Specifies the name of the resource group used by the installer. This is the same resource group from the first step. Create the storage class managed-premium from the file you created by running the following command: USD oc create -f storage-class-definition.yaml Select the managed-premium storage class when you create persistent volumes to use encrypted storage. 6.7.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure.
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 6.7.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 6.7.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 6.7.15. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 6.8. Installing a cluster on Azure into an existing VNet In OpenShift Container Platform version 4.11, you can install a cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 6.8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 6.8.2. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.11, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 6.8.2.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones.
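Because the installation program does not create these components for you in this scenario, one approach is to create the VNet and its two subnets with the Azure CLI before you run the installation program. The following commands are a sketch only: the resource group, VNet, and subnet names match the examples used later in this document ( vnet_resource_group , vnet , control_plane_subnet , and compute_subnet ), and the address prefixes are illustrative values chosen to fit inside a 10.0.0.0/16 machine network. Adjust all of them for your environment, and note that older Azure CLI versions use the singular --address-prefix flag:
USD az network vnet create --resource-group vnet_resource_group --name vnet --address-prefixes 10.0.0.0/16
USD az network vnet subnet create --resource-group vnet_resource_group --vnet-name vnet --name control_plane_subnet --address-prefixes 10.0.0.0/24
USD az network vnet subnet create --resource-group vnet_resource_group --vnet-name vnet --name compute_subnet --address-prefixes 10.0.1.0/24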
To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 6.8.2.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 6.18. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Additional resources About the OpenShift SDN network plugin 6.8.2.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.
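For example, an administrator with networking permissions can pre-create the inbound rules from Table 6.18 on an existing network security group before handing the environment over for installation. The following Azure CLI command is a sketch only: the resource group and network security group names are placeholders, the priority is illustrative, and you would add similar rules for ports 80, 443, and 22623:
USD az network nsg rule create --resource-group vnet_resource_group --nsg-name cluster_nsg --name allow-kube-apiserver --priority 101 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 6443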
6.8.2.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 6.8.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.8.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added.
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.8.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.8.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory.
Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 6.8.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.8.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.19. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. 
The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.8.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.20. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format.
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.8.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.21. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . 
controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . 
sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 6.8.6.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 6.22. Additional Azure parameters Parameter Description Values compute.platform.azure.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . compute.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . compute.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different than the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . compute.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . compute.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . compute.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If the instance type of the compute machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Accelerated or Basic . compute.platform.azure.type Defines the Azure instance type for compute machines. String compute.platform.azure.zones The availability zones where the installation program creates compute machines.
String list controlPlane.platform.azure.type Defines the Azure instance type for control plane machines. String controlPlane.platform.azure.zones The availability zones where the installation program creates control plane machines. String list platform.azure.defaultMachinePlatform.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.defaultMachinePlatform.zones The availability zones where the installation program creates compute and control plane machines. String list. controlPlane.platform.azure.encryptionAtHost Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . controlPlane.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different than the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . controlPlane.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . controlPlane.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. 
This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . controlPlane.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If the instance type of the control plane machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Accelerated or Basic . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.defaultMachinePlatform.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. String, for example control_plane_subnet . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to.
String, for example compute_subnet . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . platform.azure.defaultMachinePlatform.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If the instance type of the control plane and compute machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 6.8.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.23. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 6.8.6.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 6.3. Machine types Standard_D2s_v3 Standard_D4s_v3 Standard_D8s_v3 Standard_D16s_v3 Standard_D32s_v3 Standard_E4s_v3 Standard_E8s_v3 Standard_F8s_v2 Standard_F16s_v2 6.8.6.4. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 20 1 10 12 18 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 Specify the name of the resource group that contains the DNS zone for your base domain. 13 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 14 If you use an existing VNet, specify the name of the resource group that contains it. 15 If you use an existing VNet, specify its name. 16 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 17 If you use an existing VNet, specify the name of the subnet to host the compute machines. 19 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 20 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.8.6.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
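After the cluster is installed, you can confirm that these settings were applied by inspecting the cluster-wide Proxy object, for example: USD oc get proxy/cluster -o yaml The spec section of the output should contain the httpProxy , httpsProxy , and noProxy values from your install-config.yaml file.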
Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 6.8.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.8.8. Finalizing user-managed encryption after installation If you installed OpenShift Container Platform using a user-managed encryption key, you can complete the installation by creating a new storage class and granting write permissions to the Azure cluster resource group. Procedure Obtain the identity of the cluster resource group used by the installer: If you specified an existing resource group in install-config.yaml , obtain its Azure identity by running the following command: USD az identity list --resource-group "<existing_resource_group>" If you did not specify an existing resource group in install-config.yaml , locate the resource group that the installer created, and then obtain its Azure identity by running the following commands: USD az group list USD az identity list --resource-group "<installer_created_resource_group>" Grant a role assignment to the cluster resource group so that it can write to the Disk Encryption Set by running the following command: USD az role assignment create --role "<privileged_role>" \ 1 --assignee "<resource_group_identity>" 2 1 Specifies an Azure role that has read/write permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions. 2 Specifies the identity of the cluster resource group. Obtain the id of the disk encryption set you created prior to installation by running the following command: USD az disk-encryption-set show -n <disk_encryption_set_name> \ 1 --resource-group <resource_group_name> 2 1 Specifies the name of the disk encryption set. 2 Specifies the resource group that contains the disk encryption set. The id is in the format of "/subscriptions/... /resourceGroups/... /providers/Microsoft.Compute/diskEncryptionSets/... " . Obtain the identity of the cluster service principal by running the following command: USD az identity show -g <cluster_resource_group> \ 1 -n <cluster_service_principal_name> \ 2 --query principalId --out tsv 1 Specifies the name of the cluster resource group created by the installation program. 2 Specifies the name of the cluster service principal created by the installation program. The identity is in the format of 12345678-1234-1234-1234-1234567890 . Create a role assignment that grants the cluster service principal Contributor privileges to the disk encryption set by running the following command: USD az role assignment create --assignee <cluster_service_principal_id> \ 1 --role 'Contributor' \ --scope <disk_encryption_set_id> 2 1 Specifies the ID of the cluster service principal obtained in the previous step. 2 Specifies the ID of the disk encryption set. Create a storage class that uses the user-managed disk encryption set: Save the following storage class definition to a file, for example storage-class-definition.yaml : kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: "<disk_encryption_set_ID>" 1 resourceGroup: "<resource_group_name>" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer 1 Specifies the ID of the disk encryption set that you created in the prerequisite steps, for example "/subscriptions/xxxxxx-xxxxx-xxxxx/resourceGroups/test-encryption/providers/Microsoft.Compute/diskEncryptionSets/disk-encryption-set-xxxxxx" .
2 Specifies the name of the resource group used by the installer. This is the same resource group from the first step. Create the storage class managed-premium from the file you created by running the following command: USD oc create -f storage-class-definition.yaml Select the managed-premium storage class when you create persistent volumes to use encrypted storage. 6.8.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 6.8.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI.
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 6.8.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.8.12. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 6.9. Installing a private cluster on Azure In OpenShift Container Platform version 4.11, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 6.9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 6.9.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.
Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 6.9.2.1. Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to the internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 6.9.2.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 6.9.2.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Private cluster with network address translation You can use Azure VNET network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with Azure Firewall You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation.
When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with a proxy configuration You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints. Private cluster with no internet access You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following: An OpenShift image registry mirror that allows for pulling container images Access to Azure APIs With these requirements available, you can use user-defined routing to create private clusters with no public endpoints. 6.9.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.11, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 6.9.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.
The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 6.9.3.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.
Table 6.24. Required ports
Port | Description | Control plane | Compute
80 | Allows HTTP traffic | | x
443 | Allows HTTPS traffic | | x
6443 | Allows communication to the control plane machines | x |
22623 | Allows internal communication to the machine config server for provisioning machines | x |
Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Additional resources About the OpenShift SDN network plugin 6.9.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster.
This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 6.9.3.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 6.9.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.9.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one.
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.9.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.9.7.
Manually creating the installation configuration file For installations of a private OpenShift Container Platform cluster that are only accessible from an internal network and are not visible to the internet, you must manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 6.9.7.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.9.7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.25. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev .
platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.9.7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.26. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 .
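Taken together, a networking stanza that simply restates the documented defaults from this table would look like the following sketch; because these are the defaults, omitting the stanza entirely has the same effect:
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16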
Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.9.7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.27. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 6.9.7.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. 
Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 6.28. Additional Azure parameters Parameter Description Values compute.platform.azure.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . compute.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . compute.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . compute.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . compute.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . compute.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If the instance type of the compute machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Accelerated or Basic . compute.platform.azure.type Defines the Azure instance type for compute machines. String compute.platform.azure.zones The availability zones where the installation program creates compute machines. String list controlPlane.platform.azure.type Defines the Azure instance type for control plane machines. String controlPlane.platform.azure.zones The availability zones where the installation program creates control plane machines. String list platform.azure.defaultMachinePlatform.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption.
This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.defaultMachinePlatform.zones The availability zones where the installation program creates compute and control plane machines. String list. controlPlane.platform.azure.encryptionAtHost Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . controlPlane.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different than the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . controlPlane.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . controlPlane.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . 
controlPlane.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . controlPlane.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If the instance type of the control plane machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Accelerated or Basic . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.defaultMachinePlatform.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. String, for example control_plane_subnet . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. String, for example compute_subnet . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . platform.azure.defaultMachinePlatform.vmNetworkingType Enables accelerated networking.
Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If the instance type of the control plane and compute machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 6.9.7.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements:
Table 6.29. Minimum resource requirements
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2]
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300
[1] One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs.
[2] OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
[3] As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 6.9.7.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 6.4. Machine types c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 6.9.7.4. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: UserDefinedRouting 18 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 1 10 12 19 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 Specify the name of the resource group that contains the DNS zone for your base domain. 13 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 14 If you use an existing VNet, specify the name of the resource group that contains it. 15 If you use an existing VNet, specify its name. 16 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 17 If you use an existing VNet, specify the name of the subnet to host the compute machines. 18 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. 
User-defined routing for egress requires deploying your cluster to an existing VNet. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 6.9.7.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 6.9.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.9.9. Finalizing user-managed encryption after installation If you installed OpenShift Container Platform using a user-managed encryption key, you can complete the installation by creating a new storage class and granting write permissions to the Azure cluster resource group. Procedure Obtain the identity of the cluster resource group used by the installer: If you specified an existing resource group in install-config.yaml , obtain its Azure identity by running the following command: USD az identity list --resource-group "<existing_resource_group>" If you did not specify an existing resource group in install-config.yaml , locate the resource group that the installer created, and then obtain its Azure identity by running the following commands: USD az group list USD az identity list --resource-group "<installer_created_resource_group>" Grant a role assignment to the cluster resource group so that it can write to the Disk Encryption Set by running the following command: USD az role assignment create --role "<privileged_role>" \ 1 --assignee "<resource_group_identity>" 2 1 Specifies an Azure role that has read/write permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions. 2 Specifies the identity of the cluster resource group. Obtain the id of the disk encryption set you created prior to installation by running the following command: USD az disk-encryption-set show -n <disk_encryption_set_name> \ 1 --resource-group <resource_group_name> 2 1 Specifies the name of the disk encryption set. 2 Specifies the resource group that contains the disk encryption set. The id is in the format of "/subscriptions/... /resourceGroups/... /providers/Microsoft.Compute/diskEncryptionSets/... " . Obtain the identity of the cluster service principal by running the following command: USD az identity show -g <cluster_resource_group> \ 1 -n <cluster_service_principal_name> \ 2 --query principalId --out tsv 1 Specifies the name of the cluster resource group created by the installation program. 2 Specifies the name of the cluster service principal created by the installation program. The identity is in the format of 12345678-1234-1234-1234-1234567890 . Create a role assignment that grants the cluster service principal Contributor privileges to the disk encryption set by running the following command: USD az role assignment create --assignee <cluster_service_principal_id> \ 1 --role 'Contributor' \ --scope <disk_encryption_set_id> 2 1 Specifies the ID of the cluster service principal obtained in the previous step. 2 Specifies the ID of the disk encryption set.
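Optionally, you can verify that the assignment exists before continuing. This check is not part of the documented procedure, but the Azure CLI supports listing role assignments for an assignee at a given scope:
USD az role assignment list --assignee <cluster_service_principal_id> --scope <disk_encryption_set_id>
The output should include an entry with the Contributor role; the placeholders are the same values that were used in the previous command.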
Create a storage class that uses the user-managed disk encryption set: Save the following storage class definition to a file, for example storage-class-definition.yaml : kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: "<disk_encryption_set_ID>" 1 resourceGroup: "<resource_group_name>" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer 1 Specifies the ID of the disk encryption set that you created in the prerequisite steps, for example "/subscriptions/xxxxxx-xxxxx-xxxxx/resourceGroups/test-encryption/providers/Microsoft.Compute/diskEncryptionSets/disk-encryption-set-xxxxxx" . 2 Specifies the name of the resource group used by the installer. This is the same resource group from the first step. Create the storage class managed-premium from the file you created by running the following command: USD oc create -f storage-class-definition.yaml Select the managed-premium storage class when you create persistent volumes to use encrypted storage. 6.9.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH.
To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 6.9.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 6.9.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.9.13. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 6.10. Installing a cluster on Azure into a government region In OpenShift Container Platform version 4.11, you can install a cluster on Microsoft Azure into a government region. To configure the government region, you modify parameters in the install-config.yaml file before you install the cluster. 6.10.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated government region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 6.10.2. Azure government regions OpenShift Container Platform supports deploying a cluster to Microsoft Azure Government (MAG) regions. MAG is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure.
MAG is composed of government-only data center regions, all granted an Impact Level 5 Provisional Authorization . Installing to a MAG region requires manually configuring the Azure Government dedicated cloud instance and region in the install-config.yaml file. You must also update your service principal to reference the appropriate government environment. Note The Azure government region cannot be selected using the guided terminal prompts from the installation program. You must define the region manually in the install-config.yaml file. Remember to also set the dedicated cloud instance, like AzureUSGovernmentCloud , based on the region specified. 6.10.3. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 6.10.3.1. Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to the internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 6.10.3.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 6.10.3.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster.
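For example, a minimal sketch of the change in install-config.yaml is a single field under the Azure platform stanza, with the same value that the samples in this document use:
platform:
  azure:
    outboundType: UserDefinedRouting
As described next, this setting also requires a pre-existing VNet that already has outbound routing configured.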
A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless you use an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Private cluster with network address translation You can use Azure VNet network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with Azure Firewall You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation. When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with a proxy configuration You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints. Private cluster with no internet access You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following: An OpenShift image registry mirror that allows for pulling container images Access to Azure APIs With these requirements available, you can use user-defined routing to create private clusters with no public endpoints. 6.10.4. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.11, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.
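For example, reusing a VNet touches only a few fields in install-config.yaml . The following sketch uses the same placeholder names as the sample file later in this document; substitute the resource group, VNet, and subnet names from your own environment:
platform:
  azure:
    networkResourceGroupName: vnet_resource_group
    virtualNetwork: vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet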
6.10.4.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 6.10.4.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster.
If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 6.30. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Additional resources About the OpenShift SDN network plugin 6.10.4.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 6.10.4.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 6.10.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. 
With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.10.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.10.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.10.8. Manually creating the installation configuration file When installing OpenShift Container Platform on Microsoft Azure into a government region, you must manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.10.8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.31. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.10.8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.32. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. 
The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.10.8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.33. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. 
false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 6.10.8.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 6.34. Additional Azure parameters Parameter Description Values compute.platform.azure.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . compute.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . compute.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different than the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . compute.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . compute.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. 
This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . compute.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If the instance type of the compute machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Accelerated or Basic . compute.platform.azure.type Defines the Azure instance type for compute machines. String compute.platform.azure.zones The availability zones where the installation program creates compute machines. String list controlPlane.platform.azure.type Defines the Azure instance type for control plane machines. String controlPlane.platform.azure.zones The availability zones where the installation program creates control plane machines. String list platform.azure.defaultMachinePlatform.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.defaultMachinePlatform.zones The availability zones where the installation program creates compute and control plane machines. String list. controlPlane.platform.azure.encryptionAtHost Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . controlPlane.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites.
This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . controlPlane.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . controlPlane.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . controlPlane.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If the instance type of the control plane machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Accelerated or Basic . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.defaultMachinePlatform.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available.
Enabled , Disabled . The default is Disabled . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. String, for example control_plane_subnet . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. String, for example compute_subnet . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . platform.azure.defaultMachinePlatform.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If the instance type of the control plane and compute machines supports Accelerated networking, the installer enables Accelerated networking by default; otherwise, the default networking type is Basic . Note You cannot customize Azure Availability Zones or use tags to organize your Azure resources with an Azure cluster. 6.10.8.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.35. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.10.8.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 6.5. Machine types c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 6.10.8.4.
Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 11 region: usgovvirginia resourceGroupName: existing_resource_group 12 networkResourceGroupName: vnet_resource_group 13 virtualNetwork: vnet 14 controlPlaneSubnet: control_plane_subnet 15 computeSubnet: compute_subnet 16 outboundType: UserDefinedRouting 17 cloudName: AzureUSGovernmentCloud 18 pullSecret: '{"auths": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 1 10 19 Required. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 Specify the name of the resource group that contains the DNS zone for your base domain. 12 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 13 If you use an existing VNet, specify the name of the resource group that contains it. 14 If you use an existing VNet, specify its name. 
15 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 16 If you use an existing VNet, specify the name of the subnet to host the compute machines. 17 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 18 Specify the name of the Azure cloud environment to deploy your cluster to. Set AzureUSGovernmentCloud to deploy to a Microsoft Azure Government (MAG) region. The default value is AzurePublicCloud . 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 6.10.8.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 6.10.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.10.10. Finalizing user-managed encryption after installation If you installed OpenShift Container Platform using a user-managed encryption key, you can complete the installation by creating a new storage class and granting write permissions to the Azure cluster resource group. Procedure Obtain the identity of the cluster resource group used by the installer: If you specified an existing resource group in install-config.yaml , obtain its Azure identity by running the following command: USD az identity list --resource-group "<existing_resource_group>" If you did not specify an existing resource group in install-config.yaml , locate the resource group that the installer created, and then obtain its Azure identity by running the following commands: USD az group list USD az identity list --resource-group "<installer_created_resource_group>" Grant a role assignment to the cluster resource group so that it can write to the Disk Encryption Set by running the following command: USD az role assignment create --role "<privileged_role>" \ 1 --assignee "<resource_group_identity>" 2 1 Specifies an Azure role that has read/write permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions. 2 Specifies the identity of the cluster resource group. Obtain the id of the disk encryption set you created prior to installation by running the following command: USD az disk-encryption-set show -n <disk_encryption_set_name> \ 1 --resource-group <resource_group_name> 2 1 Specifies the name of the disk encryption set. 2 Specifies the resource group that contains the disk encryption set. The id is in the format of "/subscriptions/... /resourceGroups/... /providers/Microsoft.Compute/diskEncryptionSets/... " . Obtain the identity of the cluster service principal by running the following command: USD az identity show -g <cluster_resource_group> \ 1 -n <cluster_service_principal_name> \ 2 --query principalId --out tsv 1 Specifies the name of the cluster resource group created by the installation program. 2 Specifies the name of the cluster service principal created by the installation program. The identity is in the format of 12345678-1234-1234-1234-1234567890 .
Create a role assignment that grants the cluster service principal Contributor privileges to the disk encryption set by running the following command: USD az role assignment create --assignee <cluster_service_principal_id> \ 1 --role 'Contributor' \ --scope <disk_encryption_set_id> 2 1 Specifies the ID of the cluster service principal obtained in the preceding step. 2 Specifies the ID of the disk encryption set. Create a storage class that uses the user-managed disk encryption set: Save the following storage class definition to a file, for example storage-class-definition.yaml : kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: "<disk_encryption_set_ID>" 1 resourceGroup: "<resource_group_name>" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer 1 Specifies the ID of the disk encryption set that you created in the prerequisite steps, for example "/subscriptions/xxxxxx-xxxxx-xxxxx/resourceGroups/test-encryption/providers/Microsoft.Compute/diskEncryptionSets/disk-encryption-set-xxxxxx" . 2 Specifies the name of the resource group used by the installer. This is the same resource group from the first step. Create the storage class managed-premium from the file you created by running the following command: USD oc create -f storage-class-definition.yaml Select the managed-premium storage class when you create persistent volumes to use encrypted storage. 6.10.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure.
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 6.10.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 6.10.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.10.14. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 6.11. Installing a cluster on Azure using ARM templates In OpenShift Container Platform version 4.11, you can install a cluster on Microsoft Azure by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 6.11.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster.
You downloaded the Azure CLI and installed it on your computer. See Install the Azure CLI in the Azure documentation. The documentation below was last tested using version 2.38.0 of the Azure CLI. Azure CLI commands might perform differently based on the version you use. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . Note Be sure to also review this site list if you are configuring a proxy. 6.11.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.11.3. Configuring your Azure project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 6.11.3.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 40 20 per region A default cluster requires 40 vCPUs, so you must increase the account limit. 
By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap machine uses Standard_D4s_v3 machines, which use 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. The bootstrap node VM, which uses 4 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage . 6.11.3.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. 
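If you prefer to script this step instead of using the Azure portal, the public zone can also be created with the Azure CLI. The following commands are a minimal sketch, not part of the official procedure; <resource_group> and <base_domain> are placeholders that you substitute with your own values:
USD az network dns zone create --resource-group <resource_group> --name <base_domain>
USD az network dns zone show --resource-group <resource_group> --name <base_domain> --query nameServers
The second command prints the authoritative name servers for the new zone; update your registrar with these values so that the zone becomes authoritative for the domain.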
Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. You can view Azure's DNS solution by visiting this example for creating DNS zones . 6.11.3.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click Next : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click Next : Review + create and then click Create . 6.11.3.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 6.11.3.5. Required Azure roles OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, review the following information: Your Azure account subscription must have the following roles: User Access Administrator Contributor Your Azure Active Directory (AD) must have the following permission: "microsoft.directory/servicePrincipals/createAsOwner" To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 6.11.3.6. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it.
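Before you create the service principal, you can confirm that your signed-in account holds the roles described in the preceding section. This optional check is a sketch only; depending on your Azure CLI version, the object ID of the signed-in user is returned in the id field:
USD az ad signed-in-user show --query id -o tsv
USD az role assignment list --assignee <object_id> --query "[].roleDefinitionName" -o tsv
The second command lists the role names assigned to the given object ID; the output should include User Access Administrator and Contributor .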
Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Log in to the Azure CLI: USD az login If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> \ 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Assign the User Access Administrator role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 1 Specify the appId parameter value for your service principal. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 6.11.3.7. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription.
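To inspect the regions that your subscription can actually deploy to, you can query Azure directly. A minimal sketch using the Azure CLI:
USD az account list-locations --query "[].name" -o tsv
The region names that this command returns, such as centralus , are the values that you supply for the platform.azure.region parameter.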
Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 6.11.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 6.11.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 6.36. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 6.11.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.37. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.11.4.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 6.6. Machine types c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 6.11.5. Selecting an Azure Marketplace image If you are deploying an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you are going to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. 
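Before querying the Marketplace, you can confirm which account and subscription the CLI session is using. A minimal check:
USD az account show -o table
If the wrong subscription is active, switch to the correct one with az account set -s <subscription_id> before you continue.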
Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 Note Regardless of the version of OpenShift Container Platform you are installing, the correct version of the Azure Marketplace image to use is 4.8.x. If required, as part of the installation process, your VMs are automatically upgraded. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer. If you use the Azure Resource Manager (ARM) template to deploy your worker nodes: Update storageProfile.imageReference by deleting the id parameter and adding the offer , publisher , sku , and version parameters by using the values from your offer. Specify a plan for the virtual machines (VMs). Example 06_workers.json ARM template with an updated storageProfile.imageReference object and a specified plan ... "plan" : { "name": "rh-ocp-worker", "product": "rh-ocp-worker", "publisher": "redhat" }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { ... "storageProfile": { "imageReference": { "offer": "rh-ocp-worker", "publisher": "redhat", "sku": "rh-ocp-worker", "version": "4.8.2021122100" } ... } ... } 6.11.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.11.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added.
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 6.11.8. Creating the installation files for Azure To install OpenShift Container Platform on Microsoft Azure using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 6.11.8.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories.
Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.11.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 6.11.8.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. 
6.11.8.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Optional: If you do not want the cluster to provision compute machines, empty the compute pool by editing the resulting install-config.yaml file to set replicas to 0 for the compute pool: compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1 1 Set to 0 . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
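Note The backup and the compute-pool edit can also be scripted. The following sketch is an informal convenience, not part of the documented procedure; it assumes the yq v4 utility is installed and that install-config.yaml is in the current directory: USD cp install-config.yaml install-config.yaml.bak USD yq -i '.compute[0].replicas = 0' install-config.yaml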
6.11.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created.
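Note Once the cluster is running, you can verify that the proxy configuration was applied. This optional check is not part of the documented procedure: USD oc get proxy/cluster -o yaml The spec stanza should reflect the httpProxy , httpsProxy , and noProxy values from your install-config.yaml file.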
6.11.8.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates to assist in completing a user-provisioned infrastructure installation on Microsoft Azure. Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into, for example centralus . This is the value of the .platform.azure.region attribute from the install-config.yaml file. 3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the public DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
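Note If you prefer not to copy the values by hand, most of them can be read directly from install-config.yaml . This sketch is an informal alternative, not part of the documented procedure; it assumes the yq v4 utility is installed and is run from the directory that contains the file: USD export CLUSTER_NAME=`yq '.metadata.name' install-config.yaml` USD export AZURE_REGION=`yq '.platform.azure.region' install-config.yaml` USD export SSH_KEY="`yq '.sshKey' install-config.yaml`" USD export BASE_DOMAIN=`yq '.baseDomain' install-config.yaml` USD export BASE_DOMAIN_RESOURCE_GROUP=`yq '.platform.azure.baseDomainResourceGroupName' install-config.yaml`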
6.11.8.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exist as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory.
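Note The two export values can also be read directly from the manifest instead of being typed manually. This sketch is an informal convenience, not part of the documented procedure; it assumes the yq v4 utility is installed and must run from the installation directory while the manifests still exist, that is, before the create ignition-configs step, which consumes the manifests directory: USD export INFRA_ID=`yq '.status.infrastructureName' manifests/cluster-infrastructure-02-config.yml` USD export RESOURCE_GROUP=`yq '.status.platformStatus.azure.resourceGroupName' manifests/cluster-infrastructure-02-config.yml`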
6.11.9. Creating the Azure resource group You must create a Microsoft Azure resource group and an identity for that resource group. These are both used during the installation of your OpenShift Container Platform cluster on Azure. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} Create an Azure identity for the resource group: USD az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity This is used to grant the required access to Operators in your cluster. For example, this allows the Ingress Operator to create a public IP and its load balancer. You must assign the Azure identity to a role. Grant the Contributor role to the Azure identity: Export the following variables required by the Azure role assignment: USD export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv` USD export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv` Assign the Contributor role to the identity: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role 'Contributor' --scope "USD{RESOURCE_GROUP_ID}" 6.11.10. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally. You must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Export the URL of the RHCOS VHD to an environment variable: USD export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.x86_64."rhel-coreos-extensions"."azure-disk".url'` Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. Create the storage container for the VHD: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} Copy the RHCOS VHD to a blob: USD az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "USD{VHD_URL}" Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign"
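Note The blob copy runs asynchronously, so it is worth confirming that it has finished before you deploy the image from it. This optional check is not part of the documented procedure: USD az storage blob show --container-name vhd --name "rhcos.vhd" --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --query "properties.copy.status" -o tsv Repeat the command until it prints success .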
6.11.11. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario. For this example, Azure's DNS solution is used, so you will create a new public DNS zone for external (internet) visibility and a private DNS zone for internal cluster resolution. Note The public DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the public DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new public DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a public DNS zone that already exists. Create the private DNS zone in the same resource group as the rest of this deployment: USD az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can learn more about configuring a public DNS zone in Azure by visiting the Configuring a public DNS zone in Azure section. 6.11.12. Creating a VNet in Azure You must create a virtual network (VNet) in Microsoft Azure for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Link the VNet template to the private DNS zone: USD az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v "USD{INFRA_ID}-vnet" -e false 6.11.12.1. ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 6.7.
01_vnet.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "addressPrefix" : "10.0.0.0/16", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetPrefix" : "10.0.0.0/24", "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", "nodeSubnetPrefix" : "10.0.1.0/24", "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/virtualNetworks", "name" : "[variables('virtualNetworkName')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]" ], "properties" : { "addressSpace" : { "addressPrefixes" : [ "[variables('addressPrefix')]" ] }, "subnets" : [ { "name" : "[variables('masterSubnetName')]", "properties" : { "addressPrefix" : "[variables('masterSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } }, { "name" : "[variables('nodeSubnetName')]", "properties" : { "addressPrefix" : "[variables('nodeSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } } ] } }, { "type" : "Microsoft.Network/networkSecurityGroups", "name" : "[variables('clusterNsgName')]", "apiVersion" : "2018-10-01", "location" : "[variables('location')]", "properties" : { "securityRules" : [ { "name" : "apiserver_in", "properties" : { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "6443", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 101, "direction" : "Inbound" } } ] } } ] } 6.11.13. Deploying the RHCOS cluster image for the Azure infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 6.11.13.1. 
ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 6.8. 02_storage.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vhdBlobURL" : { "type" : "string", "metadata" : { "description" : "URL pointing to the blob where the VHD to be used to create master and worker machines is located" } } }, "variables" : { "location" : "[resourceGroup().location]", "imageName" : "[concat(parameters('baseName'), '-image')]", "imageNameGen2" : "[concat(parameters('baseName'), '-gen2')]" }, "resources" : [ { "apiVersion" : "2018-06-01", "type": "Microsoft.Compute/images", "name": "[variables('imageName')]", "location" : "[variables('location')]", "properties": { "storageProfile": { "osDisk": { "osType": "Linux", "osState": "Generalized", "blobUri": "[parameters('vhdBlobURL')]", "storageAccountType": "Standard_LRS" } } } }, { "apiVersion": "2020-12-01", "type": "Microsoft.Compute/images", "name": "[variables('imageNameGen2')]", "location": "[variables('location')]", "properties": { "hyperVGeneration": "V2", "storageProfile": { "osDisk": { "osType": "Linux", "osState": "Generalized", "blobUri": "[parameters('vhdBlobURL')]", "storageAccountType": "Standard_LRS" } } } } ] } 6.11.14. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 6.11.14.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat.

Table 6.38. Ports used for all-machine to all-machine communications

Protocol | Port | Description
ICMP | N/A | Network reachability tests
TCP | 1936 | Metrics
TCP | 9000 - 9999 | Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 .
TCP | 10250 - 10259 | The default ports that Kubernetes reserves
TCP | 10256 | openshift-sdn
UDP | 4789 | VXLAN
UDP | 6081 | Geneve
UDP | 9000 - 9999 | Host level services, including the node exporter on ports 9100 - 9101 .
UDP | 500 | IPsec IKE packets
UDP | 4500 | IPsec NAT-T packets
TCP/UDP | 30000 - 32767 | Kubernetes node port
ESP | N/A | IPsec Encapsulating Security Payload (ESP)

Table 6.39. Ports used for all-machine to control plane communications

Protocol | Port | Description
TCP | 6443 | Kubernetes API

Table 6.40. Ports used for control plane machine to control plane machine communications

Protocol | Port | Description
TCP | 2379 - 2380 | etcd server and peer ports
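Note If you need to confirm that the required ports are reachable between machines after they are provisioned, a quick TCP probe can help. The following is an informal sketch, not part of the documented requirements; it uses the bash /dev/tcp device, takes two ports from the tables above as examples, and uses a placeholder node address. It only tests TCP ports: USD for port in 6443 10250 ; do timeout 3 bash -c "</dev/tcp/<node_address>/USD{port}" && echo "port USD{port} open" || echo "port USD{port} closed" ; done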
6.11.15. Creating networking and load balancing components in Azure You must configure networking and load balancing in Microsoft Azure for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Procedure Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/03_infra.json" \ --parameters privateDNSZoneName="USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The name of the private DNS zone. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Create an api DNS record in the public zone for the API public load balancer. The USD{BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the public DNS zone exists. Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Create the api DNS record in a new public zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing public zone, you can create the api DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 6.11.15.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 6.9.
03_infra.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "privateDNSZoneName" : { "type" : "string", "metadata" : { "description" : "Name of the private DNS zone" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]", "masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]", "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", "masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]", "skuName": "Standard" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('masterPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('masterPublicIpAddressName')]" } } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('masterLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "dependsOn" : [ "[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]" ], "properties" : { "frontendIPConfigurations" : [ { "name" : "public-lb-ip", "properties" : { "publicIPAddress" : { "id" : "[variables('masterPublicIpAddressID')]" } } } ], "backendAddressPools" : [ { "name" : "public-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]" }, "backendAddressPool" : { "id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-backend')]" }, "protocol" : "Tcp", "loadDistribution" : "Default", "idleTimeoutInMinutes" : 30, "frontendPort" : 6443, "backendPort" : 6443, "probe" : { "id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : 
"[variables('internalLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "frontendIPConfigurations" : [ { "name" : "internal-lb-ip", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "privateIPAddressVersion" : "IPv4" } } ], "backendAddressPools" : [ { "name" : "internal-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 6443, "backendPort" : 6443, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]" } } }, { "name" : "sint", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 22623, "backendPort" : 22623, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } }, { "name" : "sint-probe", "properties" : { "protocol" : "Https", "port" : 22623, "requestPath": "/healthz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api-int')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } } ] } 6.11.16. Creating the bootstrap machine in Azure You must create the bootstrap machine in Microsoft Azure to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. 
Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. This template describes the bootstrap machine that your cluster requires. Export the bootstrap URL variable: USD bootstrap_url_expiry=`date -u -d "10 hours" '+%Y-%m-%dT%H:%MZ'` USD export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv` Export the bootstrap ignition variable: USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The bootstrap Ignition content for the bootstrap cluster. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 6.11.16.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 6.10. 04_bootstrap.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "bootstrapIgnition" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Bootstrap ignition content for the bootstrap cluster" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "bootstrapVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the Bootstrap Virtual Machine" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "vmName" : "[concat(parameters('baseName'), '-bootstrap')]", "nicName" : "[concat(variables('vmName'), '-nic')]", "imageName" : "[concat(parameters('baseName'), '-image')]", "clusterNsgName" : 
"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]", "sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('sshPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "Standard" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('sshPublicIpAddressName')]" } } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[variables('nicName')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" ], "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" }, "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmName')]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('bootstrapVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmName')]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('bootstrapIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmName'),'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : 100 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" } ] } } }, { "apiVersion" : "2018-06-01", "type": "Microsoft.Network/networkSecurityGroups/securityRules", "name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" ], "properties": { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "22", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 100, "direction" : "Inbound" } } ] } 6.11.17. Creating the control plane machines in Azure You must create the control plane machines in Microsoft Azure for your cluster to use. 
One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's Azure Resource Manager (ARM) template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The Ignition content for the control plane nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 6.11.17.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 6.11. 
05_masters.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "masterIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the master nodes" } }, "numberOfMasters" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift masters to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "privateDNSZoneName" : { "type" : "string", "defaultValue" : "", "metadata" : { "description" : "unused" } }, "masterVMSize" : { "type" : "string", "defaultValue" : "Standard_D8s_v3", "metadata" : { "description" : "The size of the Master Virtual Machines" } }, "diskSizeGB" : { "type" : "int", "defaultValue" : 1024, "metadata" : { "description" : "Size of the Master VM OS disk, in GB" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "imageName" : "[concat(parameters('baseName'), '-image')]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfMasters')]", "input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]" } ] }, "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "copy" : { "name" : "nicCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "copy" : { "name" : "vmCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[variables('vmNames')[copyIndex()]]", "location" : 
"[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('masterVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('masterIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "caching": "ReadOnly", "writeAcceleratorEnabled": false, "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : "[parameters('diskSizeGB')]" } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": false } } ] } } } ] } 6.11.18. Wait for bootstrap completion and remove bootstrap resources in Azure After you create all of the required infrastructure in Microsoft Azure, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 6.11.19. 
Creating additional worker machines in Azure You can create worker machines in Microsoft Azure for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's ARM template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The Ignition content for the worker nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 6.11.19.1. ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 6.12. 
06_workers.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "workerIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the worker nodes" } }, "numberOfNodes" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift compute nodes to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "nodeVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the each Node Virtual Machine" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "nodeSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]", "nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]", "infraLoadBalancerName" : "[parameters('baseName')]", "sshKeyPath" : "/home/capi/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "imageName" : "[concat(parameters('baseName'), '-image')]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfNodes')]", "input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]" } ] }, "resources" : [ { "apiVersion" : "2019-05-01", "name" : "[concat('node', copyIndex())]", "type" : "Microsoft.Resources/deployments", "copy" : { "name" : "nodeCopy", "count" : "[length(variables('vmNames'))]" }, "properties" : { "mode" : "Incremental", "template" : { "USDschema" : "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('nodeSubnetRef')]" } } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "tags" : { "kubernetes.io-cluster-ffranzupi": "owned" }, "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('nodeVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "capi", "adminPassword" : "NotActuallyApplied!", "customData" : 
"[parameters('workerIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB": 128 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": true } } ] } } } ] } } } ] } 6.11.20. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 6.11.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. 
Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.11.22. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 6.11.23. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install or update the Azure CLI . Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the public DNS zone. 
If you are adding this cluster to a new public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 Add a *.apps record to the private DNS zone: Create a *.apps record by using the following command: USD az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300 Add the *.apps record to the private DNS zone by using the following command: USD az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 6.11.24. Completing an Azure installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.11.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . 
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 6.12. Uninstalling a cluster on Azure You can remove a cluster that you deployed to Microsoft Azure. 6.12.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. While you can uninstall the cluster using the copy of the installation program that was used to deploy it, using OpenShift Container Platform version 4.13 or later is recommended. The removal of service principals is dependent on the Microsoft Azure AD Graph API. Using version 4.13 or later of the installation program ensures that service principals are removed without the need for manual intervention, if and when Microsoft decides to retire the Azure AD Graph API. Procedure On the computer that you used to install the cluster, go to the directory that contains the installation program, and run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
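"# Illustrative post-uninstall checks (assumptions, not part of the documented procedure): confirm the cluster resource group was removed",
"az group exists --name <resource_group>",
"# Illustrative: list any leftover resources that still carry a cluster ownership tag; the tag follows the kubernetes.io-cluster-<cluster_name> pattern used in the worker ARM template above",
"az resource list --tag kubernetes.io-cluster-<cluster_name>=owned -o table",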
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1",
"openshift-install create install-config --dir <installation_directory>",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component-secret> namespace: <component-namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"grep \"release.openshift.io/feature-gate\" *",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade",
"openshift-install create cluster --dir <installation_directory>",
"export RESOURCEGROUP=\"<resource_group>\" \\ 1 LOCATION=\"<location>\" 2",
"export KEYVAULT_NAME=\"<keyvault_name>\" \\ 1 KEYVAULT_KEY_NAME=\"<keyvault_key_name>\" \\ 2 DISK_ENCRYPTION_SET_NAME=\"<disk_encryption_set_name>\" 3",
"export CLUSTER_SP_ID=\"<service_principal_id>\" 1",
"az feature register --namespace \"Microsoft.Compute\" --name \"EncryptionAtHost\"",
"az feature show --namespace Microsoft.Compute --name EncryptionAtHost",
"az provider register -n Microsoft.Compute",
"az group create --name USDRESOURCEGROUP --location USDLOCATION",
"az keyvault create -n USDKEYVAULT_NAME -g USDRESOURCEGROUP -l USDLOCATION --enable-purge-protection true",
"az keyvault key create --vault-name USDKEYVAULT_NAME -n USDKEYVAULT_KEY_NAME --protection software",
"KEYVAULT_ID=USD(az keyvault show --name USDKEYVAULT_NAME --query \"[id]\" -o tsv)",
"KEYVAULT_KEY_URL=USD(az keyvault key show --vault-name USDKEYVAULT_NAME --name USDKEYVAULT_KEY_NAME --query \"[key.kid]\" -o tsv)",
"az disk-encryption-set create -n USDDISK_ENCRYPTION_SET_NAME -l USDLOCATION -g USDRESOURCEGROUP --source-vault USDKEYVAULT_ID --key-url USDKEYVAULT_KEY_URL",
"DES_IDENTITY=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[identity.principalId]\" -o tsv)",
"az keyvault set-policy -n USDKEYVAULT_NAME -g USDRESOURCEGROUP --object-id USDDES_IDENTITY --key-permissions wrapkey unwrapkey get",
"DES_RESOURCE_ID=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[id]\" -o tsv)",
"az role assignment create --assignee USDCLUSTER_SP_ID --role \"<reader_role>\" \\ 1 --scope USDDES_RESOURCE_ID -o jsonc",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: azure: type: Standard_D4s_v5 osImage: publisher: redhat offer: rh-ocp-worker sku: rh-ocp-worker version: 4.8.2021122100 replicas: 3",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 14 fips: false 15 sshKey: ssh-ed25519 AAAA... 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"az identity list --resource-group \"<existing_resource_group>\"",
"az group list",
"az identity list --resource-group \"<installer_created_resource_group>\"",
"az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2",
"az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2",
"az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv",
"az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role 'Contributor' \\// --scope <disk_encryption_set_id> \\ 2",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer",
"oc create -f storage-class-definition.yaml",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 15 fips: false 16 sshKey: ssh-ed25519 AAAA... 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"az identity list --resource-group \"<existing_resource_group>\"",
"az group list",
"az identity list --resource-group \"<installer_created_resource_group>\"",
"az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2",
"az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2",
"az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv",
"az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role 'Contributor' \\// --scope <disk_encryption_set_id> \\ 2",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer",
"oc create -f storage-class-definition.yaml",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 20",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"az identity list --resource-group \"<existing_resource_group>\"",
"az group list",
"az identity list --resource-group \"<installer_created_resource_group>\"",
"az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2",
"az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2",
"az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv",
"az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role 'Contributor' \\// --scope <disk_encryption_set_id> \\ 2",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer",
"oc create -f storage-class-definition.yaml",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: UserDefinedRouting 18 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"az identity list --resource-group \"<existing_resource_group>\"",
"az group list",
"az identity list --resource-group \"<installer_created_resource_group>\"",
"az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2",
"az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2",
"az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv",
"az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role 'Contributor' \\// --scope <disk_encryption_set_id> \\ 2",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer",
"oc create -f storage-class-definition.yaml",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 11 region: usgovvirginia resourceGroupName: existing_resource_group 12 networkResourceGroupName: vnet_resource_group 13 virtualNetwork: vnet 14 controlPlaneSubnet: control_plane_subnet 15 computeSubnet: compute_subnet 16 outboundType: UserDefinedRouting 17 cloudName: AzureUSGovernmentCloud 18 pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"az identity list --resource-group \"<existing_resource_group>\"",
"az group list",
"az identity list --resource-group \"<installer_created_resource_group>\"",
"az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2",
"az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2",
"az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv",
"az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role 'Contributor' \\// --scope <disk_encryption_set_id> \\ 2",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer",
"oc create -f storage-class-definition.yaml",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"\"plan\" : { \"name\": \"rh-ocp-worker\", \"product\": \"rh-ocp-worker\", \"publisher\": \"redhat\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"storageProfile\": { \"imageReference\": { \"offer\": \"rh-ocp-worker\", \"publisher\": \"redhat\", \"sku\": \"rh-ocp-worker\", \"version\": \"4.8.2021122100\" } } }",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.11.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity",
"export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`",
"export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.\"rhel-coreos-extensions\".\"azure-disk\".url'`",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : \"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vhdBlobURL\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"imageNameGen2\" : \"[concat(parameters('baseName'), '-gen2')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Compute/images\", \"name\": \"[variables('imageName')]\", \"location\" : \"[variables('location')]\", \"properties\": { \"storageProfile\": { \"osDisk\": { \"osType\": \"Linux\", \"osState\": \"Generalized\", \"blobUri\": \"[parameters('vhdBlobURL')]\", \"storageAccountType\": \"Standard_LRS\" } } } }, { \"apiVersion\": \"2020-12-01\", \"type\": \"Microsoft.Compute/images\", \"name\": \"[variables('imageNameGen2')]\", \"location\": \"[variables('location')]\", \"properties\": { \"hyperVGeneration\": \"V2\", \"storageProfile\": { \"osDisk\": { \"osType\": \"Linux\", \"osState\": \"Generalized\", \"blobUri\": \"[parameters('vhdBlobURL')]\", \"storageAccountType\": \"Standard_LRS\" } } } } ] }",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"public-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-backend')]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 
6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"clusterNsgName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]\" }, { \"id\" : 
\"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/images', variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"defaultValue\" : \"\", \"metadata\" : { \"description\" : \"unused\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" 
: \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/images', variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": false } } ] } } } ] }",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], 
\"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/images', variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300",
"az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/installing/installing-on-azure |
Chapter 2. Installing Cryostat on Red Hat OpenShift by using a Red Hat build of Cryostat Operator | Chapter 2. Installing Cryostat on Red Hat OpenShift by using a Red Hat build of Cryostat Operator You can use the Operator Lifecycle Manager (OLM) to install the Red Hat build of Cryostat Operator in a project on your Red Hat OpenShift cluster. You can use the Red Hat build of Cryostat Operator to create single namespace or multi-namespace Cryostat instances. You can control these instances by using a GUI that is accessible from the Red Hat OpenShift web console. Important If you need to upgrade your Red Hat build of Cryostat Operator subscription from Cryostat 2.0 to Cryostat 2.4, you must change the update channel from stable-2.0 to stable . Prerequisites Created an OpenShift Container Platform 4.11 or later cluster. Created a Red Hat OpenShift user account with permissions to install Red Hat build of Cryostat Operator in a project. Installed Operator Lifecycle Manager (OLM) on your cluster. Installed cert-manager with the cert-manager Operator for Red Hat OpenShift. If you are using OpenShift Container Platform 4.11 or later, you can install the cert-manager Operator for Red Hat OpenShift. For more information, see cert-manager Operator for Red Hat OpenShift (OpenShift Container Platform) . Logged in to Red Hat OpenShift by using the Red Hat OpenShift web console. Procedure In your browser, navigate to Home > Projects by using the web console. Select the name of the project in which you want to install the Red Hat build of Cryostat Operator. Install the Red Hat build of Cryostat Operator: In the navigation menu of your web console, navigate to Operators > OperatorHub . Select the Red Hat build of Cryostat Operator from the list. You can use the search box in the upper part of the screen to find the Red Hat build of Cryostat Operator. To install the Red Hat build of Cryostat Operator in your project, click Install . The Red Hat OpenShift web console prompts you to create a Cryostat custom resource (CR). Note If you are installing a Cryostat instance that is enabled for multiple namespaces, in the Installation mode area, click the All namespaces on the cluster (default) radio button. You can create the CR either manually or automatically. If you want to create the CR manually, see step 4. If you want to create the CR automatically, see step 5. If you want to create the CR manually, complete the following steps: Navigate to Operators > Installed Operators by using the web console and select Red Hat build of Cryostat Operator from the list of installed operators: Figure 2.1. Viewing the Red Hat build of Cryostat operator in the list of installed operators Click the Details tab. To create a single-namespace Cryostat instance, go to the Provided APIs section. Then, under Cryostat , click Create instance . Note If you want to create a Cryostat instance that is enabled for multiple namespaces, in the Provided APIs section, select Cluster Cryostat and click Create instance . The Cluster Cryostat API has configuration options that control the deployment of the Cryostat application and its related components. For more information, see Creating Cryostat on multiple namespaces . Figure 2.2. Selecting the Cryostat API that is provided by the Red Hat build of Cryostat Operator Click either the Form view radio button or the YAML view radio button. If you want to enter your information in the YAML configuration file, click YAML view . Specify a name for the instance of Cryostat that you want to create. 
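For reference, a minimal Cryostat CR as it might appear in the YAML view is sketched below. This is a hedged example rather than text from this guide: the name cryostat-sample is a placeholder, and the operator.cryostat.io/v1beta1 API version and enableCertManager field follow the operator's published schema.
apiVersion: operator.cryostat.io/v1beta1
kind: Cryostat
metadata:
  name: cryostat-sample
spec:
  enableCertManager: true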
Optional: In the Labels field, specify a label or annotation for the Operand workload you want to deploy. You can also specify additional configuration options for your deployment: Figure 2.3. Creating an instance of Cryostat by using a form in the web console Alternatively, you can use a YAML template to create your instance and specify additional configuration options instead of using the form: Figure 2.4. Creating an instance of Cryostat by using a YAML template in the web console If you want to create the CR by using the automatic prompt option, follow the prompt's instructions and then complete the following steps: Click either the Form view radio button or the YAML view radio button. If you want to enter your information in the YAML configuration file, click YAML view. Specify a name for the instance of Cryostat that you want to create. Optional: In the Labels field, specify a label or annotation for the Operand workload you want to deploy. You can also specify additional configuration options for your deployment: Figure 2.5. Creating an instance of Cryostat by using a form in the web console Alternatively, you can use a YAML template to create your instance and specify additional configuration options instead of using the form: Figure 2.6. Creating an instance of Cryostat by using a YAML template in the web console To start the creation process for your Cryostat instance, click Create. You must wait for all resources of your Cryostat instance to be ready before you can access it. Verification In the navigation menu of the web console, click Operators, then click Installed Operators. From the table of installed operators, select Red Hat build of Cryostat Operator. Select the Cryostat tab. Your Cryostat instance opens in the table of instances and lists the following conditions: TLSSetupComplete is set to true. MainDeploymentAvailable is set to true. Optional: If you enabled the reports generator service, then ReportsDeploymentAvailable is shown and set to true. Figure 2.7. Example of conditions set to True under the Status column for a Cryostat instance on OpenShift Optional: Select your Cryostat instance from the Cryostat table. Go to the Cryostat Conditions table, where you can see more information for each condition. Figure 2.8. Example of a Cryostat Conditions table that lists each condition and its criteria Next steps Accessing Cryostat by using the web console 2.1. Creating Cryostat on multiple namespaces The Red Hat build of Cryostat Operator provides the Cluster Cryostat API, which you can use to create Cryostat instances that work across multiple namespaces. Prerequisites Created an OpenShift Container Platform 4.11 or later cluster. Created a Red Hat OpenShift user account with permissions to install Red Hat build of Cryostat Operator in a project. Installed the Operator Lifecycle Manager (OLM) on your cluster. Installed cert-manager by using the cert-manager Operator for Red Hat OpenShift. If you are using OpenShift Container Platform 4.11 or later, you can install the cert-manager Operator for Red Hat OpenShift. For more information, see cert-manager Operator for Red Hat OpenShift (OpenShift Container Platform). Logged in to Red Hat OpenShift by using the Red Hat OpenShift web console. Procedure In your browser, navigate to Home > Projects by using the web console. Select the name of the project where you want to install the Red Hat build of Cryostat Operator. Install the Red Hat build of Cryostat Operator.
The Red Hat OpenShift web console prompts you to create a Cryostat custom resource (CR). To create the CR, complete the following steps: Navigate to Operators > Installed Operators by using the web console and select Red Hat build of Cryostat Operator from the list of installed operators: Figure 2.9. Viewing the Red Hat build of Cryostat Operator in the list of installed operators Click the Details tab. To create a multi-namespace instance of Cryostat, go to the Provided APIs section. Then, under Cluster Cryostat, click Create instance. Figure 2.10. Selecting the Cluster Cryostat API that is provided by the Red Hat build of Cryostat Operator Click either the Form view radio button or the YAML view radio button. If you want to enter your information in the YAML configuration file, click YAML view. Specify a unique name for the Cluster Cryostat instance that you want to create. Note Ensure that the name you specify for your Cluster Cryostat instance is unique and does not conflict with the name of any single-namespace Cryostat instances that might already be created in the install namespace or target namespaces of the Cluster Cryostat instance. Optional: In the Labels field, specify a label or annotation for the Operand workload you want to deploy. You can also specify additional configuration options for your deployment: Figure 2.11. Creating a Cluster Cryostat instance by using a form in the web console In the Install Namespace field, select a namespace where you want to install this instance of Cryostat. Tip The Red Hat build of Cryostat Operator uses a larger set of permissions compared to the Cryostat application, and Cryostat might have more permissions than your target workloads. Therefore, for optimal security, install the Cryostat instance into a different namespace from where the Red Hat build of Cryostat Operator is installed and from where your target workloads are located. In the Target Namespaces field, select namespaces whose workloads you want to permit this instance of Cryostat to access and work with. Optionally, you can select the same namespace where you installed Cryostat or you can choose a different namespace. To add additional namespaces, click +Add Target Namespace. Important Users who can access the Cryostat instance have access to all target applications in any namespace that is visible to that Cryostat instance. Therefore, when you deploy a multi-namespace Cryostat instance, you must consider which namespaces to select for monitoring, which namespace to install Cryostat into, and which users you want to grant access to. Alternatively, you can use a YAML template to create your instance and specify additional configuration options instead of using the form: Figure 2.12. Creating a Cluster Cryostat instance by using a YAML template in the web console Click Create to start the creation process for your Cryostat multi-namespace instance. You must wait for all resources of your Cluster Cryostat instance to be ready before you can access the instance. A minimal example of the resulting CR is sketched after this procedure.
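For reference, a minimal Cluster Cryostat CR as it might appear in the YAML view is sketched below. This is a hedged example rather than text from this guide: the names and namespaces are placeholders, and the installNamespace and targetNamespaces fields follow the operator's published ClusterCryostat schema.
apiVersion: operator.cryostat.io/v1beta1
kind: ClusterCryostat
metadata:
  name: clustercryostat-sample
  namespace: cryostat
spec:
  installNamespace: cryostat
  targetNamespaces:
    - app-namespace-1
    - app-namespace-2
  enableCertManager: true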
Verification In the navigation menu of the web console, navigate to Operators > Installed Operators. From the table of installed operators, select Red Hat build of Cryostat Operator. Click the Cluster Cryostat tab. Your Cryostat instance opens in the table of instances and lists the following conditions: TLSSetupComplete is set to true. MainDeploymentAvailable is set to true. Optional: If you enabled the reports generator service, ReportsDeploymentAvailable is shown and set to true. Figure 2.13. Example of conditions set to True under the Status column for a Cluster Cryostat instance on OpenShift Optional: From the Cluster Cryostat table, select your Cryostat instance. Go to the Conditions table to view more information for each condition. Figure 2.14. Example of a Cryostat Conditions table that lists each condition and its criteria Additional resources Best practices for setting up Cryostat in different cluster configurations (Red Hat Knowledgebase) Accessing Cryostat by using the web console | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/getting_started_with_cryostat/installing-cryostat-on-openshift-using-an-operator_cryostat |
E.3. How to Identify and Assign IOMMU Groups | E.3. How to Identify and Assign IOMMU Groups This example demonstrates how to identify and assign the PCI devices that are present on the target system. For additional examples and information, see Section 16.7, "Assigning GPU Devices". Procedure E.1. IOMMU groups List the devices Identify the devices in your system by running the virsh nodedev-list device-type command. This example demonstrates how to locate the PCI devices. The output has been truncated for brevity. Locate the IOMMU grouping of a device For each device listed, further information about the device, including the IOMMU grouping, can be found using the virsh nodedev-dumpxml name-of-device command. For example, to find the IOMMU grouping for the PCI device named pci_0000_04_00_0 (PCI address 0000:04:00.0), use the following command: This command generates an XML dump similar to the one shown. <device> <name>pci_0000_04_00_0</name> <path>/sys/devices/pci0000:00/0000:00:1c.0/0000:04:00.0</path> <parent>pci_0000_00_1c_0</parent> <capability type='pci'> <domain>0</domain> <bus>4</bus> <slot>0</slot> <function>0</function> <product id='0x10d3'>82574L Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='8'> <!--This is the element block you will need to use--> <address domain='0x0000' bus='0x00' slot='0x1c' function='0x0'/> <address domain='0x0000' bus='0x00' slot='0x1c' function='0x4'/> <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </iommuGroup> <pci-express> <link validity='cap' port='0' speed='2.5' width='1'/> <link validity='sta' speed='2.5' width='1'/> </pci-express> </capability> </device> Figure E.1. IOMMU Group XML View the PCI data In the output collected above, there is one IOMMU group with 4 devices. This is an example of a multi-function PCIe root port without ACS support. The two functions in slot 0x1c are PCIe root ports, which can be identified by running the lspci command (from the pciutils package): Repeat this step for the two PCIe devices on buses 0x04 and 0x05, which are endpoint devices. Assign the endpoints to the guest virtual machine To assign either one of the endpoints to a virtual machine, the endpoint that you are not assigning must be bound to a VFIO-compatible driver so that the IOMMU group is not split between user and host drivers. If, for example, using the output received above, you were to configure a virtual machine with only 04:00.0, the virtual machine will fail to start unless 05:00.0 is detached from host drivers. To detach 05:00.0, run the virsh nodedev-detach command as root: Assigning both endpoints to the virtual machine is another option for resolving this issue. Note that libvirt will automatically perform this operation for the attached devices when using the yes value for the managed attribute within the <hostdev> element. For example: <hostdev mode='subsystem' type='pci' managed='yes'>; a fuller example entry appears in the command listing below. See the Note for more information. Note libvirt has two ways to handle PCI devices. They can be either managed or unmanaged. This is determined by the value given to the managed attribute within the <hostdev> element. When the device is managed, libvirt automatically detaches the device from the existing driver and then assigns it to the virtual machine by binding it to vfio-pci on boot (for the virtual machine).
When the virtual machine is shut down or deleted, or the PCI device is detached from the virtual machine, libvirt unbinds the device from vfio-pci and rebinds it to the original driver. If the device is unmanaged, libvirt will not automate the process: you must ensure that all of these management steps are performed before assigning the device to a virtual machine, and after the device is no longer used by the virtual machine you must reassign the device as well. Failure to perform these actions with an unmanaged device will cause the virtual machine to fail. Therefore, it may be easier to make sure that libvirt manages the device. | [
"virsh nodedev-list pci pci_0000_00_00_0 pci_0000_00_01_0 pci_0000_00_03_0 pci_0000_00_07_0 [...] pci_0000_00_1c_0 pci_0000_00_1c_4 [...] pci_0000_01_00_0 pci_0000_01_00_1 [...] pci_0000_03_00_0 pci_0000_03_00_1 pci_0000_04_00_0 pci_0000_05_00_0 pci_0000_06_0d_0",
"virsh nodedev-dumpxml pci_0000_04_00_0",
"<device> <name>pci_0000_04_00_0</name> <path>/sys/devices/pci0000:00/0000:00:1c.0/0000:04:00.0</path> <parent>pci_0000_00_1c_0</parent> <capability type='pci'> <domain>0</domain> <bus>4</bus> <slot>0</slot> <function>0</function> <product id='0x10d3'>82574L Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='8'> <!--This is the element block you will need to use--> <address domain='0x0000' bus='0x00' slot='0x1c' function='0x0'/> <address domain='0x0000' bus='0x00' slot='0x1c' function='0x4'/> <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </iommuGroup> <pci-express> <link validity='cap' port='0' speed='2.5' width='1'/> <link validity='sta' speed='2.5' width='1'/> </pci-express> </capability> </device>",
"lspci -s 1c 00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 1 00:1c.4 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 5",
"lspci -s 4 04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection This is used in the next step and is called 04:00.0 lspci -s 5 This is used in the next step and is called 05:00.0 05:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5755 Gigabit Ethernet PCI Express (rev 02)",
"virsh nodedev-detach pci_0000_05_00_0 Device pci_0000_05_00_0 detached"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/iommu-example |
7.112. libguestfs | 7.112. libguestfs 7.112.1. RHBA-2013:0324 - libguestfs bug fix and enhancement update Updated libguestfs packages that fix numerous bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libguestfs packages contain a library, which is used for accessing and modifying guest disk images. Bug Fixes BZ# 801640 Previously, when using the resize2fs -M command and an error due to lack of free space occurred, the returned error message was incorrect and could confuse the user. With this update, a proper error message is returned instead. BZ# 822626 Due to a bug in the source code, an error occurred when using the virt-ls --checksum command and the following error message was returned: The underlying source code has been modified and virt-ls --checksum now works as expected. BZ#830369 Due to a bug in the guestfs_inspect_get_hostname() function, the libguestfs-based commands did not work properly when an empty /etc/HOSTNAME file was created on a Linux guest. This update applies a patch to fix this bug and the libguestfs-based commands now work in the described scenario. BZ#836573 Previously, the libguestfs library did not handle the /dev/disk/by-id/* paths. Consequently, it was impossible to examine a guest using commands with such a path and an error message was returned. With this update, a patch has been applied to fix this bug and the libguestfs library no longer returns an error in this situation. BZ# 837691 Previously, under certain conditions, writing to disks in the qcow2 format could cause silent data loss. The underlying source code has been modified to prevent this behavior and writing to disks in the qcow2 format now works as expected. BZ# 838609 Due to a race condition between the guestmount and the fusermount tools, unmounting and then immediately using a disk image was not safe and could cause data loss or memory corruption. This update adds the new --pid-file option for guestmount to avoid the race condition between these tools, and attempts to use disk images immediately after unmounting can no longer cause data loss or memory corruption. A usage sketch of the new option appears in the listing below. BZ#852396 Previously, the libguestfs library limited the total size of downloaded hive files from a Windows Registry to 100 MB. Consequently, an attempt to inspect systems with a large amount of hive files caused libguestfs to return an error message. With this update, the limit was increased to 300 MB and libguestfs can now inspect a larger Windows Registry properly. BZ# 853763 Previously, using the file utility to detect the format of a disk image could produce different output for different versions of this utility. The underlying source code has been modified and output is now the same for all versions of the file utility. BZ# 858126 Due to a bug in the underlying source code, the virt-inspector tool failed to work with certain Windows guests. This update applies a patch to fix this bug and virt-inspector now supports all Windows guests as expected. BZ#872454 Previously, the libguestfs library detected the Red Hat Enterprise Linux 5.1 guests as NetBSD guests. This update applies a patch to fix this bug and libguestfs now detects Red Hat Enterprise Linux 5.1 guests correctly.
BZ# 880805 The virt-df command with -a or -d arguments works correctly only with a single guest. An attempt to use this command with multiple arguments, such as virt-df -a RHEL-Server-5.9-32-pv.raw -a opensuse.img , caused the disk image names to be displayed incorrectly. With this update, the plus sign ( " + " ) is displayed for each additional disk, so that the user can easily recognize them. In addition, the correct usage of the virt-df command has been described in the virt-df(1) man page. Enhancements BZ# 830135 This enhancement improves the libguestfs library to support mount-local APIs. BZ# 836501 With this update, the dependency on the fuse packages has been added to libguestfs dependencies. All users of libguestfs are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | [
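"# Hedged sketch, not part of the original erratum: use the new guestmount --pid-file option to wait for the FUSE process to exit before reusing the disk image guestmount -a disk.img -i --pid-file guestmount.pid /mnt/guest fusermount -u /mnt/guest pid=$(cat guestmount.pid) while kill -0 \"$pid\" 2>/dev/null; do sleep 1; done",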
"libguestfs: error: checksum: path: parameter cannot be NULL"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/libguestfs |
Replacing nodes | Replacing nodes Red Hat OpenShift Data Foundation 4.18 Instructions for how to safely replace a node in an OpenShift Data Foundation cluster. Red Hat Storage Documentation Team Abstract This document explains how to safely replace a node in a Red Hat OpenShift Data Foundation cluster. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/replacing_nodes/index |
6.4. Using Indexes to Improve Database Performance | 6.4. Using Indexes to Improve Database Performance Searches performed by client applications can be time and resource intensive, depending on the size of the databases. To help alleviate this problem, use indexes to improve search performance. Indexes are files stored in the directory databases. Separate index files are maintained for each database in the directory service. Each file is named according to the attribute it indexes. The index file for a particular attribute can contain multiple types of indexes, so several types of index can be maintained for each attribute. For example, a file called cn.db contains all of the indexes for the common name attribute. Different types of indexes are used depending on the types of applications that use the directory service. Different applications may frequently search for a particular attribute, or may search the directory in a different language, or may require data in a particular format. 6.4.1. Overview of Directory Index Types Directory Server supports the following types of index: Presence index - Lists entries that possess a particular attribute, such as uid . Equality index - Lists entries that contain a specific attribute value, such as cn=Babs Jensen . Approximate index - Allows approximate (or "sounds-like") searches. For example, an entry might contain the attribute value of cn=Babs L. Jensen . An approximate search would return this value for searches against cn~=Babs Jensen , cn~=Babs , and cn~=Jensen . Note Approximate indexes require that names be written in English using ASCII characters. Substring index - Allows searches against substrings within entries. For example, a search for cn=*derson would match common names containing this string (such as Bill Anderson, Norma Henderson, and Steve Sanderson). International index - Improves the performance of searches for information in international directories. Configure the index to apply a matching rule by associating a locale (internationalization OID) with the attribute being indexed. Browsing index or virtual list view (VLV) index - Improves the display performance of entries in the web console. A browsing index can be created on any branch in the directory tree to improve the display performance. 6.4.2. Evaluating the Costs of Indexing Indexes improve search performance in the directory databases, but there is a cost involved: Indexes increase the time it takes to modify entries. The more indexes being maintained, the longer it takes the directory service to update the database. Index files use disk space. The more attributes being indexed, the more files are created. If there are approximate and substring indexes for attributes that contain long strings, these files can grow rapidly. Index files use memory. To run more efficiently, the directory service places as many index files in memory as possible. Index files use memory out of the pool available depending upon the database cache size. A large number of index files requires a larger database cache. Index files take time to create. Although index files save time during searches, maintaining unnecessary indexes can waste time. Be certain to maintain only the files needed by the client applications using the directory service. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_the_Directory_Topology-Using_Indexes_to_Improve_Database_Performance |
Chapter 1. Preparing your environment for installation | Chapter 1. Preparing your environment for installation Before you install Satellite, ensure that your environment meets the following requirements. 1.1. System requirements The following requirements apply to the networked base operating system: x86_64 architecture The latest version of Red Hat Enterprise Linux 8 4-core 2.0 GHz CPU at a minimum A minimum of 20 GB RAM is required for Satellite Server to function. In addition, a minimum of 4 GB RAM of swap space is also recommended. Satellite running with less RAM than the minimum value might not operate correctly. A unique host name, which can contain lower-case letters, numbers, dots (.) and hyphens (-) A current Red Hat Satellite subscription Administrative user (root) access Full forward and reverse DNS resolution using a fully-qualified domain name Satellite only supports UTF-8 encoding. If your territory is USA and your language is English, set en_US.utf-8 as the system-wide locale settings. For more information about configuring system locale in Red Hat Enterprise Linux, see Configuring System Locale guide . Your Satellite must have the Red Hat Satellite Infrastructure Subscription manifest in your Customer Portal. Satellite must have satellite-capsule-6.x repository enabled and synced. To create, manage, and export a Red Hat Subscription Manifest in the Customer Portal, see Creating and managing manifests for a connected Satellite Server in Subscription Central . Satellite Server and Capsule Server do not support shortnames in the hostnames. When using custom certificates, the Common Name (CN) of the custom certificate must be a fully qualified domain name (FQDN) instead of a shortname. This does not apply to the clients of a Satellite. Before you install Satellite Server, ensure that your environment meets the requirements for installation. Satellite Server must be installed on a freshly provisioned system that serves no other function except to run Satellite Server. The freshly provisioned system must not have the following users provided by external identity providers to avoid conflicts with the local users that Satellite Server creates: apache foreman foreman-proxy postgres pulp puppet redis tomcat Certified hypervisors Satellite Server is fully supported on both physical systems and virtual machines that run on hypervisors that are supported to run Red Hat Enterprise Linux. For more information about certified hypervisors, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, Red Hat OpenShift Virtualization and Red Hat Enterprise Linux with KVM . SELinux mode SELinux must be enabled, either in enforcing or permissive mode. Installation with disabled SELinux is not supported. FIPS mode You can install Satellite on a Red Hat Enterprise Linux system that is operating in FIPS mode. You cannot enable FIPS mode after the installation of Satellite. For more information, see Switching RHEL to FIPS mode in Red Hat Enterprise Linux 8 Security hardening . Note Satellite supports DEFAULT and FIPS crypto-policies. The FUTURE crypto-policy is not supported for Satellite and Capsule installations. The FUTURE policy is a stricter forward-looking security level intended for testing a possible future policy. For more information, see Using system-wide cryptographic policies in the Red Hat Enterprise Linux guide. 
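As a hedged illustration of verifying the host name and locale requirements above (the example output values are assumptions, not part of this guide), you can run checks such as the following before starting the installation:
# hostname -f
satellite.example.com
# ping -c1 $(hostname -f)
# localectl status | grep "System Locale"
System Locale: LANG=en_US.utf-8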
Inter-Satellite Synchronization (ISS) In a scenario with air-gapped Satellite Servers, all your Satellite Servers must be on the same Satellite version for ISS Export Sync to work. ISS Network Sync works across all Satellite versions that support it. For more information, see Synchronizing Content Between Satellite Servers in Managing content . 1.2. Storage requirements The following table details storage requirements for specific directories. These values are based on expected use case scenarios and can vary according to individual environments. The runtime size was measured with Red Hat Enterprise Linux 6, 7, and 8 repositories synchronized. Table 1.1. Storage requirements for a Satellite Server installation Directory Installation Size Runtime Size /var/log 10 MB 10 GB /var/lib/pgsql 100 MB 20 GB /usr 10 GB Not Applicable /opt/puppetlabs 500 MB Not Applicable /var/lib/pulp 1 MB 300 GB For external database servers: /var/lib/pgsql with installation size of 100 MB and runtime size of 20 GB. For detailed information on partitioning and size, see Partitioning reference in the Red Hat Enterprise Linux 8 System Design Guide . 1.3. Storage guidelines Consider the following guidelines when installing Satellite Server to increase efficiency. If you mount the /tmp directory as a separate file system, you must use the exec mount option in the /etc/fstab file. If /tmp is already mounted with the noexec option, you must change the option to exec and re-mount the file system. This is a requirement for the puppetserver service to work. Because most Satellite Server data is stored in the /var directory, mounting /var on LVM storage can help the system to scale. Use high-bandwidth, low-latency storage for the /var/lib/pulp/ directories. As Red Hat Satellite has many operations that are I/O intensive, using high latency, low-bandwidth storage causes performance degradation. Ensure your installation has a speed in the range 60 - 80 Megabytes per second. You can use the storage-benchmark script to get this data. For more information on using the storage-benchmark script, see Impact of Disk Speed on Satellite Operations . File system guidelines Do not use the GFS2 file system as the input-output latency is too high. Log file storage Log files are written to /var/log/messages/, /var/log/httpd/ , and /var/lib/foreman-proxy/openscap/content/ . You can manage the size of these files using logrotate . For more information, see How to use logrotate utility to rotate log files . The exact amount of storage you require for log messages depends on your installation and setup. SELinux considerations for NFS mount When the /var/lib/pulp directory is mounted using an NFS share, SELinux blocks the synchronization process. To avoid this, specify the SELinux context of the /var/lib/pulp directory in the file system table by adding the following lines to /etc/fstab : If NFS share is already mounted, remount it using the above configuration and enter the following command: Duplicated packages Packages that are duplicated in different repositories are only stored once on the disk. Additional repositories containing duplicate packages require less additional storage. The bulk of storage resides in the /var/lib/pulp/ directory. These end points are not manually configurable. Ensure that storage is available on the /var file system to prevent storage problems. Symbolic links You cannot use symbolic links for /var/lib/pulp/ . 
Synchronized RHEL ISO If you plan to synchronize RHEL content ISOs to Satellite, note that all minor versions of Red Hat Enterprise Linux also synchronize. You must plan to have adequate storage on your Satellite to manage this. 1.4. Supported operating systems You can install the operating system from a disc, local ISO image, kickstart, or any other method that Red Hat supports. Red Hat Satellite Server is supported on the latest version of Red Hat Enterprise Linux 8 that is available at the time when Satellite Server is installed. Previous versions of Red Hat Enterprise Linux including EUS or z-stream are not supported. The following operating systems are supported by the installer, have packages, and are tested for deploying Satellite: Table 1.2. Operating systems supported by satellite-installer Operating System Architecture Notes Red Hat Enterprise Linux 8 x86_64 only Red Hat advises against using an existing system because the Satellite installer will affect the configuration of several components. Red Hat Satellite Server requires a Red Hat Enterprise Linux installation with the @Base package group with no other package-set modifications, and without third-party configurations or software not directly necessary for the direct operation of the server. This restriction includes hardening and other non-Red Hat security software. If you require such software in your infrastructure, install and verify a complete working Satellite Server first, then create a backup of the system before adding any non-Red Hat software. Red Hat does not support using the system for anything other than running Satellite Server. 1.5. Supported browsers Satellite supports recent versions of Firefox and Google Chrome browsers. The Satellite web UI and command-line interface support English, Simplified Chinese, Japanese, and French. 1.6. Port and firewall requirements For the components of Satellite architecture to communicate, ensure that the required network ports are open and free on the base operating system. You must also ensure that the required network ports are open on any network-based firewalls. Use this information to configure any network-based firewalls. Note that some cloud solutions must be specifically configured to allow communications between machines because they isolate machines similarly to network-based firewalls. If you use an application-based firewall, ensure that the application-based firewall permits all applications that are listed in the tables and known to your firewall. If possible, disable the application checking and allow open port communication based on the protocol. Integrated Capsule Satellite Server has an integrated Capsule and any host that is directly connected to Satellite Server is a Client of Satellite in the context of this section. This includes the base operating system on which Capsule Server is running. Clients of Capsule Hosts which are clients of Capsules, other than Satellite's integrated Capsule, do not need access to Satellite Server. For more information on Satellite Topology and an illustration of port connections, see Capsule Networking in Overview, concepts, and deployment considerations . Required ports can change based on your configuration. The following tables indicate the destination port and the direction of network traffic: Table 1.3.
Satellite Server incoming traffic Destination Port Protocol Service Source Required For Description 53 TCP and UDP DNS DNS Servers and clients Name resolution DNS (optional) 67 UDP DHCP Client Dynamic IP DHCP (optional) 69 UDP TFTP Client TFTP Server (optional) 443 TCP HTTPS Capsule Red Hat Satellite API Communication from Capsule 443, 80 TCP HTTPS, HTTP Client Global Registration Registering hosts to Satellite Port 443 is required for registration initiation, uploading facts, and sending installed packages and traces Port 80 notifies Satellite on the /unattended/built endpoint that registration has finished 443 TCP HTTPS Red Hat Satellite Content Mirroring Management 443 TCP HTTPS Red Hat Satellite Capsule API Smart Proxy functionality 443, 80 TCP HTTPS, HTTP Capsule Content Retrieval Content 443, 80 TCP HTTPS, HTTP Client Content Retrieval Content 1883 TCP MQTT Client Pull based REX (optional) Content hosts for REX job notification (optional) 5910 - 5930 TCP HTTPS Browsers Compute Resource's virtual console 8000 TCP HTTP Client Provisioning templates Template retrieval for client installers, iPXE or UEFI HTTP Boot 8000 TCP HTTPS Client PXE Boot Installation 8140 TCP HTTPS Client Puppet agent Client updates (optional) 9090 TCP HTTPS Red Hat Satellite Capsule API Smart Proxy functionality 9090 TCP HTTPS Client OpenSCAP Configure Client (if the OpenSCAP plugin is installed) 9090 TCP HTTPS Discovered Node Discovery Host discovery and provisioning (if the discovery plugin is installed) Any host that is directly connected to Satellite Server is a client in this context because it is a client of the integrated Capsule. This includes the base operating system on which a Capsule Server is running. A DHCP Capsule performs ICMP ping or TCP echo connection attempts to hosts in subnets with DHCP IPAM set to find out if an IP address considered for use is free. This behavior can be turned off using satellite-installer --foreman-proxy-dhcp-ping-free-ip=false . Note Some outgoing traffic returns to Satellite to enable internal communication and security operations. Table 1.4. Satellite Server outgoing traffic Destination Port Protocol Service Destination Required For Description ICMP ping Client DHCP Free IP checking (optional) 7 TCP echo Client DHCP Free IP checking (optional) 22 TCP SSH Target host Remote execution Run jobs 22, 16514 TCP SSH SSH/TLS Compute Resource Satellite originated communications, for compute resources in libvirt 53 TCP and UDP DNS DNS Servers on the Internet DNS Server Resolve DNS records (optional) 53 TCP and UDP DNS DNS Server Capsule DNS Validation of DNS conflicts (optional) 53 TCP and UDP DNS DNS Server Orchestration Validation of DNS conflicts 68 UDP DHCP Client Dynamic IP DHCP (optional) 80 TCP HTTP Remote repository Content Sync Remote repositories 389, 636 TCP LDAP, LDAPS External LDAP Server LDAP LDAP authentication, necessary only if external authentication is enabled. 
The port can be customized when LDAPAuthSource is defined 443 TCP HTTPS Satellite Capsule Capsule Configuration management Template retrieval OpenSCAP Remote Execution result upload 443 TCP HTTPS Amazon EC2, Azure, Google GCE Compute resources Virtual machine interactions (query/create/destroy) (optional) 443 TCP HTTPS console.redhat.com Red Hat Cloud plugin API calls 443 TCP HTTPS cdn.redhat.com Content Sync Red Hat CDN 443 TCP HTTPS api.access.redhat.com SOS report Assisting support cases filed through the Red Hat Customer Portal (optional) 443 TCP HTTPS cert-api.access.redhat.com Telemetry data upload and report 443 TCP HTTPS Capsule Content mirroring Initiation 443 TCP HTTPS Infoblox DHCP Server DHCP management When using Infoblox for DHCP, management of the DHCP leases (optional) 623 Client Power management BMC On/Off/Cycle/Status 5000 TCP HTTPS OpenStack Compute Resource Compute resources Virtual machine interactions (query/create/destroy) (optional) 5900 - 5930 TCP SSL/TLS Hypervisor noVNC console Launch noVNC console 7911 TCP DHCP, OMAPI DHCP Server DHCP The DHCP target is configured using --foreman-proxy-dhcp-server and defaults to localhost ISC and remote_isc use a configurable port that defaults to 7911 and uses OMAPI 8443 TCP HTTPS Client Discovery Capsule sends reboot command to the discovered host (optional) 9090 TCP HTTPS Capsule Capsule API Management of Capsules 1.7. Enabling connections from a client to Satellite Server Capsules and Content Hosts that are clients of a Satellite Server's internal Capsule require access through Satellite's host-based firewall and any network-based firewalls. Use this procedure to configure the host-based firewall on the system that Satellite is installed on, to enable incoming connections from Clients, and to make the configuration persistent across system reboots. For more information on the ports used, see Port and firewall requirements in Installing Satellite Server in a connected network environment . Procedure Open the ports for clients on Satellite Server: Allow access to services on Satellite Server: Make the changes persistent: Verification Enter the following command: For more information, see Using and Configuring firewalld in Red Hat Enterprise Linux 8 Securing networks . 1.8. Verifying DNS resolution Verify the full forward and reverse DNS resolution using a fully-qualified domain name to prevent issues while installing Satellite. Procedure Ensure that the host name and local host resolve correctly: Successful name resolution results in output similar to the following: To avoid discrepancies with static and transient host names, set all the host names on the system by entering the following command: For more information, see the Changing a hostname using hostnamectl in the Red Hat Enterprise Linux 8 Configuring and managing networking . Warning Name resolution is critical to the operation of Satellite. If Satellite cannot properly resolve its fully qualified domain name, tasks such as content management, subscription management, and provisioning will fail. 1.9. Tuning Satellite Server with predefined profiles If your Satellite deployment includes more than 5000 hosts, you can use predefined tuning profiles to improve performance of Satellite. Note that you cannot use tuning profiles on Capsules. You can choose one of the profiles depending on the number of hosts your Satellite manages and available hardware resources. 
The tuning profiles are available in the /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes directory. When you run the satellite-installer command with the --tuning option, deployment configuration settings are applied to Satellite in the following order: The default tuning profile defined in the /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml file The tuning profile that you want to apply to your deployment and is defined in the /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/ directory Optional: If you have configured a /etc/foreman-installer/custom-hiera.yaml file, Satellite applies these configuration settings. Note that the configuration settings that are defined in the /etc/foreman-installer/custom-hiera.yaml file override the configuration settings that are defined in the tuning profiles. Therefore, before applying a tuning profile, you must compare the configuration settings that are defined in the default tuning profile in /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml , the tuning profile that you want to apply and your /etc/foreman-installer/custom-hiera.yaml file, and remove any duplicated configuration from the /etc/foreman-installer/custom-hiera.yaml file. default Number of hosts: 0 - 5000 RAM: 20G Number of CPU cores: 4 medium Number of hosts: 5001 - 10000 RAM: 32G Number of CPU cores: 8 large Number of hosts: 10001 - 20000 RAM: 64G Number of CPU cores: 16 extra-large Number of hosts: 20001 - 60000 RAM: 128G Number of CPU cores: 32 extra-extra-large Number of hosts: 60000+ RAM: 256G Number of CPU cores: 48+ Procedure Optional: If you have configured the custom-hiera.yaml file on Satellite Server, back up the /etc/foreman-installer/custom-hiera.yaml file to custom-hiera.original . You can use the backup file to restore the /etc/foreman-installer/custom-hiera.yaml file to its original state if it becomes corrupted: Optional: If you have configured the custom-hiera.yaml file on Satellite Server, review the definitions of the default tuning profile in /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml and the tuning profile that you want to apply in /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/ . Compare the configuration entries against the entries in your /etc/foreman-installer/custom-hiera.yaml file and remove any duplicated configuration settings in your /etc/foreman-installer/custom-hiera.yaml file. Enter the satellite-installer command with the --tuning option for the profile that you want to apply. For example, to apply the medium tuning profile settings, enter the following command: | [
"nfs.example.com:/nfsshare /var/lib/pulp nfs context=\"system_u:object_r:var_lib_t:s0\" 1 2",
"restorecon -R /var/lib/pulp",
"firewall-cmd --add-port=\"8000/tcp\" --add-port=\"9090/tcp\"",
"firewall-cmd --add-service=dns --add-service=dhcp --add-service=tftp --add-service=http --add-service=https --add-service=puppetmaster",
"firewall-cmd --runtime-to-permanent",
"firewall-cmd --list-all",
"ping -c1 localhost ping -c1 `hostname -f` # my_system.domain.com",
"ping -c1 localhost PING localhost (127.0.0.1) 56(84) bytes of data. 64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.043 ms --- localhost ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms ping -c1 `hostname -f` PING hostname.gateway (XX.XX.XX.XX) 56(84) bytes of data. 64 bytes from hostname.gateway (XX.XX.XX.XX): icmp_seq=1 ttl=64 time=0.019 ms --- localhost.gateway ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms",
"hostnamectl set-hostname name",
"cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original",
"satellite-installer --tuning medium"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_connected_network_environment/preparing_your_environment_for_installation_satellite |
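A quick pre-flight sketch that exercises the DNS and port requirements above from a client's point of view. Assumptions: satellite.example.com and 192.0.2.10 are placeholders for the real FQDN and address, and nc is provided by the nmap-ncat package, which may need to be installed first.

dig +short satellite.example.com                # forward resolution must return the server's IP
dig +short -x 192.0.2.10                        # reverse resolution must return the FQDN
for port in 443 8000 9090; do nc -zv satellite.example.com "$port"; done   # client-facing ports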
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/conscious-language-message_automating-sap-hana-scale-out-v9 |
8.10. Virtual Machine Timer Management with libvirt | 8.10. Virtual Machine Timer Management with libvirt Accurate time keeping on guest virtual machines is a key challenge for virtualization platforms. Different hypervisors attempt to handle the problem of time keeping in a variety of ways. libvirt provides hypervisor-independent configuration settings for time management, using the <clock> and <timer> elements in the domain XML. The domain XML can be edited using the virsh edit command. See Section 14.6, "Editing a Guest Virtual Machine's configuration file" for details. The <clock> element is used to determine how the guest virtual machine clock is synchronized with the host physical machine clock. The clock element has the following attributes: offset determines how the guest virtual machine clock is offset from the host physical machine clock. The offset attribute has the following possible values: Table 8.1. Offset attribute values Value Description utc The guest virtual machine clock will be synchronized to UTC when booted. localtime The guest virtual machine clock will be synchronized to the host physical machine's configured timezone when booted, if any. timezone The guest virtual machine clock will be synchronized to a given timezone, specified by the timezone attribute. variable The guest virtual machine clock will be synchronized to an arbitrary offset from UTC. The delta relative to UTC is specified in seconds, using the adjustment attribute. The guest virtual machine is free to adjust the Real Time Clock (RTC) over time and expect that it will be honored following the reboot. This is in contrast to utc mode, where any RTC adjustments are lost at each reboot. Note The value utc is set as the clock offset in a virtual machine by default. However, if the guest virtual machine clock is run with the localtime value, the clock offset needs to be changed to a different value in order to have the guest virtual machine clock synchronized with the host physical machine clock. The timezone attribute determines which timezone is used for the guest virtual machine clock. The adjustment attribute provides the delta for guest virtual machine clock synchronization, in seconds, relative to UTC. Example 8.1. Always synchronize to UTC Example 8.2. Always synchronize to the host physical machine timezone Example 8.3. Synchronize to an arbitrary timezone Example 8.4. Synchronize to UTC + arbitrary offset 8.10.1. timer Child Element for clock A clock element can have zero or more timer elements as children. The timer element specifies a time source used for guest virtual machine clock synchronization. The timer element has the following attributes; only name is required, and all other attributes are optional. The name attribute dictates the type of the time source to use, and can be one of the following: Table 8.2. name attribute values Value Description pit Programmable Interval Timer - a timer with periodic interrupts. rtc Real Time Clock - a continuously running timer with periodic interrupts. tsc Time Stamp Counter - counts the number of ticks since reset, no interrupts. kvmclock KVM clock - recommended clock source for KVM guest virtual machines. KVM pvclock, or kvm-clock, lets guest virtual machines read the host physical machine's wall clock time. | [
"<clock offset=\"utc\" />",
"<clock offset=\"localtime\" />",
"<clock offset=\"timezone\" timezone=\"Europe/Paris\" />",
"<clock offset=\"variable\" adjustment=\"123456\" />"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-tips_and_tricks-libvirt_managed_timers |
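Because timer is a child of clock, one stanza in the domain XML (edited with virsh edit) can set both the offset and the time source. A minimal sketch for a KVM guest; the rtc tickpolicy shown is an optional, commonly used setting and is an assumption, not something mandated by the section above:

<clock offset='utc'>
  <timer name='kvmclock' present='yes'/>
  <timer name='rtc' tickpolicy='catchup'/>
</clock>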
Chapter 7. Installing and Configuring Certificate System | Chapter 7. Installing and Configuring Certificate System Red Hat Certificate System provides different subsystems that can be installed individually. For example, you can install multiple subsystem instances on a single server or you can run them independently on different hosts. This enables you to adapt the installation to your environment to provide higher availability, scalability, and fail-over support. This chapter describes the package installation and how to set up the individual subsystems. The Certificate System includes the following subsystems: Certificate Authority (CA) Key Recovery Authority (KRA) Online Certificate Status Protocol (OCSP) Responder Token Key Service (TKS) Token Processing System (TPS) Each subsystem is installed and configured individually as a standalone Tomcat web server instance. However, Red Hat Certificate System additionally supports running a single shared Tomcat web server instance that can contain up to one of each subsystem. 7.1. Subsystem Configuration Order The order in which the individual subsystems are set up is important because of relationships between the different subsystems: At least one CA running as a security domain is required before any of the other public key infrastructure (PKI) subsystems can be installed. Install the OCSP after the CA has been configured. The KRA and TKS subsystems can be installed in any order, after the CA and OCSP have been configured. The TPS subsystem depends on the CA and TKS, and optionally on the KRA and OCSP subsystem. Note In certain situations, administrators want to install a standalone KRA or OCSP that does not require a CA running as a security domain. For details, see Section 7.9, "Setting up a Standalone KRA or OCSP" . | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/Installation_and_Configuration
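To make the ordering concrete: each subsystem is created by its own pkispawn run, so a deployment that follows the rules above could be driven by a sequence like the following sketch. The configuration file names are placeholders; their contents are covered by the installation sections of the guide.

pkispawn -s CA -f ca.cfg
pkispawn -s OCSP -f ocsp.cfg
pkispawn -s KRA -f kra.cfg
pkispawn -s TKS -f tks.cfg
pkispawn -s TPS -f tps.cfg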
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_user_guide/making-open-source-more-inclusive |
B.2. Using Keytool with JBoss Data Virtualization | B.2. Using Keytool with JBoss Data Virtualization When using the keytool to manage public key cryptography for JBoss Data Virtualization, use the following options: Set the alias to teiid using the -alias teiid option. Set the algorithm to RSA using the -keyalg RSA option. Set the validity period to 365 days using the -validity 365 option. Set the store type to JKS using the -storetype JKS option. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/using_keytool_with_jboss_data_virtualization
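Combined, the options above form a single keytool invocation. A minimal sketch, assuming a keystore file named server.keystore (the file name, and the distinguished-name details keytool prompts for, are not specified by the section):

keytool -genkeypair -alias teiid -keyalg RSA -validity 365 -storetype JKS -keystore server.keystore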
Chapter 9. Disaster Recovery | Chapter 9. Disaster Recovery Disaster Recovery (DR) helps an organization to recover and resume business critical functions or normal operations when there are disruptions or disasters. OpenShift Data Foundation provides High Availability (HA) & DR solutions for stateful apps, which are broadly categorized into the following categories: Metro-DR : Single Region and cross data center protection with no data loss. Regional-DR : Cross Region protection with minimal potential data loss. Disaster Recovery with stretch cluster : Single OpenShift Data Foundation cluster is stretched between two different locations to provide the storage infrastructure with disaster recovery capabilities. 9.1. Metro-DR Metropolitan disaster recovery (Metro-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM), Red Hat Ceph Storage, and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. This release of the Metro-DR solution provides volume persistent data and metadata replication across sites that are geographically dispersed. In the public cloud these would be similar to protecting from an Availability Zone failure. Metro-DR ensures business continuity during the unavailability of a data center with no data loss. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Important You can now easily set up Metropolitan disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure that the primary managed cluster (Site-1) is co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (Site-3) that is not impacted by the failures of either the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used, it can be placed with the secondary cluster at Site-2. Note Hub recovery for Metro-DR is a Technology Preview feature and is subject to Technology Preview support limitations. For detailed solution requirements, see Metro-DR requirements , deployment requirements for Red Hat Ceph Storage stretch cluster with arbiter, and RHACM requirements . 9.2. Regional-DR Regional disaster recovery (Regional-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM) and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. It is built on asynchronous data replication and hence could have potential data loss, but provides protection against a broad set of failures. Red Hat OpenShift Data Foundation is backed by Ceph as the storage provider, whose lifecycle is managed by Rook, and is enhanced with the ability to: Enable pools for mirroring.
Automatically mirror images across RBD pools. Provide csi-addons to manage per-Persistent Volume Claim mirroring. This release of Regional-DR supports a multi-cluster configuration that is deployed across different regions and data centers. For example, 2-way replication across two managed clusters located in two different regions or data centers. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Important You can now easily set up Regional disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure that the primary managed cluster (Site-1) is co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (Site-3) that is not impacted by the failures of either the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used, it can be placed with the secondary cluster at Site-2. For detailed solution requirements, see Regional-DR requirements and RHACM requirements . 9.3. Disaster Recovery with stretch cluster In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This feature is currently intended for deployment in the OpenShift Container Platform on-premises and in the same location. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for a no-data-loss DR solution deployed over multiple data centers with low-latency networks. Note The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms maximum round-trip time (RTT) between the zones containing data volumes. For arbiter nodes, follow the latency requirements specified for etcd; see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites (Data Centers/Regions) . Contact Red Hat Customer Support if you are planning to deploy with higher latencies. To use the stretch cluster, you must have a minimum of five nodes across three zones, where: Two nodes per zone are used for each data-center zone, and one additional zone with one node is used for the arbiter zone (the arbiter can be on a master node). All the nodes must be manually labeled with the zone labels prior to cluster creation. For example, the zones can be labeled as follows (see the command sketch after this section): topology.kubernetes.io/zone=arbiter (master or worker node) topology.kubernetes.io/zone=datacenter1 (minimum two worker nodes) topology.kubernetes.io/zone=datacenter2 (minimum two worker nodes) For more information, see Configuring OpenShift Data Foundation for stretch cluster . To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions .
Important You can now easily set up disaster recovery with stretch cluster for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see OpenShift Virtualization in OpenShift Container Platform guide. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/planning_your_deployment/disaster-recovery |
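As a sketch of the manual zone labeling described for the stretch cluster, the labels can be applied with oc before OpenShift Data Foundation cluster creation; the node names (master-0, worker-0 ... worker-3) are placeholder assumptions:

oc label node master-0 topology.kubernetes.io/zone=arbiter
oc label node worker-0 worker-1 topology.kubernetes.io/zone=datacenter1
oc label node worker-2 worker-3 topology.kubernetes.io/zone=datacenter2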
Chapter 1. Common object reference | Chapter 1. Common object reference 1.1. com.coreos.monitoring.v1.AlertmanagerList schema Description AlertmanagerList is a list of Alertmanager Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Alertmanager) List of alertmanagers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.2. com.coreos.monitoring.v1.PodMonitorList schema Description PodMonitorList is a list of PodMonitor Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodMonitor) List of podmonitors. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.3. com.coreos.monitoring.v1.ProbeList schema Description ProbeList is a list of Probe Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Probe) List of probes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.4. com.coreos.monitoring.v1.PrometheusList schema Description PrometheusList is a list of Prometheus Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Prometheus) List of prometheuses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.5. com.coreos.monitoring.v1.PrometheusRuleList schema Description PrometheusRuleList is a list of PrometheusRule Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PrometheusRule) List of prometheusrules. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.6. com.coreos.monitoring.v1.ServiceMonitorList schema Description ServiceMonitorList is a list of ServiceMonitor Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ServiceMonitor) List of servicemonitors. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.7. com.coreos.monitoring.v1.ThanosRulerList schema Description ThanosRulerList is a list of ThanosRuler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ThanosRuler) List of thanosrulers. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.8. com.coreos.monitoring.v1beta1.AlertmanagerConfigList schema Description AlertmanagerConfigList is a list of AlertmanagerConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AlertmanagerConfig) List of alertmanagerconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.9. com.coreos.operators.v1.OLMConfigList schema Description OLMConfigList is a list of OLMConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OLMConfig) List of olmconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.10. com.coreos.operators.v1.OperatorGroupList schema Description OperatorGroupList is a list of OperatorGroup Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorGroup) List of operatorgroups. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.11. com.coreos.operators.v1.OperatorList schema Description OperatorList is a list of Operator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Operator) List of operators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.12. com.coreos.operators.v1alpha1.CatalogSourceList schema Description CatalogSourceList is a list of CatalogSource Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CatalogSource) List of catalogsources. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.13. com.coreos.operators.v1alpha1.ClusterServiceVersionList schema Description ClusterServiceVersionList is a list of ClusterServiceVersion Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterServiceVersion) List of clusterserviceversions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.14. 
com.coreos.operators.v1alpha1.InstallPlanList schema Description InstallPlanList is a list of InstallPlan Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (InstallPlan) List of installplans. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.15. com.coreos.operators.v1alpha1.SubscriptionList schema Description SubscriptionList is a list of Subscription Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Subscription) List of subscriptions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.16. com.coreos.operators.v2.OperatorConditionList schema Description OperatorConditionList is a list of OperatorCondition Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorCondition) List of operatorconditions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.17. com.github.openshift.api.apps.v1.DeploymentConfigList schema Description DeploymentConfigList is a collection of deployment configs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DeploymentConfig) Items is a list of deployment configs kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.18. com.github.openshift.api.authorization.v1.ClusterRoleBindingList schema Description ClusterRoleBindingList is a collection of ClusterRoleBindings Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRoleBinding) Items is a list of ClusterRoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.19. com.github.openshift.api.authorization.v1.ClusterRoleList schema Description ClusterRoleList is a collection of ClusterRoles Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRole) Items is a list of ClusterRoles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.20. com.github.openshift.api.authorization.v1.RoleBindingList schema Description RoleBindingList is a collection of RoleBindings Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RoleBinding) Items is a list of RoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.21. com.github.openshift.api.authorization.v1.RoleList schema Description RoleList is a collection of Roles Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Role) Items is a list of Roles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.22. com.github.openshift.api.build.v1.BuildConfigList schema Description BuildConfigList is a collection of BuildConfigs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BuildConfig) items is a list of build configs kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.23. com.github.openshift.api.build.v1.BuildList schema Description BuildList is a collection of Builds. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Build) items is a list of builds kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.24. com.github.openshift.api.image.v1.ImageList schema Description ImageList is a list of Image objects. 
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Image) Items is a list of images kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.25. com.github.openshift.api.image.v1.ImageStreamList schema Description ImageStreamList is a list of ImageStream objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageStream) Items is a list of imageStreams kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.26. com.github.openshift.api.image.v1.ImageStreamTagList schema Description ImageStreamTagList is a list of ImageStreamTag objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageStreamTag) Items is the list of image stream tags kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.27. com.github.openshift.api.image.v1.ImageTagList schema Description ImageTagList is a list of ImageTag objects. When listing image tags, the image field is not populated. Tags are returned in alphabetical order by image stream and then tag. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageTag) Items is the list of image stream tags kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.28. com.github.openshift.api.oauth.v1.OAuthAccessTokenList schema Description OAuthAccessTokenList is a collection of OAuth access tokens Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthAccessToken) Items is the list of OAuth access tokens kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.29. com.github.openshift.api.oauth.v1.OAuthAuthorizeTokenList schema Description OAuthAuthorizeTokenList is a collection of OAuth authorization tokens Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthAuthorizeToken) Items is the list of OAuth authorization tokens kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.30. com.github.openshift.api.oauth.v1.OAuthClientAuthorizationList schema Description OAuthClientAuthorizationList is a collection of OAuth client authorizations Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthClientAuthorization) Items is the list of OAuth client authorizations kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.31. com.github.openshift.api.oauth.v1.OAuthClientList schema Description OAuthClientList is a collection of OAuth clients Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthClient) Items is the list of OAuth clients kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.32. com.github.openshift.api.oauth.v1.UserOAuthAccessTokenList schema Description UserOAuthAccessTokenList is a collection of access tokens issued on behalf of the requesting user Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (UserOAuthAccessToken) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.33. com.github.openshift.api.project.v1.ProjectList schema Description ProjectList is a list of Project objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Project) Items is the list of projects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.34. com.github.openshift.api.quota.v1.AppliedClusterResourceQuotaList schema Description AppliedClusterResourceQuotaList is a collection of AppliedClusterResourceQuotas Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AppliedClusterResourceQuota) Items is a list of AppliedClusterResourceQuota kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.35. com.github.openshift.api.route.v1.RouteList schema Description RouteList is a collection of Routes. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Route) items is a list of routes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.36. com.github.openshift.api.security.v1.RangeAllocationList schema Description RangeAllocationList is a list of RangeAllocations objects Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RangeAllocation) List of RangeAllocations. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.37. com.github.openshift.api.template.v1.BrokerTemplateInstanceList schema Description BrokerTemplateInstanceList is a list of BrokerTemplateInstance objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BrokerTemplateInstance) items is a list of BrokerTemplateInstances kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta
1.38. com.github.openshift.api.template.v1.TemplateInstanceList schema Description TemplateInstanceList is a list of TemplateInstance objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (TemplateInstance) items is a list of TemplateInstances kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta
1.39. com.github.openshift.api.template.v1.TemplateList schema Description TemplateList is a list of Template objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Template) Items is a list of templates kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta
1.40. com.github.openshift.api.user.v1.GroupList schema Description GroupList is a collection of Groups Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Group) Items is the list of groups kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta
1.41. com.github.openshift.api.user.v1.IdentityList schema Description IdentityList is a collection of Identities Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Identity) Items is the list of identities kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta
1.42. com.github.openshift.api.user.v1.UserList schema Description UserList is a collection of Users Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (User) Items is the list of users kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta
1.43. com.github.operator-framework.api.pkg.lib.version.OperatorVersion schema Description OperatorVersion is a wrapper around semver.Version which supports correct marshaling to YAML and JSON. Type string
1.44. com.github.operator-framework.api.pkg.operators.v1alpha1.APIServiceDefinitions schema Description APIServiceDefinitions declares all of the extension APIs managed or required by an operator being run by ClusterServiceVersion. Type object Schema Property Type Description owned array (APIServiceDescription) required array (APIServiceDescription)
1.45. com.github.operator-framework.api.pkg.operators.v1alpha1.CustomResourceDefinitions schema Description CustomResourceDefinitions declares all of the CRDs managed or required by an operator being run by ClusterServiceVersion. If the CRD is present in the Owned list, it is implicitly required. Type object Schema Property Type Description owned array (CRDDescription) required array (CRDDescription)
1.46. com.github.operator-framework.api.pkg.operators.v1alpha1.InstallMode schema Description InstallMode associates an InstallModeType with a flag representing whether the CSV supports it Type object Required type supported Schema Property Type Description supported boolean type string
1.47. com.github.operator-framework.operator-lifecycle-manager.pkg.package-server.apis.operators.v1.PackageManifestList schema Description PackageManifestList is a list of PackageManifest objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PackageManifest) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.48. io.cncf.cni.k8s.v1.NetworkAttachmentDefinitionList schema Description NetworkAttachmentDefinitionList is a list of NetworkAttachmentDefinition Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (NetworkAttachmentDefinition) List of network-attachment-definitions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.49. io.cncf.cni.whereabouts.v1alpha1.IPPoolList schema Description IPPoolList is a list of IPPool Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IPPool) List of ippools. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.50. io.cncf.cni.whereabouts.v1alpha1.OverlappingRangeIPReservationList schema Description OverlappingRangeIPReservationList is a list of OverlappingRangeIPReservation Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OverlappingRangeIPReservation) List of overlappingrangeipreservations. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.51. io.k8s.api.admissionregistration.v1.MutatingWebhookConfigurationList schema Description MutatingWebhookConfigurationList is a list of MutatingWebhookConfiguration. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MutatingWebhookConfiguration) List of MutatingWebhookConfiguration. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.52. io.k8s.api.admissionregistration.v1.ValidatingWebhookConfigurationList schema Description ValidatingWebhookConfigurationList is a list of ValidatingWebhookConfiguration. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ValidatingWebhookConfiguration) List of ValidatingWebhookConfiguration. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.53. io.k8s.api.apps.v1.ControllerRevisionList schema Description ControllerRevisionList is a resource containing a list of ControllerRevision objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ControllerRevision) Items is the list of ControllerRevisions kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.54. io.k8s.api.apps.v1.DaemonSetList schema Description DaemonSetList is a collection of daemon sets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DaemonSet) A list of daemon sets. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.55. io.k8s.api.apps.v1.DeploymentList schema Description DeploymentList is a list of Deployments. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Deployment) Items is the list of Deployments. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 1.56. io.k8s.api.apps.v1.ReplicaSetList schema Description ReplicaSetList is a collection of ReplicaSets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ReplicaSet) List of ReplicaSets. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.57. io.k8s.api.apps.v1.StatefulSetList schema Description StatefulSetList is a collection of StatefulSets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StatefulSet) Items is the list of stateful sets. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.58. io.k8s.api.autoscaling.v2.HorizontalPodAutoscalerList schema Description HorizontalPodAutoscalerList is a list of horizontal pod autoscaler objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HorizontalPodAutoscaler) items is the list of horizontal pod autoscaler objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list metadata. 1.59. io.k8s.api.batch.v1.CronJobList schema Description CronJobList is a collection of cron jobs. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CronJob) items is the list of CronJobs. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.60. io.k8s.api.batch.v1.JobList schema Description JobList is a collection of jobs. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Job) items is the list of Jobs. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.61. 
io.k8s.api.certificates.v1.CertificateSigningRequestList schema Description CertificateSigningRequestList is a collection of CertificateSigningRequest objects Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CertificateSigningRequest) items is a collection of CertificateSigningRequest objects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.62. io.k8s.api.coordination.v1.LeaseList schema Description LeaseList is a list of Lease objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Lease) Items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.63. io.k8s.api.core.v1.ComponentStatusList schema Description Status of all the conditions for the component as a list of ComponentStatus objects. Deprecated: This API is deprecated in v1.19+ Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ComponentStatus) List of ComponentStatus objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.64. io.k8s.api.core.v1.ConfigMapList schema Description ConfigMapList is a resource containing a list of ConfigMap objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConfigMap) Items is the list of ConfigMaps. 
kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
1.65. io.k8s.api.core.v1.ConfigMapVolumeSource schema Description Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. A usage sketch follows this schema. Type object Schema Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array (KeyToPath) items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specifies whether the ConfigMap or its keys must be defined
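The following is a minimal, hypothetical sketch of how these fields combine in a Pod manifest; the ConfigMap name app-config, the key app.properties, the image, and the mount path are placeholder assumptions, not values taken from this reference.

```yaml
# Sketch only: assumes a ConfigMap named "app-config" with an
# "app.properties" key already exists in the same namespace.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-example
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: config
      mountPath: /etc/app          # projected files appear under this directory
  volumes:
  - name: config
    configMap:
      name: app-config             # reference to the ConfigMap by name
      defaultMode: 0400            # octal mode bits applied to created files
      items:                       # project only the listed key...
      - key: app.properties
        path: app.properties       # ...at this relative path
```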
1.66. io.k8s.api.core.v1.CSIVolumeSource schema Description Represents a source location of a volume to mount, managed by an external CSI driver Type object Required driver Schema Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef LocalObjectReference nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values.
1.67. io.k8s.api.core.v1.EndpointsList schema Description EndpointsList is a list of endpoints. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Endpoints) List of endpoints. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.68. io.k8s.api.core.v1.EnvVar schema Description EnvVar represents an environment variable present in a Container. A usage sketch follows this schema. Type object Required name Schema Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom EnvVarSource Source for the environment variable's value. Cannot be used if value is not empty.
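As a sketch of the expansion and escaping rules above (the image and variable names are illustrative assumptions):

```yaml
# Sketch only: demonstrates a literal value, $(VAR_NAME) expansion,
# $$ escaping, and a valueFrom source (the pod's own name).
apiVersion: v1
kind: Pod
metadata:
  name: envvar-example
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: GREETING
      value: "hello"
    - name: EXPANDED
      value: "$(GREETING) world"   # expands to "hello world"
    - name: ESCAPED
      value: "$$(GREETING)"        # double $$ collapses to the literal "$(GREETING)"
    - name: POD_NAME
      valueFrom:                   # EnvVarSource: read from the pod's own metadata
        fieldRef:
          fieldPath: metadata.name
```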
1.69. io.k8s.api.core.v1.EventList schema Description EventList is a list of events. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Event) List of events kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.70. io.k8s.api.core.v1.EventSource schema Description EventSource contains information for an event. Type object Schema Property Type Description component string Component from which the event is generated. host string Node name on which the event is generated.
1.71. io.k8s.api.core.v1.LimitRangeList schema Description LimitRangeList is a list of LimitRange items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (LimitRange) Items is a list of LimitRange objects. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.72. io.k8s.api.core.v1.LoadBalancerStatus schema Description LoadBalancerStatus represents the status of a load-balancer. Type object Schema Property Type Description ingress array (LoadBalancerIngress) Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points.
1.73. io.k8s.api.core.v1.LocalObjectReference schema Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Schema Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
1.74. io.k8s.api.core.v1.NamespaceCondition schema Description NamespaceCondition contains details about the state of a namespace. Type object Required type status Schema Property Type Description lastTransitionTime Time message string reason string status string Status of the condition, one of True, False, Unknown. type string Type of namespace controller condition.
1.75. io.k8s.api.core.v1.NamespaceList schema Description NamespaceList is a list of Namespaces. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Namespace) Items is the list of Namespace objects in the list. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.76. io.k8s.api.core.v1.NodeList schema Description NodeList is the whole list of all Nodes which have been registered with master. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Node) List of nodes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.77. io.k8s.api.core.v1.ObjectReference schema Description ObjectReference contains enough information to let you inspect or modify the referred object. A usage sketch follows this schema. Type object Schema Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
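As an illustration of how these fields fit together, here is a hypothetical Event whose involvedObject is an ObjectReference pointing at one container of a pod; all names and the UID are placeholders:

```yaml
# Sketch only: an ObjectReference as embedded in an Event's involvedObject.
apiVersion: v1
kind: Event
metadata:
  name: web-0.placeholder
  namespace: default
involvedObject:
  apiVersion: v1
  kind: Pod
  namespace: default
  name: web-0
  fieldPath: spec.containers{app}   # the container named "app" triggered the event
  uid: 00000000-0000-0000-0000-000000000000
reason: Started
message: Started container app
type: Normal
```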
1.78. io.k8s.api.core.v1.PersistentVolumeClaim schema Description PersistentVolumeClaim is a user's request for and claim to a persistent volume. A minimal example claim follows this schema. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes status object PersistentVolumeClaimStatus is the current status of a persistent volume claim.
.spec Description: PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. resources object ResourceRequirements describes the compute resource requirements. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim.
.spec.dataSource Description: TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced
.spec.dataSourceRef Description: TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced
.spec.resources Description: ResourceRequirements describes the compute resource requirements. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
.status Description: PersistentVolumeClaimStatus is the current status of a persistent volume claim. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResources object (Quantity) allocatedResources tracks the storage capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling the RecoverVolumeExpansionFailure feature. capacity object (Quantity) capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If the underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
conditions[] object PersistentVolumeClaimCondition contains details about the state of a PVC phase string phase represents the current phase of PersistentVolumeClaim. Possible enum values: - "Bound" used for PersistentVolumeClaims that are bound - "Lost" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - "Pending" used for PersistentVolumeClaims that are not yet bound resizeStatus string resizeStatus stores the status of the resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by the resize controller or kubelet. This is an alpha field and requires enabling the RecoverVolumeExpansionFailure feature.
.status.conditions Description: conditions is the current Condition of persistent volume claim. If the underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array
.status.conditions[] Description: PersistentVolumeClaimCondition contains details about the state of a PVC Type object Required type status Property Type Description lastProbeTime Time lastProbeTime is the time we probed the condition. lastTransitionTime Time lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. If it reports "ResizeStarted", the underlying persistent volume is being resized. status string type string
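A minimal claim that exercises the .spec fields above might look as follows; the storage class name and size are assumptions that must match what the cluster offers:

```yaml
# Sketch only: requests 1Gi of ReadWriteOnce filesystem storage from an
# assumed StorageClass named "standard".
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem        # implied default, shown for clarity
  storageClassName: standard    # must name an existing StorageClass
  resources:
    requests:
      storage: 1Gi
```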
1.79. io.k8s.api.core.v1.PersistentVolumeClaimList schema Description PersistentVolumeClaimList is a list of PersistentVolumeClaim items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PersistentVolumeClaim) items is a list of persistent volume claims. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.80. io.k8s.api.core.v1.PersistentVolumeList schema Description PersistentVolumeList is a list of PersistentVolume items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PersistentVolume) items is a list of persistent volumes. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.81. io.k8s.api.core.v1.PersistentVolumeSpec schema Description PersistentVolumeSpec is the specification of a persistent volume. A minimal example volume follows this schema. Type object Schema Property Type Description accessModes array (string) accessModes contains all ways the volume can be mounted. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes awsElasticBlockStore AWSElasticBlockStoreVolumeSource awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk AzureDiskVolumeSource azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile AzureFilePersistentVolumeSource azureFile represents an Azure File Service mount on the host and bind mount to the pod. capacity object (Quantity) capacity is the description of the persistent volume's resources and capacity. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity cephfs CephFSPersistentVolumeSource cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder CinderPersistentVolumeSource cinder represents a cinder volume attached and mounted on a kubelet's host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md claimRef ObjectReference claimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding csi CSIPersistentVolumeSource csi represents storage that is handled by an external CSI driver (Beta feature). fc FCVolumeSource fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume FlexPersistentVolumeSource flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker FlockerVolumeSource flocker represents a Flocker volume attached to a kubelet's host machine and exposed to the pod for its usage. This depends on the Flocker control service being running gcePersistentDisk GCEPersistentDiskVolumeSource gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk glusterfs GlusterfsPersistentVolumeSource glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod. Provisioned by an admin. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath HostPathVolumeSource hostPath represents a directory on the host. Provisioned by a developer or tester. This is useful for single-node development and testing only! On-host storage is not supported in any way and WILL NOT WORK in a multi-node cluster.
More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath iscsi ISCSIPersistentVolumeSource iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. local LocalVolumeSource local represents directly-attached storage with node affinity mountOptions array (string) mountOptions is the list of mount options, e.g. ["ro", "soft"]. Not validated - mount will simply fail if one is invalid. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options nfs NFSVolumeSource nfs represents an NFS mount on the host. Provisioned by an admin. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs nodeAffinity VolumeNodeAffinity nodeAffinity defines constraints that limit what nodes this volume can be accessed from. This field influences the scheduling of pods that use this volume. persistentVolumeReclaimPolicy string persistentVolumeReclaimPolicy defines what happens to a persistent volume when released from its claim. Valid options are Retain (default for manually created PersistentVolumes), Delete (default for dynamically provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be supported by the volume plugin underlying this PersistentVolume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming Possible enum values: - "Delete" means the volume will be deleted from Kubernetes on release from its claim. The volume plugin must support Deletion. - "Recycle" means the volume will be recycled back into the pool of unbound persistent volumes on release from its claim. The volume plugin must support Recycling. - "Retain" means the volume will be left in its current phase (Released) for manual reclamation by the administrator. The default policy is Retain. photonPersistentDisk PhotonPersistentDiskVolumeSource photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume PortworxVolumeSource portworxVolume represents a portworx volume attached and mounted on kubelets host machine quobyte QuobyteVolumeSource quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd RBDPersistentVolumeSource rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO ScaleIOPersistentVolumeSource scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. storageClassName string storageClassName is the name of StorageClass to which this persistent volume belongs. Empty value means that this volume does not belong to any StorageClass. storageos StorageOSPersistentVolumeSource storageOS represents a StorageOS volume that is attached to the kubelet's host machine and mounted into the pod More info: https://examples.k8s.io/volumes/storageos/README.md volumeMode string volumeMode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. Value of Filesystem is implied when not included in spec. vsphereVolume VsphereVirtualDiskVolumeSource vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 1.82. io.k8s.api.core.v1.PodList schema Description PodList is a list of Pods. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Pod) List of pods. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.83. io.k8s.api.core.v1.PodTemplateList schema Description PodTemplateList is a list of PodTemplates. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodTemplate) List of pod templates kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.84. io.k8s.api.core.v1.PodTemplateSpec schema Description PodTemplateSpec describes the data a pod should have when created from a template Type object Schema Property Type Description metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec PodSpec Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.85. io.k8s.api.core.v1.ReplicationControllerList schema Description ReplicationControllerList is a collection of replication controllers. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ReplicationController) List of replication controllers. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.86. io.k8s.api.core.v1.ResourceQuotaList schema Description ResourceQuotaList is a list of ResourceQuota items. 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ResourceQuota) Items is a list of ResourceQuota objects. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.87. io.k8s.api.core.v1.ResourceQuotaSpec schema Description ResourceQuotaSpec defines the desired hard limits to enforce for Quota. Type object Schema Property Type Description hard object (Quantity) hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ scopeSelector ScopeSelector scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. scopes array (string) A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects. 1.88. io.k8s.api.core.v1.ResourceQuotaStatus schema Description ResourceQuotaStatus defines the enforced hard limits and observed use. Type object Schema Property Type Description hard object (Quantity) Hard is the set of enforced hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ used object (Quantity) Used is the current observed total usage of the resource in the namespace. 1.89. io.k8s.api.core.v1.ResourceRequirements schema Description ResourceRequirements describes the compute resource requirements. Type object Schema Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 1.90. io.k8s.api.core.v1.Secret schema Description Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data object (string) Data contains the secret data. Each key must consist of alphanumeric characters, '-', '_' or '.'. 
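To show how ResourceQuotaSpec (section 1.87) composes hard limits with a scopeSelector, here is a hedged sketch; the quota name, limits, and priority class are assumptions for illustration:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota          # hypothetical name
spec:
  hard:                        # enforced limits, reported back under status.hard/status.used
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
  scopeSelector:
    matchExpressions:
    - scopeName: PriorityClass
      operator: In
      values: ["low"]          # quota tracks only pods in this (assumed) priority class
```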
The serialized form of the secret data is a base64 encoded string, representing the arbitrary (possibly non-string) data value here. Described in https://tools.ietf.org/html/rfc4648#section-4 immutable boolean Immutable, if set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Defaulted to nil. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata stringData object (string) stringData allows specifying non-binary secret data in string form. It is provided as a write-only input field for convenience. All keys and values are merged into the data field on write, overwriting any existing values. The stringData field is never output when reading from the API. type string Used to facilitate programmatic handling of secret data. More info: https://kubernetes.io/docs/concepts/configuration/secret/#secret-types 1.91. io.k8s.api.core.v1.SecretList schema Description SecretList is a list of Secret. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Secret) Items is a list of secret objects. More info: https://kubernetes.io/docs/concepts/configuration/secret kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.92. io.k8s.api.core.v1.SecretVolumeSource schema Description Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Type object Schema Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array (KeyToPath) items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. 
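The Secret schema (section 1.90) distinguishes base64-encoded data from write-only stringData; a minimal sketch, with made-up names and values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-credentials    # hypothetical name
type: Opaque
data:
  password: czNjcjN0           # base64-encoded bytes, per the data field rules above
stringData:
  username: app-user           # plain string; merged into data on write, never returned on read
```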
If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. optional boolean optional field specifies whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 1.93. io.k8s.api.core.v1.ServiceAccountList schema Description ServiceAccountList is a list of ServiceAccount objects Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ServiceAccount) List of ServiceAccounts. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.94. io.k8s.api.core.v1.ServiceList schema Description ServiceList holds a list of services. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Service) List of services kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.95. io.k8s.api.core.v1.Toleration schema Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Schema Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler.
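Building on the SecretVolumeSource fields (section 1.92) above, a sketch of a pod projecting a single key of the hypothetical Secret from the earlier example at a custom path and mode:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                            # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest     # illustrative image reference
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: example-credentials
      defaultMode: 0400                        # octal mode bits, as described above
      items:
      - key: password
        path: db/password                      # relative path; only this key is projected
      optional: false                          # volume setup fails if the Secret is missing
```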
key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - "Equal" - "Exists" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 1.96. io.k8s.api.core.v1.TopologySelectorTerm schema Description A topology selector term represents the result of label queries. A null or empty topology selector term matches no objects. The requirements of them are ANDed. It provides a subset of functionality as NodeSelectorTerm. This is an alpha feature and may change in the future. Type object Schema Property Type Description matchLabelExpressions array (TopologySelectorLabelRequirement) A list of topology selector requirements by labels. 1.97. io.k8s.api.core.v1.TypedLocalObjectReference schema Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Schema Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 1.98. io.k8s.api.discovery.v1.EndpointSliceList schema Description EndpointSliceList represents a list of endpoint slices Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EndpointSlice) List of endpoint slices kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 1.99. io.k8s.api.events.v1.EventList schema Description EventList is a list of Event objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Event) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. 
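For the Toleration schema (section 1.95), a pod spec fragment sketching the <key,value,effect> matching described above; the taint key is an assumption:

```yaml
spec:
  tolerations:
  - key: node.example.com/maintenance   # hypothetical taint key
    operator: Exists                    # wildcard for value; tolerates any value of this key
    effect: NoExecute
    tolerationSeconds: 300              # pod is evicted 300s after the taint appears
```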
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.100. io.k8s.api.flowcontrol.v1beta1.FlowSchemaList schema Description FlowSchemaList is a list of FlowSchema objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (FlowSchema) items is a list of FlowSchemas. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.101. io.k8s.api.flowcontrol.v1beta1.PriorityLevelConfigurationList schema Description PriorityLevelConfigurationList is a list of PriorityLevelConfiguration objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PriorityLevelConfiguration) items is a list of request-priorities. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.102. io.k8s.api.networking.v1.IngressClassList schema Description IngressClassList is a collection of IngressClasses. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IngressClass) Items is the list of IngressClasses. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 1.103. io.k8s.api.networking.v1.IngressList schema Description IngressList is a collection of Ingress. 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Ingress) Items is the list of Ingress. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.104. io.k8s.api.networking.v1.NetworkPolicyList schema Description NetworkPolicyList is a list of NetworkPolicy objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (NetworkPolicy) Items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.105. io.k8s.api.node.v1.RuntimeClassList schema Description RuntimeClassList is a list of RuntimeClass objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RuntimeClass) Items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.106. io.k8s.api.policy.v1.PodDisruptionBudgetList schema Description PodDisruptionBudgetList is a collection of PodDisruptionBudgets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodDisruptionBudget) Items is a list of PodDisruptionBudgets kind string Kind is a string value representing the REST resource this object represents. 
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.107. io.k8s.api.rbac.v1.AggregationRule schema Description AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole Type object Schema Property Type Description clusterRoleSelectors array (LabelSelector) ClusterRoleSelectors holds a list of selectors which will be used to find ClusterRoles and create the rules. If any of the selectors match, then the ClusterRole's permissions will be added 1.108. io.k8s.api.rbac.v1.ClusterRoleBindingList schema Description ClusterRoleBindingList is a collection of ClusterRoleBindings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRoleBinding) Items is a list of ClusterRoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.109. io.k8s.api.rbac.v1.ClusterRoleList schema Description ClusterRoleList is a collection of ClusterRoles Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRole) Items is a list of ClusterRoles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.110. io.k8s.api.rbac.v1.RoleBindingList schema Description RoleBindingList is a collection of RoleBindings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RoleBinding) Items is a list of RoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.111. 
io.k8s.api.rbac.v1.RoleList schema Description RoleList is a collection of Roles Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Role) Items is a list of Roles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.112. io.k8s.api.scheduling.v1.PriorityClassList schema Description PriorityClassList is a collection of priority classes. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PriorityClass) items is the list of PriorityClasses kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.113. io.k8s.api.storage.v1.CSIDriverList schema Description CSIDriverList is a collection of CSIDriver objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSIDriver) items is the list of CSIDriver kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.114. io.k8s.api.storage.v1.CSINodeList schema Description CSINodeList is a collection of CSINode objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSINode) items is the list of CSINode kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.115. io.k8s.api.storage.v1.CSIStorageCapacityList schema Description CSIStorageCapacityList is a collection of CSIStorageCapacity objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSIStorageCapacity) Items is the list of CSIStorageCapacity objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.116. io.k8s.api.storage.v1.StorageClassList schema Description StorageClassList is a collection of storage classes. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StorageClass) Items is the list of StorageClasses kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.117. io.k8s.api.storage.v1.VolumeAttachmentList schema Description VolumeAttachmentList is a collection of VolumeAttachment objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeAttachment) Items is the list of VolumeAttachments kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.118. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionList schema Description CustomResourceDefinitionList is a list of CustomResourceDefinition objects. 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CustomResourceDefinition) items list individual CustomResourceDefinition objects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.119. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.JSONSchemaProps schema Description JSONSchemaProps is a JSON-Schema following Specification Draft 4 ( http://json-schema.org/ ). Type object Schema Property Type Description $ref string $schema string additionalItems `` additionalProperties `` allOf array (undefined) anyOf array (undefined) default JSON default is a default value for undefined object fields. Defaulting is a beta feature under the CustomResourceDefaulting feature gate. Defaulting requires spec.preserveUnknownFields to be false. definitions object (undefined) dependencies object (undefined) description string enum array (JSON) example JSON exclusiveMaximum boolean exclusiveMinimum boolean externalDocs ExternalDocumentation format string format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated: - bsonobjectid: a bson object ID, i.e. a 24 characters hex string - uri: an URI as parsed by Golang net/url.ParseRequestURI - email: an email address as parsed by Golang net/mail.ParseAddress - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034]. - ipv4: an IPv4 IP as parsed by Golang net.ParseIP - ipv6: an IPv6 IP as parsed by Golang net.ParseIP - cidr: a CIDR as parsed by Golang net.ParseCIDR - mac: a MAC address as parsed by Golang net.ParseMAC - uuid: an UUID that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$ - uuid3: an UUID3 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$ - uuid4: an UUID4 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ - uuid5: an UUID5 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ - isbn: an ISBN10 or ISBN13 number string like "0321751043" or "978-0321751041" - isbn10: an ISBN10 number string like "0321751043" - isbn13: an ISBN13 number string like "978-0321751041" - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$ with any non digit characters mixed in - ssn: a U.S.
social security number following the regex ^\d{3}[- ]?\d{2}[- ]?\d{4}$ - hexcolor: a hexadecimal color code like "#FFFFFF", following the regex ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$ - rgbcolor: an RGB color code like "rgb(255,255,255)" - byte: base64 encoded binary data - password: any kind of string - date: a date string like "2006-01-02" as defined by full-date in RFC3339 - duration: a duration string like "22 ns" as parsed by Golang time.ParseDuration or compatible with Scala duration format - datetime: a date time string like "2014-12-15T19:30:20.000Z" as defined by date-time in RFC3339. id string items `` maxItems integer maxLength integer maxProperties integer maximum number minItems integer minLength integer minProperties integer minimum number multipleOf number not `` nullable boolean oneOf array (undefined) pattern string patternProperties object (undefined) properties object (undefined) required array (string) title string type string uniqueItems boolean x-kubernetes-embedded-resource boolean x-kubernetes-embedded-resource defines that the value is an embedded Kubernetes runtime.Object, with TypeMeta and ObjectMeta. The type must be object. It is allowed to further restrict the embedded object. kind, apiVersion and metadata are validated automatically. x-kubernetes-preserve-unknown-fields is allowed to be true, but does not have to be if the object is fully specified (up to kind, apiVersion, metadata). x-kubernetes-int-or-string boolean x-kubernetes-int-or-string specifies that this value is either an integer or a string. If this is true, an empty type is allowed and type as child of anyOf is permitted if following one of the following patterns: 1) anyOf: - type: integer - type: string 2) allOf: - anyOf: - type: integer - type: string - ... zero or more x-kubernetes-list-map-keys array (string) x-kubernetes-list-map-keys annotates an array with the x-kubernetes-list-type map by specifying the keys used as the index of the map. This tag MUST only be used on lists that have the "x-kubernetes-list-type" extension set to "map". Also, the values specified for this attribute must be a scalar typed field of the child structure (no nesting is supported). The properties specified must either be required or have a default value, to ensure those properties are present for all list items. x-kubernetes-list-type string x-kubernetes-list-type annotates an array to further describe its topology. This extension must only be used on lists and may have 3 possible values: 1) atomic : the list is treated as a single entity, like a scalar. Atomic lists will be entirely replaced when updated. This extension may be used on any type of list (struct, scalar, ... ). 2) set : Sets are lists that must not have multiple items with the same value. Each value must be a scalar, an object with x-kubernetes-map-type atomic or an array with x-kubernetes-list-type atomic . 3) map : These lists are like maps in that their elements have a non-index key used to identify them. Order is preserved upon merge. The map tag must only be used on a list with elements of type object. Defaults to atomic for arrays. x-kubernetes-map-type string x-kubernetes-map-type annotates an object to further describe its topology. This extension must only be used when type is object and may have 2 possible values: 1) granular : These maps are actual maps (key-value pairs) and each field is independent from the others (they can each be manipulated by separate actors). This is the default behaviour for all maps.
2) atomic : the map is treated as a single entity, like a scalar. Atomic maps will be entirely replaced when updated. x-kubernetes-preserve-unknown-fields boolean x-kubernetes-preserve-unknown-fields stops the API server decoding step from pruning fields which are not specified in the validation schema. This affects fields recursively, but switches back to normal pruning behaviour if nested properties or additionalProperties are specified in the schema. This can either be true or undefined. False is forbidden. x-kubernetes-validations array (ValidationRule) x-kubernetes-validations describes a list of validation rules written in the CEL expression language. This field is alpha-level. Using this field requires the feature gate CustomResourceValidationExpressions to be enabled. 1.120. io.k8s.apimachinery.pkg.api.resource.Quantity schema Description Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors. The serialization format is: <digit> ::= 0 | 1 | ... | 9 <digits> ::= <digit> | <digit><digits> <number> ::= <digits> | <digits>.<digits> | <digits>. | .<digits> <sign> ::= "+" | "-" <signedNumber> ::= <number> | <sign><number> <suffix> ::= <binarySI> | <decimalExponent> | <decimalSI> <binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei <decimalSI> ::= m | "" | k | M | G | T | P | E <decimalExponent> ::= "e" <signedNumber> | "E" <signedNumber> No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will be rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in "canonical form". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: - No precision is lost - No fractional digits will be emitted - The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. Examples: - 1.5 will be serialized as "1500m" - 1.5Gi will be serialized as "1536Mi" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. Type string 1.121. io.k8s.apimachinery.pkg.apis.meta.v1.Condition schema Description Condition contains details for one aspect of the current state of this API Resource. Type object Required type status lastTransitionTime reason message Schema Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition.
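Returning to the JSONSchemaProps extensions in section 1.119, here is a hedged fragment of a CustomResourceDefinition validation schema using x-kubernetes-list-type and x-kubernetes-list-map-keys; the field names are invented for illustration:

```yaml
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        backends:
          type: array
          x-kubernetes-list-type: map      # list merged by key rather than replaced atomically
          x-kubernetes-list-map-keys:
          - name                           # must be a required scalar field of the items
          items:
            type: object
            required: ["name"]
            properties:
              name:
                type: string
              weight:
                type: integer
```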
This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 1.122. io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions schema Description DeleteOptions may be provided when deleting an API object. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dryRun array (string) When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. preconditions Preconditions Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be returned. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. 1.123. io.k8s.apimachinery.pkg.apis.meta.v1.GroupVersionKind schema Description GroupVersionKind unambiguously identifies a kind. It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling Type object Required group version kind Schema Property Type Description group string kind string version string 1.124. 
io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector schema Description A label selector is a label query over a set of resources. The results of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Type object Schema Property Type Description matchExpressions array (LabelSelectorRequirement) matchExpressions is a list of label selector requirements. The requirements are ANDed. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 1.125. io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta schema Description ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of {ObjectMeta, ListMeta}. Type object Schema Property Type Description continue string continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the next set of available objects. Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message. remainingItemCount integer remainingItemCount is the number of subsequent items in the list which are not included in this list response. If the list request contained label or field selectors, then the number of remaining items is unknown and the field will be left unset and omitted during serialization. If the list is complete (either because it is not chunking or because this is the last chunk), then there are no more remaining items and this field will be left unset and omitted during serialization. Servers older than v1.15 do not set this field. The intended use of the remainingItemCount is estimating the size of a collection. Clients should not rely on the remainingItemCount to be set or to be exact. resourceVersion string String that identifies the server's internal version of this object that can be used by clients to determine when objects have changed. Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency selfLink string Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. 1.126. io.k8s.apimachinery.pkg.apis.meta.v1.MicroTime schema Description MicroTime is a version of Time with microsecond level precision. Type string 1.127. io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta schema Description ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create. Type object Schema Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects.
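A sketch of the LabelSelector semantics above (section 1.124); every clause, across both fields, must match. The labels and keys are illustrative:

```yaml
selector:
  matchLabels:
    app: example-app          # shorthand for key "app", operator In, values ["example-app"]
  matchExpressions:
  - key: tier
    operator: In
    values: ["frontend", "gateway"]
  - key: canary
    operator: DoesNotExist    # matches objects without the "canary" label key
```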
More info: http://kubernetes.io/docs/user-guide/annotations creationTimestamp Time CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata deletionGracePeriodSeconds integer Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. deletionTimestamp Time DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata finalizers array (string) Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. 
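As a sketch of the generateName behavior described above, with an invented prefix; the server, not the client, picks the final name:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  generateName: example-run-   # server appends a unique suffix, e.g. example-run-x7k2q
data:
  note: "the generated name is returned in the create response"
```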
1.126. io.k8s.apimachinery.pkg.apis.meta.v1.MicroTime schema

Description
MicroTime is a version of Time with microsecond level precision.

Type
string

1.127. io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta schema

Description
ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.

Type
object

Schema

Property | Type | Description
annotations | object (string) | Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations
creationTimestamp | Time | CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
deletionGracePeriodSeconds | integer | Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.
deletionTimestamp | Time | DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
finalizers | array (string) | Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list.
generateName | string | GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will return a 409. Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency
generation | integer | A sequence number representing a specific generation of the desired state. Populated by the system. Read-only.
labels | object (string) | Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels
managedFields | array (ManagedFieldsEntry) | ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object.
name | string | Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names
namespace | string | Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces
ownerReferences | array (OwnerReference) | List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller.
resourceVersion | string | An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
selfLink | string | Deprecated: selfLink is a legacy read-only field that is no longer populated by the system.
uid | string | UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids
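A minimal, hypothetical metadata block pulling several of these fields together; all names, labels, and UIDs below are invented, and the ownerReferences entry marks a managing controller:

metadata:
  name: example-config              # unique within the namespace
  namespace: default
  labels:
    app: frontend                   # queryable; may match selectors
  annotations:
    example.com/owner: team-a       # arbitrary metadata; not queryable
  finalizers:
  - example.com/cleanup             # blocks deletion until removed
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend-5d8f9
    uid: 0a1b2c3d-1111-4444-aaaa-1234567890ab
    controller: true                # at most one entry may be the managing controller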
1.128. io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta_v2 schema

Description
ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.

Type
object

Schema

Property | Type | Description
annotations | object (string) | Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations
clusterName | string | Deprecated: ClusterName is a legacy field that was always cleared by the system and never used; it will be removed completely in 1.25. The name in the go struct is changed to help clients detect accidental use.
creationTimestamp | Time | CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
deletionGracePeriodSeconds | integer | Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.
deletionTimestamp | Time | DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
finalizers | array (string) | Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list.
generateName | string | GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will return a 409. Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency
generation | integer | A sequence number representing a specific generation of the desired state. Populated by the system. Read-only.
labels | object (string) | Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels
managedFields | array (ManagedFieldsEntry) | ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object.
name | string | Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names
namespace | string | Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces
ownerReferences | array (OwnerReference) | List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller.
resourceVersion | string | An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
selfLink | string | Deprecated: selfLink is a legacy read-only field that is no longer populated by the system.
uid | string | UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids
1.129. io.k8s.apimachinery.pkg.apis.meta.v1.Patch schema

Description
Patch is provided to give a concrete name and type to the Kubernetes PATCH request body.

Type
object

1.130. io.k8s.apimachinery.pkg.apis.meta.v1.Status schema

Description
Status is a return value for calls that don't return other objects.

Type
object

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
code | integer | Suggested HTTP return code for this status, 0 if not set.
details | StatusDetails | Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type.
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
message | string | A human-readable description of the status of this operation.
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
reason | string | A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it.
status | string | Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

1.131. io.k8s.apimachinery.pkg.apis.meta.v1.Time schema

Description
Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.

Type
string

1.132. io.k8s.apimachinery.pkg.apis.meta.v1.WatchEvent schema

Description
Event represents a single event to a watched resource.

Type
object

Required
type
object

Schema

Property | Type | Description
object | RawExtension | Object is: * If Type is Added or Modified: the new state of the object. * If Type is Deleted: the state of the object immediately before deletion. * If Type is Error: *Status is recommended; other types may make sense depending on context.
type | string |
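A sketch of two events as they might arrive on a watch stream, rendered here in YAML for readability (the API serializes them as JSON); the object contents and message are invented. Note how an Error event carries a Status (section 1.130) as its payload:

type: ADDED
object:
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: example-config
    namespace: default
    resourceVersion: "246"

type: ERROR
object:
  apiVersion: v1
  kind: Status
  status: Failure
  reason: Expired
  code: 410
  message: "The resourceVersion for the provided watch is too old."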
1.133. io.k8s.apimachinery.pkg.runtime.RawExtension schema

Description
RawExtension is used to hold extensions in external versions. To use this, make a field which has RawExtension as its type in your external, versioned struct, and Object in your internal struct. You also need to register your various plugin types. So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (using pkg/conversion) into the internal struct. The runtime package's DefaultScheme has conversion functions installed which will unpack the JSON stored in RawExtension, turning it into the correct object type, and storing it in the Object. (TODO: In the case where the object is of an unknown type, a runtime.Unknown object will be created and stored.)

Type
object

1.134. io.k8s.apimachinery.pkg.util.intstr.IntOrString schema

Description
IntOrString is a type that can hold an int32 or a string. When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type. This allows you to have, for example, a JSON field that can accept a name or number.

Type
string
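A common place IntOrString appears is a Service port definition, where targetPort accepts either a number or a named port. Both hypothetical Services below are valid; all names and port values are invented:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080        # int32 form

apiVersion: v1
kind: Service
metadata:
  name: frontend-by-name
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: http        # string form, resolved against the pod's named port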
1.135. io.k8s.kube-aggregator.pkg.apis.apiregistration.v1.APIServiceList schema

Description
APIServiceList is a list of APIService objects.

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (APIService) | Items is the list of APIService
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

1.136. io.k8s.migration.v1alpha1.StorageStateList schema

Description
StorageStateList is a list of StorageState

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (StorageState) | List of storagestates. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.137. io.k8s.migration.v1alpha1.StorageVersionMigrationList schema

Description
StorageVersionMigrationList is a list of StorageVersionMigration

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (StorageVersionMigration) | List of storageversionmigrations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.138. io.k8s.storage.snapshot.v1.VolumeSnapshotClassList schema

Description
VolumeSnapshotClassList is a list of VolumeSnapshotClass

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (VolumeSnapshotClass) | List of volumesnapshotclasses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.139. io.k8s.storage.snapshot.v1.VolumeSnapshotContentList schema

Description
VolumeSnapshotContentList is a list of VolumeSnapshotContent

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (VolumeSnapshotContent) | List of volumesnapshotcontents. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
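All of the *List schemas in this chapter share the shape described above: apiVersion, kind, items, and a ListMeta metadata block. As a sketch, a VolumeSnapshotClassList might be returned like this (the class name and driver are invented):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClassList
metadata:
  resourceVersion: "1042"     # ListMeta; opaque to clients
items:
- apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshotClass
  metadata:
    name: csi-snapclass
  driver: example.csi.k8s.io
  deletionPolicy: Delete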
1.140. io.k8s.storage.snapshot.v1.VolumeSnapshotList schema

Description
VolumeSnapshotList is a list of VolumeSnapshot

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (VolumeSnapshot) | List of volumesnapshots. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.141. io.metal3.v1alpha1.BareMetalHostList schema

Description
BareMetalHostList is a list of BareMetalHost

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (BareMetalHost) | List of baremetalhosts. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.142. io.metal3.v1alpha1.BMCEventSubscriptionList schema

Description
BMCEventSubscriptionList is a list of BMCEventSubscription

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (BMCEventSubscription) | List of bmceventsubscriptions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.143. io.metal3.v1alpha1.FirmwareSchemaList schema

Description
FirmwareSchemaList is a list of FirmwareSchema

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (FirmwareSchema) | List of firmwareschemas. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.144. io.metal3.v1alpha1.HardwareDataList schema

Description
HardwareDataList is a list of HardwareData

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (HardwareData) | List of hardwaredata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.145. io.metal3.v1alpha1.HostFirmwareSettingsList schema

Description
HostFirmwareSettingsList is a list of HostFirmwareSettings

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (HostFirmwareSettings) | List of hostfirmwaresettings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.146. io.metal3.v1alpha1.PreprovisioningImageList schema

Description
PreprovisioningImageList is a list of PreprovisioningImage

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (PreprovisioningImage) | List of preprovisioningimages. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.147. io.metal3.v1alpha1.ProvisioningList schema

Description
ProvisioningList is a list of Provisioning

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Provisioning) | List of provisionings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.148. io.openshift.apiserver.v1.APIRequestCountList schema

Description
APIRequestCountList is a list of APIRequestCount

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (APIRequestCount) | List of apirequestcounts. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.149. io.openshift.authorization.v1.RoleBindingRestrictionList schema

Description
RoleBindingRestrictionList is a list of RoleBindingRestriction

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (RoleBindingRestriction) | List of rolebindingrestrictions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.150. io.openshift.autoscaling.v1.ClusterAutoscalerList schema

Description
ClusterAutoscalerList is a list of ClusterAutoscaler

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ClusterAutoscaler) | List of clusterautoscalers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.151. io.openshift.autoscaling.v1beta1.MachineAutoscalerList schema

Description
MachineAutoscalerList is a list of MachineAutoscaler

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (MachineAutoscaler) | List of machineautoscalers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.152. io.openshift.cloudcredential.v1.CredentialsRequestList schema

Description
CredentialsRequestList is a list of CredentialsRequest

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (CredentialsRequest) | List of credentialsrequests. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.153. io.openshift.config.v1.APIServerList schema

Description
APIServerList is a list of APIServer

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (APIServer) | List of apiservers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.154. io.openshift.config.v1.AuthenticationList schema

Description
AuthenticationList is a list of Authentication

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Authentication) | List of authentications. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.155. io.openshift.config.v1.BuildList schema

Description
BuildList is a list of Build

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Build) | List of builds. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.156. io.openshift.config.v1.ClusterOperatorList schema

Description
ClusterOperatorList is a list of ClusterOperator

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ClusterOperator) | List of clusteroperators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.157. io.openshift.config.v1.ClusterVersionList schema

Description
ClusterVersionList is a list of ClusterVersion

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ClusterVersion) | List of clusterversions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.158. io.openshift.config.v1.ConsoleList schema

Description
ConsoleList is a list of Console

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Console) | List of consoles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.159. io.openshift.config.v1.DNSList schema

Description
DNSList is a list of DNS

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (DNS) | List of dnses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.160. io.openshift.config.v1.FeatureGateList schema

Description
FeatureGateList is a list of FeatureGate

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (FeatureGate) | List of featuregates. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.161. io.openshift.config.v1.ImageContentPolicyList schema

Description
ImageContentPolicyList is a list of ImageContentPolicy

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ImageContentPolicy) | List of imagecontentpolicies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.162. io.openshift.config.v1.ImageList schema

Description
ImageList is a list of Image

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Image) | List of images. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.163. io.openshift.config.v1.InfrastructureList schema

Description
InfrastructureList is a list of Infrastructure

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Infrastructure) | List of infrastructures. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.164. io.openshift.config.v1.IngressList schema

Description
IngressList is a list of Ingress

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Ingress) | List of ingresses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.165. io.openshift.config.v1.NetworkList schema

Description
NetworkList is a list of Network

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Network) | List of networks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.166. io.openshift.config.v1.NodeList schema

Description
NodeList is a list of Node

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Node) | List of nodes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.167. io.openshift.config.v1.OAuthList schema

Description
OAuthList is a list of OAuth

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (OAuth) | List of oauths. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.168. io.openshift.config.v1.OperatorHubList schema

Description
OperatorHubList is a list of OperatorHub

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (OperatorHub) | List of operatorhubs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.169. io.openshift.config.v1.ProjectList schema

Description
ProjectList is a list of Project

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Project) | List of projects. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.170. io.openshift.config.v1.ProxyList schema

Description
ProxyList is a list of Proxy

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Proxy) | List of proxies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.171. io.openshift.config.v1.SchedulerList schema

Description
SchedulerList is a list of Scheduler

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (Scheduler) | List of schedulers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.172. io.openshift.console.v1.ConsoleCLIDownloadList schema

Description
ConsoleCLIDownloadList is a list of ConsoleCLIDownload

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ConsoleCLIDownload) | List of consoleclidownloads. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.173. io.openshift.console.v1.ConsoleExternalLogLinkList schema

Description
ConsoleExternalLogLinkList is a list of ConsoleExternalLogLink

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ConsoleExternalLogLink) | List of consoleexternalloglinks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.174. io.openshift.console.v1.ConsoleLinkList schema

Description
ConsoleLinkList is a list of ConsoleLink

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ConsoleLink) | List of consolelinks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

1.175. io.openshift.console.v1.ConsoleNotificationList schema

Description
ConsoleNotificationList is a list of ConsoleNotification

Type
object

Required
items

Schema

Property | Type | Description
apiVersion | string | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
items | array (ConsoleNotification) | List of consolenotifications. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
kind | string | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata | ListMeta | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
io.openshift.console.v1.ConsolePluginList schema Description ConsolePluginList is a list of ConsolePlugin Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsolePlugin) List of consoleplugins. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.177. io.openshift.console.v1.ConsoleQuickStartList schema Description ConsoleQuickStartList is a list of ConsoleQuickStart Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleQuickStart) List of consolequickstarts. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.178. io.openshift.console.v1.ConsoleYAMLSampleList schema Description ConsoleYAMLSampleList is a list of ConsoleYAMLSample Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleYAMLSample) List of consoleyamlsamples. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.179. 
io.openshift.helm.v1beta1.HelmChartRepositoryList schema Description HelmChartRepositoryList is a list of HelmChartRepository Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HelmChartRepository) List of helmchartrepositories. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.180. io.openshift.helm.v1beta1.ProjectHelmChartRepositoryList schema Description ProjectHelmChartRepositoryList is a list of ProjectHelmChartRepository Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ProjectHelmChartRepository) List of projecthelmchartrepositories. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.181. io.openshift.machine.v1.ControlPlaneMachineSetList schema Description ControlPlaneMachineSetList is a list of ControlPlaneMachineSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ControlPlaneMachineSet) List of controlplanemachinesets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.182. 
io.openshift.machine.v1beta1.MachineHealthCheckList schema Description MachineHealthCheckList is a list of MachineHealthCheck Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineHealthCheck) List of machinehealthchecks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.183. io.openshift.machine.v1beta1.MachineList schema Description MachineList is a list of Machine Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Machine) List of machines. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.184. io.openshift.machine.v1beta1.MachineSetList schema Description MachineSetList is a list of MachineSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineSet) List of machinesets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.185. io.openshift.machineconfiguration.v1.ContainerRuntimeConfigList schema Description ContainerRuntimeConfigList is a list of ContainerRuntimeConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ContainerRuntimeConfig) List of containerruntimeconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.186. io.openshift.machineconfiguration.v1.ControllerConfigList schema Description ControllerConfigList is a list of ControllerConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ControllerConfig) List of controllerconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.187. io.openshift.machineconfiguration.v1.KubeletConfigList schema Description KubeletConfigList is a list of KubeletConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeletConfig) List of kubeletconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.188. io.openshift.machineconfiguration.v1.MachineConfigList schema Description MachineConfigList is a list of MachineConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineConfig) List of machineconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.189. io.openshift.machineconfiguration.v1.MachineConfigPoolList schema Description MachineConfigPoolList is a list of MachineConfigPool Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineConfigPool) List of machineconfigpools. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.190. io.openshift.network.cloud.v1.CloudPrivateIPConfigList schema Description CloudPrivateIPConfigList is a list of CloudPrivateIPConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CloudPrivateIPConfig) List of cloudprivateipconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.191. io.openshift.operator.controlplane.v1alpha1.PodNetworkConnectivityCheckList schema Description PodNetworkConnectivityCheckList is a list of PodNetworkConnectivityCheck Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodNetworkConnectivityCheck) List of podnetworkconnectivitychecks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.192. io.openshift.operator.imageregistry.v1.ConfigList schema Description ConfigList is a list of Config Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Config) List of configs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.193. io.openshift.operator.imageregistry.v1.ImagePrunerList schema Description ImagePrunerList is a list of ImagePruner Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImagePruner) List of imagepruners. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.194. io.openshift.operator.ingress.v1.DNSRecordList schema Description DNSRecordList is a list of DNSRecord Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DNSRecord) List of dnsrecords. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.195. io.openshift.operator.network.v1.EgressRouterList schema Description EgressRouterList is a list of EgressRouter Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressRouter) List of egressrouters. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.196. io.openshift.operator.network.v1.OperatorPKIList schema Description OperatorPKIList is a list of OperatorPKI Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorPKI) List of operatorpkis. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.197. io-openshift-operator-samples-v1-ConfigList schema Description ConfigList is a list of Config Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Config) List of configs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
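As a practical illustration of these common fields, the list shape can be inspected from the command line. This is a minimal sketch, assuming the oc CLI and jq are available and that the Proxy resource is exposed under the plural name proxies:

# Fetch the Proxy list and print the shared top-level fields
oc get proxies -o json | jq '{apiVersion, kind, items: (.items | length), resourceVersion: .metadata.resourceVersion}'

The same command works for any of the list types below by substituting the resource name; only the contents of the items array differ.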
1.171. io.openshift.config.v1.SchedulerList schema: SchedulerList is a list of Scheduler; items is array (Scheduler), a list of schedulers.
1.172. io.openshift.console.v1.ConsoleCLIDownloadList schema: ConsoleCLIDownloadList is a list of ConsoleCLIDownload; items is array (ConsoleCLIDownload), a list of consoleclidownloads.
1.173. io.openshift.console.v1.ConsoleExternalLogLinkList schema: ConsoleExternalLogLinkList is a list of ConsoleExternalLogLink; items is array (ConsoleExternalLogLink), a list of consoleexternalloglinks.
1.174. io.openshift.console.v1.ConsoleLinkList schema: ConsoleLinkList is a list of ConsoleLink; items is array (ConsoleLink), a list of consolelinks.
1.175. io.openshift.console.v1.ConsoleNotificationList schema: ConsoleNotificationList is a list of ConsoleNotification; items is array (ConsoleNotification), a list of consolenotifications.
1.176. io.openshift.console.v1.ConsolePluginList schema: ConsolePluginList is a list of ConsolePlugin; items is array (ConsolePlugin), a list of consoleplugins.
1.177. io.openshift.console.v1.ConsoleQuickStartList schema: ConsoleQuickStartList is a list of ConsoleQuickStart; items is array (ConsoleQuickStart), a list of consolequickstarts.
1.178. io.openshift.console.v1.ConsoleYAMLSampleList schema: ConsoleYAMLSampleList is a list of ConsoleYAMLSample; items is array (ConsoleYAMLSample), a list of consoleyamlsamples.
1.179. io.openshift.helm.v1beta1.HelmChartRepositoryList schema: HelmChartRepositoryList is a list of HelmChartRepository; items is array (HelmChartRepository), a list of helmchartrepositories.
1.180. io.openshift.helm.v1beta1.ProjectHelmChartRepositoryList schema: ProjectHelmChartRepositoryList is a list of ProjectHelmChartRepository; items is array (ProjectHelmChartRepository), a list of projecthelmchartrepositories.
1.181. io.openshift.machine.v1.ControlPlaneMachineSetList schema: ControlPlaneMachineSetList is a list of ControlPlaneMachineSet; items is array (ControlPlaneMachineSet), a list of controlplanemachinesets.
1.182. io.openshift.machine.v1beta1.MachineHealthCheckList schema: MachineHealthCheckList is a list of MachineHealthCheck; items is array (MachineHealthCheck), a list of machinehealthchecks.
1.183. io.openshift.machine.v1beta1.MachineList schema: MachineList is a list of Machine; items is array (Machine), a list of machines.
1.184. io.openshift.machine.v1beta1.MachineSetList schema: MachineSetList is a list of MachineSet; items is array (MachineSet), a list of machinesets.
1.185. io.openshift.machineconfiguration.v1.ContainerRuntimeConfigList schema: ContainerRuntimeConfigList is a list of ContainerRuntimeConfig; items is array (ContainerRuntimeConfig), a list of containerruntimeconfigs.
1.186. io.openshift.machineconfiguration.v1.ControllerConfigList schema: ControllerConfigList is a list of ControllerConfig; items is array (ControllerConfig), a list of controllerconfigs.
1.187. io.openshift.machineconfiguration.v1.KubeletConfigList schema: KubeletConfigList is a list of KubeletConfig; items is array (KubeletConfig), a list of kubeletconfigs.
1.188. io.openshift.machineconfiguration.v1.MachineConfigList schema: MachineConfigList is a list of MachineConfig; items is array (MachineConfig), a list of machineconfigs.
1.189. io.openshift.machineconfiguration.v1.MachineConfigPoolList schema: MachineConfigPoolList is a list of MachineConfigPool; items is array (MachineConfigPool), a list of machineconfigpools.
1.190. io.openshift.network.cloud.v1.CloudPrivateIPConfigList schema: CloudPrivateIPConfigList is a list of CloudPrivateIPConfig; items is array (CloudPrivateIPConfig), a list of cloudprivateipconfigs.
1.191. io.openshift.operator.controlplane.v1alpha1.PodNetworkConnectivityCheckList schema: PodNetworkConnectivityCheckList is a list of PodNetworkConnectivityCheck; items is array (PodNetworkConnectivityCheck), a list of podnetworkconnectivitychecks.
1.192. io.openshift.operator.imageregistry.v1.ConfigList schema: ConfigList is a list of Config; items is array (Config), a list of configs.
1.193. io.openshift.operator.imageregistry.v1.ImagePrunerList schema: ImagePrunerList is a list of ImagePruner; items is array (ImagePruner), a list of imagepruners.
1.194. io.openshift.operator.ingress.v1.DNSRecordList schema: DNSRecordList is a list of DNSRecord; items is array (DNSRecord), a list of dnsrecords.
1.195. io.openshift.operator.network.v1.EgressRouterList schema: EgressRouterList is a list of EgressRouter; items is array (EgressRouter), a list of egressrouters.
1.196. io.openshift.operator.network.v1.OperatorPKIList schema: OperatorPKIList is a list of OperatorPKI; items is array (OperatorPKI), a list of operatorpkis.
1.197. io-openshift-operator-samples-v1-ConfigList schema: ConfigList is a list of Config; items is array (Config), a list of configs.
1.198. io.openshift.operator.v1.AuthenticationList schema: AuthenticationList is a list of Authentication; items is array (Authentication), a list of authentications.
1.199. io.openshift.operator.v1.CloudCredentialList schema: CloudCredentialList is a list of CloudCredential; items is array (CloudCredential), a list of cloudcredentials.
1.200. io.openshift.operator.v1.ClusterCSIDriverList schema: ClusterCSIDriverList is a list of ClusterCSIDriver; items is array (ClusterCSIDriver), a list of clustercsidrivers.
1.201. io.openshift.operator.v1.ConfigList schema: ConfigList is a list of Config; items is array (Config), a list of configs.
1.202. io.openshift.operator.v1.ConsoleList schema: ConsoleList is a list of Console; items is array (Console), a list of consoles.
1.203. io.openshift.operator.v1.CSISnapshotControllerList schema: CSISnapshotControllerList is a list of CSISnapshotController; items is array (CSISnapshotController), a list of csisnapshotcontrollers.
1.204. io.openshift.operator.v1.DNSList schema: DNSList is a list of DNS; items is array (DNS), a list of dnses.
1.205. io.openshift.operator.v1.EtcdList schema: EtcdList is a list of Etcd; items is array (Etcd), a list of etcds.
1.206. io.openshift.operator.v1.IngressControllerList schema: IngressControllerList is a list of IngressController; items is array (IngressController), a list of ingresscontrollers.
1.207. io.openshift.operator.v1.InsightsOperatorList schema: InsightsOperatorList is a list of InsightsOperator; items is array (InsightsOperator), a list of insightsoperators.
1.208. io.openshift.operator.v1.KubeAPIServerList schema: KubeAPIServerList is a list of KubeAPIServer; items is array (KubeAPIServer), a list of kubeapiservers.
1.209. io.openshift.operator.v1.KubeControllerManagerList schema: KubeControllerManagerList is a list of KubeControllerManager; items is array (KubeControllerManager), a list of kubecontrollermanagers.
1.210. io.openshift.operator.v1.KubeSchedulerList schema: KubeSchedulerList is a list of KubeScheduler; items is array (KubeScheduler), a list of kubeschedulers.
1.211. io.openshift.operator.v1.KubeStorageVersionMigratorList schema: KubeStorageVersionMigratorList is a list of KubeStorageVersionMigrator; items is array (KubeStorageVersionMigrator), a list of kubestorageversionmigrators.
1.212. io.openshift.operator.v1.NetworkList schema: NetworkList is a list of Network; items is array (Network), a list of networks.
1.213. io.openshift.operator.v1.OpenShiftAPIServerList schema: OpenShiftAPIServerList is a list of OpenShiftAPIServer; items is array (OpenShiftAPIServer), a list of openshiftapiservers.
1.214. io.openshift.operator.v1.OpenShiftControllerManagerList schema: OpenShiftControllerManagerList is a list of OpenShiftControllerManager; items is array (OpenShiftControllerManager), a list of openshiftcontrollermanagers.
1.215. io.openshift.operator.v1.ServiceCAList schema: ServiceCAList is a list of ServiceCA; items is array (ServiceCA), a list of servicecas.
1.216. io.openshift.operator.v1.StorageList schema: StorageList is a list of Storage; items is array (Storage), a list of storages.
1.217. io.openshift.operator.v1alpha1.ImageContentSourcePolicyList schema: ImageContentSourcePolicyList is a list of ImageContentSourcePolicy; items is array (ImageContentSourcePolicy), a list of imagecontentsourcepolicies.
1.218. io.openshift.performance.v2.PerformanceProfileList schema: PerformanceProfileList is a list of PerformanceProfile; items is array (PerformanceProfile), a list of performanceprofiles.
1.219. io.openshift.quota.v1.ClusterResourceQuotaList schema: ClusterResourceQuotaList is a list of ClusterResourceQuota; items is array (ClusterResourceQuota), a list of clusterresourcequotas.
1.220. io.openshift.security.v1.SecurityContextConstraintsList schema: SecurityContextConstraintsList is a list of SecurityContextConstraints; items is array (SecurityContextConstraints), a list of securitycontextconstraints.
1.221. io.openshift.tuned.v1.ProfileList schema: ProfileList is a list of Profile; items is array (Profile), a list of profiles.
1.222. io.openshift.tuned.v1.TunedList schema: TunedList is a list of Tuned; items is array (Tuned), a list of tuneds.
1.223. org.ovn.k8s.v1.EgressFirewallList schema: EgressFirewallList is a list of EgressFirewall; items is array (EgressFirewall), a list of egressfirewalls.
1.224. org.ovn.k8s.v1.EgressIPList schema: EgressIPList is a list of EgressIP; items is array (EgressIP), a list of egressips.
1.225. org.ovn.k8s.v1.EgressQoSList schema: EgressQoSList is a list of EgressQoS; items is array (EgressQoS), a list of egressqoses.
| [
"<quantity> ::= <signedNumber><suffix>",
"(Note that <suffix> may be empty, from the \"\" case in <decimalSI>.)",
"(International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)",
"(Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)",
"type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.Object `json:\"myPlugin\"` }",
"type PluginA struct { AOption string `json:\"aOption\"` }",
"type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.RawExtension `json:\"myPlugin\"` }",
"type PluginA struct { AOption string `json:\"aOption\"` }",
"{ \"kind\":\"MyAPIObject\", \"apiVersion\":\"v1\", \"myPlugin\": { \"kind\":\"PluginA\", \"aOption\":\"foo\", }, }"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/common_object_reference/api-object-reference |
Chapter 9. Deprecated functionality | Chapter 9. Deprecated functionality

This part provides an overview of functionality that has been deprecated in Red Hat Enterprise Linux 8.

Deprecated devices are fully supported, which means that they are tested and maintained, and their support status remains unchanged within Red Hat Enterprise Linux 8. However, these devices will likely not be supported in the next major version release, and are not recommended for new deployments on the current or future major versions of RHEL. For the most recent list of deprecated functionality within a particular major release, see the latest version of release documentation. For information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle.

A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from the product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced than the deprecated one, and provides further recommendations.

For information regarding functionality that is present in RHEL 7 but has been removed in RHEL 8, see Considerations in adopting RHEL 8. For information regarding functionality that is present in RHEL 8 but has been removed in RHEL 9, see Considerations in adopting RHEL 9.

9.1. Installer and image creation

Several Kickstart commands and options have been deprecated

Using the following commands and options in RHEL 8 Kickstart files will print a warning in the logs:
auth or authconfig
device
deviceprobe
dmraid
install
lilo
lilocheck
mouse
multipath
bootloader --upgrade
ignoredisk --interactive
partition --active
reboot --kexec
Where only specific options are listed, the base command and its other options are still available and not deprecated. For more details and related changes in Kickstart, see the Kickstart changes section of the Considerations in adopting RHEL 8 document. (BZ#1642765)

The --interactive option of the ignoredisk Kickstart command has been deprecated

Using the --interactive option in future releases of Red Hat Enterprise Linux will result in a fatal installation error. It is recommended that you modify your Kickstart file to remove the option. (BZ#1637872)

The Kickstart autostep command has been deprecated

The autostep command has been deprecated. The related section about this command has been removed from the RHEL 8 documentation. (BZ#1904251)

9.2. Software management

rpmbuild --sign is deprecated

The rpmbuild --sign command is deprecated since RHEL 8.1. Using this command in future releases of Red Hat Enterprise Linux can result in an error. It is recommended that you use the rpmsign command instead. (BZ#1688849)

9.3. Shells and command-line tools

The OpenEXR component has been deprecated

The OpenEXR component has been deprecated. Hence, support for the EXR image format has been dropped from the imagecodecs module. (BZ#1886310)

The dump utility from the dump package has been deprecated

The dump utility used for backup of file systems has been deprecated and will not be available in RHEL 9. In RHEL 9, Red Hat recommends using the tar, dd, or bacula backup utilities, based on the type of usage; these provide full and safe backups of ext2, ext3, and ext4 file systems. Note that the restore utility from the dump package remains available and supported in RHEL 9 and is available as the restore package. (BZ#1997366)
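As a minimal sketch of the tar-based replacement, with illustrative paths (adjust for your own file systems and retention scheme):

# Create a full backup of /home, preserving permissions, ACLs, and extended attributes
tar --acls --xattrs -cpzf /backup/home.tar.gz /home
# List the archive to verify it, then restore it to / when needed
tar -tzf /backup/home.tar.gz > /dev/null
tar --acls --xattrs -xpzf /backup/home.tar.gz -C /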
(BZ#1997366) The ABRT tool has been deprecated The Automatic Bug Reporting Tool (ABRT) for detecting and reporting application crashes has been deprecated in RHEL 8. As a replacement, use the systemd-coredump tool to log and store core dumps, which are automatically generated files after a program crashes. (BZ#2055826) The ReaR crontab has been deprecated The /etc/cron.d/rear crontab from the rear package has been deprecated in RHEL 8 and will not be available in RHEL 9. The crontab checks every night whether the disk layout has changed, and runs the rear mkrescue command if a change happened. If you require this functionality, after an upgrade to RHEL 9, configure periodic runs of ReaR manually. ( BZ#2083301 ) The hidepid=n mount option is not supported in RHEL 8 systemd The mount option hidepid=n , which controls who can access information in /proc/[pid] directories, is not compatible with systemd infrastructure provided in RHEL 8. In addition, using this option might cause certain services started by systemd to produce SELinux AVC denial messages and prevent other operations from completing. For more information, see the related Knowledgebase solution Is mounting /proc with "hidepid=2" recommended with RHEL7 and RHEL8? . ( BZ#2038929 ) The /usr/lib/udev/rename_device utility has been deprecated The udev helper utility /usr/lib/udev/rename_device for renaming network interfaces has been deprecated. ( BZ#1875485 ) 9.4. Security NSS SEED ciphers are deprecated The Mozilla Network Security Services ( NSS ) library will not support TLS cipher suites that use a SEED cipher in a future release. To ensure a smooth transition of deployments that rely on SEED ciphers when NSS removes support, Red Hat recommends enabling support for other cipher suites. Note that SEED ciphers are already disabled by default in RHEL. ( BZ#1817533 ) TLS 1.0 and TLS 1.1 are deprecated The TLS 1.0 and TLS 1.1 protocols are disabled in the DEFAULT system-wide cryptographic policy level. If your scenario, for example, a video conferencing application in the Firefox web browser, requires using the deprecated protocols, switch the system-wide cryptographic policy to the LEGACY level: For more information, see the Strong crypto defaults in RHEL 8 and deprecation of weak crypto algorithms Knowledgebase article on the Red Hat Customer Portal and the update-crypto-policies(8) man page. ( BZ#1660839 ) DSA is deprecated in RHEL 8 The Digital Signature Algorithm (DSA) is considered deprecated in Red Hat Enterprise Linux 8. Authentication mechanisms that depend on DSA keys do not work in the default configuration. Note that OpenSSH clients do not accept DSA host keys even in the LEGACY system-wide cryptographic policy level. (BZ#1646541) SSL2 Client Hello has been deprecated in NSS The Transport Layer Security ( TLS ) protocol version 1.2 and earlier allow starting a negotiation with a Client Hello message formatted in a way that is backward compatible with the Secure Sockets Layer ( SSL ) protocol version 2. Support for this feature in the Network Security Services ( NSS ) library has been deprecated and it is disabled by default. Applications that require support for this feature need to use the new SSL_ENABLE_V2_COMPATIBLE_HELLO API to enable it. Support for this feature may be removed completely in future releases of Red Hat Enterprise Linux 8. (BZ#1645153) TPM 1.2 is deprecated The Trusted Platform Module (TPM) secure cryptoprocessor standard was updated to version 2.0 in 2016.
TPM 2.0 provides many improvements over TPM 1.2, and it is not backward compatible with the previous version. TPM 1.2 is deprecated in RHEL 8, and it might be removed in the next major release. (BZ#1657927) crypto-policies derived properties are now deprecated With the introduction of scopes for crypto-policies directives in custom policies, the following derived properties have been deprecated: tls_cipher , ssh_cipher , ssh_group , ike_protocol , and sha1_in_dnssec . Additionally, the use of the protocol property without specifying a scope is now deprecated as well. See the crypto-policies(7) man page for recommended replacements. ( BZ#2011208 ) Runtime disabling SELinux using /etc/selinux/config is now deprecated Runtime disabling SELinux using the SELINUX=disabled option in the /etc/selinux/config file has been deprecated. In RHEL 9, when you disable SELinux only through /etc/selinux/config , the system starts with SELinux enabled but with no policy loaded. If your scenario really requires completely disabling SELinux, Red Hat recommends disabling SELinux by adding the selinux=0 parameter to the kernel command line as described in the Changing SELinux modes at boot time section of the Using SELinux title. ( BZ#1932222 ) The ipa SELinux module removed from selinux-policy The ipa SELinux module has been removed from the selinux-policy package because it is no longer maintained. The functionality is now included in the ipa-selinux subpackage. If your scenario requires the use of types or interfaces from the ipa module in a local SELinux policy, install the ipa-selinux package. (BZ#1461914) fapolicyd.rules is deprecated The /etc/fapolicyd/rules.d/ directory for files containing allow and deny execution rules replaces the /etc/fapolicyd/fapolicyd.rules file. The fagenrules script now merges all component rule files in this directory to the /etc/fapolicyd/compiled.rules file. Rules in /etc/fapolicyd/fapolicyd.trust are still processed by the fapolicyd framework but only for ensuring backward compatibility. ( BZ#2054741 ) 9.5. Networking Network scripts are deprecated in RHEL 8 Network scripts are deprecated in Red Hat Enterprise Linux 8 and they are no longer provided by default. The basic installation provides a new version of the ifup and ifdown scripts which call the NetworkManager service through the nmcli tool. In Red Hat Enterprise Linux 8, to run the ifup and the ifdown scripts, NetworkManager must be running. Note that custom commands in /sbin/ifup-local , ifdown-pre-local and ifdown-local scripts are not executed. If any of these scripts are required, the installation of the deprecated network scripts in the system is still possible with the following command: The ifup and ifdown scripts link to the installed legacy network scripts. Calling the legacy network scripts shows a warning about their deprecation. (BZ#1647725) The dropwatch tool is deprecated The dropwatch tool has been deprecated. The tool will not be supported in future releases, thus it is not recommended for new deployments. As a replacement for this package, Red Hat recommends using the perf command line tool. For more information on using the perf command line tool, see the Getting started with Perf section on the Red Hat customer portal or the perf man page. ( BZ#1929173 ) The cgdcbxd package is deprecated Control group data center bridging exchange daemon ( cgdcbxd ) is a service to monitor data center bridging (DCB) netlink events and manage the net_prio control group subsystem.
Starting with RHEL 8.5, the cgdcbxd package is deprecated and will be removed in the next major RHEL release. ( BZ#2006665 ) The xinetd service has been deprecated The xinetd service has been deprecated and will be removed in RHEL 9. As a replacement, use systemd . For further details, see How to convert xinetd service to systemd . (BZ#2009113) The WEP Wi-Fi connection method is deprecated The insecure wired equivalent privacy (WEP) Wi-Fi connection method is deprecated in RHEL 8.6 and will be removed in RHEL 9.0. For secure Wi-Fi connections, use the Wi-Fi Protected Access 3 (WPA3) or WPA2 connection methods. ( BZ#2029338 ) The unsupported xt_u32 module is now deprecated Using the unsupported xt_u32 module, users of iptables can match arbitrary 32 bits in the packet header or payload. In RHEL 8.6, the xt_u32 module is deprecated and will be removed in RHEL 9. If you use xt_u32 , migrate to the nftables packet filtering framework. For example, first change your firewall to use iptables with native matches to incrementally replace individual rules, and later use the iptables-translate and accompanying utilities to migrate to nftables . If no native match exists in nftables , use the raw payload matching feature of nftables . For details, see the raw payload expression section in the nft(8) man page. ( BZ#2061288 ) The term slaves is deprecated in the nmstate API Red Hat is committed to using conscious language. Therefore, the slaves term is deprecated in the Nmstate API. Use the term port when you use nmstatectl . (JIRA:RHELDOCS-17641) 9.6. Kernel Kernel live patching now covers all RHEL minor releases Since RHEL 8.1, kernel live patches have been provided for selected minor release streams of RHEL covered under the Extended Update Support (EUS) policy to remediate Critical and Important Common Vulnerabilities and Exposures (CVEs). To accommodate the maximum number of concurrently covered kernels and use cases, the support window for each live patch will be decreased from 12 to 6 months for every minor, major and zStream version of the kernel. It means that on the day a kernel live patch is released, it will cover every minor release and scheduled errata kernel delivered in the past 6 months. For example, 8.4.x will have a one-year support window, but 8.4.x+1 will have 6 months. For more information about this feature, see Applying patches with kernel live patching . For details about available kernel live patches, see Kernel Live Patch life cycles . ( BZ#1958250 ) Installing RHEL for Real Time 8 using diskless boot is now deprecated Diskless booting allows multiple systems to share a root file system through the network. While convenient, diskless boot is prone to introducing network latency in real-time workloads. With a future minor update of RHEL for Real Time 8, the diskless booting feature will no longer be supported. ( BZ#1748980 ) The Linux firewire sub-system and its associated user-space components are deprecated in RHEL 8 The firewire sub-system provides interfaces to use and maintain any resources on the IEEE 1394 bus. In RHEL 9, firewire will no longer be supported in the kernel package. Note that firewire contains several user-space components provided by the libavc1394 , libdc1394 , libraw1394 packages. These packages are subject to deprecation as well. (BZ#1871863) The rdma_rxe Soft-RoCE driver is deprecated Software Remote Direct Memory Access over Converged Ethernet (Soft-RoCE), also known as RXE, is a feature that emulates Remote Direct Memory Access (RDMA).
In RHEL 8, the Soft-RoCE feature is available as an unsupported Technology Preview. However, due to stability issues, this feature has been deprecated and will be removed in RHEL 9. (BZ#1878207) 9.7. Boot loader The kernelopts environment variable has been deprecated In RHEL 8, the kernel command-line parameters for systems using the GRUB2 bootloader were defined in the kernelopts environment variable. The variable was stored in the /boot/grub2/grubenv file for each kernel boot entry. However, storing the kernel command-line parameters using kernelopts was not robust. Therefore, with a future major update of RHEL, kernelopts will be removed and the kernel command-line parameters will be stored in the Boot Loader Specification (BLS) snippet instead. ( BZ#2060759 ) 9.8. File systems and storage VDO write modes other than async are deprecated VDO supports several write modes in RHEL 8: sync async async-unsafe auto Starting with RHEL 8.4, the following write modes are deprecated: sync Devices above the VDO layer cannot recognize if VDO is synchronous, and consequently, the devices cannot take advantage of the VDO sync mode. async-unsafe VDO added this write mode as a workaround for the reduced performance of async mode, which complies to Atomicity, Consistency, Isolation, and Durability (ACID). Red Hat does not recommend async-unsafe for most use cases and is not aware of any users who rely on it. auto This write mode only selects one of the other write modes. It is no longer necessary when VDO supports only a single write mode. These write modes will be removed in a future major RHEL release. The recommended VDO write mode is now async . For more information on VDO write modes, see Selecting a VDO write mode . (JIRA:RHELPLAN-70700) NFSv3 over UDP has been disabled The NFS server no longer opens or listens on a User Datagram Protocol (UDP) socket by default. This change affects only NFS version 3 because version 4 requires the Transmission Control Protocol (TCP). NFS over UDP is no longer supported in RHEL 8. (BZ#1592011) cramfs has been deprecated Due to lack of users, the cramfs kernel module is deprecated. squashfs is recommended as an alternative solution. (BZ#1794513) VDO manager has been deprecated The python-based VDO management software has been deprecated and will be removed from RHEL 9. In RHEL 9, it will be replaced by the LVM-VDO integration. Therefore, it is recommended to create VDO volumes using the lvcreate command. The existing volumes created using the VDO management software can be converted using the /usr/sbin/lvm_import_vdo script, provided by the lvm2 package. For more information on the LVM-VDO implementation, see Deduplicating and compressing logical volumes on RHEL . ( BZ#1949163 ) The elevator kernel command line parameter is deprecated The elevator kernel command line parameter was used in earlier RHEL releases to set the disk scheduler for all devices. In RHEL 8, the parameter is deprecated. The upstream Linux kernel has removed support for the elevator parameter, but it is still available in RHEL 8 for compatibility reasons. Note that the kernel selects a default disk scheduler based on the type of device. This is typically the optimal setting. If you require a different scheduler, Red Hat recommends that you use udev rules or the TuneD service to configure it. Match the selected devices and switch the scheduler only for those devices. For more information, see Setting the disk scheduler . 
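To illustrate the udev approach recommended above, the following sketch pins a scheduler for one device only; the sdb device name and the mq-deadline scheduler are assumptions chosen for the example:
# Create a udev rule that applies only to the chosen device
echo 'ACTION=="add|change", KERNEL=="sdb", ATTR{queue/scheduler}="mq-deadline"' > /etc/udev/rules.d/99-scheduler.rules
# Reload the udev rules and re-trigger device events so the rule takes effect
udevadm control --reload
udevadm trigger --type=devices --action=change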
(BZ#1665295) LVM mirror is deprecated The LVM mirror segment type is now deprecated. Support for mirror will be removed in a future major release of RHEL. Red Hat recommends that you use LVM RAID 1 devices with a segment type of raid1 instead of mirror . The raid1 segment type is the default RAID configuration type and replaces mirror as the recommended solution. To convert mirror devices to raid1 , see Converting a mirrored LVM device to a RAID1 logical volume . LVM mirror has several known issues. For details, see known issues in file systems and storage . (BZ#1827628) peripety is deprecated The peripety package is deprecated since RHEL 8.3. The Peripety storage event notification daemon parses system storage logs into structured storage events. It helps you investigate storage issues. ( BZ#1871953 ) 9.9. High availability and clusters pcs commands that support the clufter tool have been deprecated The pcs commands that support the clufter tool for analyzing cluster configuration formats have been deprecated. These commands now print a warning that the command has been deprecated and sections related to these commands have been removed from the pcs help display and the pcs(8) man page. The following commands have been deprecated: pcs config import-cman for importing CMAN / RHEL6 HA cluster configuration pcs config export for exporting cluster configuration to a list of pcs commands which recreate the same cluster (BZ#1851335) 9.10. Dynamic programming languages, web and database servers The mod_php module provided with PHP for use with the Apache HTTP Server has been deprecated The mod_php module provided with PHP for use with the Apache HTTP Server in RHEL 8 is available but not enabled in the default configuration. The module is no longer available in RHEL 9. Since RHEL 8, PHP scripts are run using the FastCGI Process Manager ( php-fpm ) by default. For more information, see Using PHP with the Apache HTTP Server . ( BZ#2225332 ) 9.11. Compilers and development tools libdwarf has been deprecated The libdwarf library has been deprecated in RHEL 8. The library will likely not be supported in future major releases. Instead, use the elfutils and libdw libraries for applications that wish to process ELF/DWARF files. Alternatives for the libdwarf-tools dwarfdump program are the binutils readelf program or the elfutils eu-readelf program, both used by passing the --debug-dump flag. ( BZ#1920624 ) The gdb.i686 packages are deprecated In RHEL 8.1, the 32-bit versions of the GNU Debugger (GDB), gdb.i686 , were shipped due to a dependency problem in another package. Because RHEL 8 does not support 32-bit hardware, the gdb.i686 packages are deprecated since RHEL 8.4. The 64-bit versions of GDB, gdb.x86_64 , are fully capable of debugging 32-bit applications. If you use gdb.i686 , note the following important issues: The gdb.i686 packages will no longer be updated. Users must install gdb.x86_64 instead. If you have gdb.i686 installed, installing gdb.x86_64 will cause dnf to report package gdb-8.2-14.el8.x86_64 obsoletes gdb < 8.2-14.el8 provided by gdb-8.2-12.el8.i686 . This is expected. Either uninstall gdb.i686 or pass dnf the --allowerasing option to remove gdb.i686 and install gdb.x86_64 . Users will no longer be able to install the gdb.i686 packages on 64-bit systems, that is, those with the libc.so.6()(64-bit) packages. (BZ#1853140) 9.12.
Identity Management openssh-ldap has been deprecated The openssh-ldap subpackage has been deprecated in Red Hat Enterprise Linux 8 and will be removed in RHEL 9. As the openssh-ldap subpackage is not maintained upstream, Red Hat recommends using SSSD and the sss_ssh_authorizedkeys helper, which integrate better with other IdM solutions and are more secure. By default, the SSSD ldap and ipa providers read the sshPublicKey LDAP attribute of the user object, if available. Note that you cannot use the default SSSD configuration for the ad provider or IdM trusted domains to retrieve SSH public keys from Active Directory (AD), since AD does not have a default LDAP attribute to store a public key. To allow the sss_ssh_authorizedkeys helper to get the key from SSSD, enable the ssh responder by adding ssh to the services option in the sssd.conf file. See the sssd.conf(5) man page for details. To allow sshd to use sss_ssh_authorizedkeys , add the AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys and AuthorizedKeysCommandUser nobody options to the /etc/ssh/sshd_config file as described by the sss_ssh_authorizedkeys(1) man page. ( BZ#1871025 ) DES and 3DES encryption types have been removed Due to security reasons, the Data Encryption Standard (DES) algorithm has been deprecated and disabled by default since RHEL 7. With the recent rebase of Kerberos packages, single-DES (DES) and triple-DES (3DES) encryption types have been removed from RHEL 8. If you have configured services or users to only use DES or 3DES encryption, you might experience service interruptions such as: Kerberos authentication errors unknown enctype encryption errors Key Distribution Centers (KDCs) with DES-encrypted Database Master Keys ( K/M ) fail to start Perform the following actions to prepare for the upgrade: Check if your KDC uses DES or 3DES encryption with the krb5check open source Python scripts. See krb5check on GitHub. If you are using DES or 3DES encryption with any Kerberos principals, re-key them with a supported encryption type, such as Advanced Encryption Standard (AES). For instructions on re-keying, see Retiring DES from MIT Kerberos Documentation. Test independence from DES and 3DES by temporarily setting the following Kerberos options before upgrading: In /var/kerberos/krb5kdc/kdc.conf on the KDC, set supported_enctypes and do not include des or des3 . For every host, in /etc/krb5.conf and any files in /etc/krb5.conf.d , set allow_weak_crypto to false . It is false by default. For every host, in /etc/krb5.conf and any files in /etc/krb5.conf.d , set permitted_enctypes , default_tgs_enctypes , and default_tkt_enctypes , and do not include des or des3 . If you do not experience any service interruptions with the test Kerberos settings from the previous step, remove them and upgrade. You do not need those settings after upgrading to the latest Kerberos packages. ( BZ#1877991 ) Standalone use of the ctdb service has been deprecated Since RHEL 8.4, customers are advised to use the ctdb clustered Samba service only when both of the following conditions apply: The ctdb service is managed as a pacemaker resource with the resource-agent ctdb . The ctdb service uses storage volumes that contain either a GlusterFS file system provided by the Red Hat Gluster Storage product or a GFS2 file system. The stand-alone use case of the ctdb service has been deprecated and will not be included in the next major release of Red Hat Enterprise Linux.
For further information on support policies for Samba, see the Knowledgebase article Support Policies for RHEL Resilient Storage - ctdb General Policies . (BZ#1916296) Running Samba as a PDC or BDC is deprecated The classic domain controller mode that enabled administrators to run Samba as an NT4-like primary domain controller (PDC) and backup domain controller (BDC) is deprecated. The code and settings to configure these modes will be removed in a future Samba release. As long as the Samba version in RHEL 8 provides the PDC and BDC modes, Red Hat supports these modes only in existing installations with Windows versions which support NT4 domains. Red Hat recommends not setting up a new Samba NT4 domain, because Microsoft operating systems later than Windows 7 and Windows Server 2008 R2 do not support NT4 domains. If you use the PDC to authenticate only Linux users, Red Hat suggests migrating to Red Hat Identity Management (IdM) that is included in RHEL subscriptions. However, you cannot join Windows systems to an IdM domain. Note that Red Hat continues supporting the PDC functionality IdM uses in the background. Red Hat does not support running Samba as an AD domain controller (DC). ( BZ#1926114 ) Indirect AD integration with IdM via WinSync has been deprecated WinSync is no longer actively developed in RHEL 8 due to several functional limitations: WinSync supports only one Active Directory (AD) domain. Password synchronization requires installing additional software on AD Domain Controllers. For a more robust solution with better resource and security separation, Red Hat recommends using a cross-forest trust for indirect integration with Active Directory. See the Indirect integration documentation. (JIRA:RHELPLAN-100400) The SSSD version of libwbclient has been removed The SSSD implementation of the libwbclient package was deprecated in RHEL 8.4. As it cannot be used with recent versions of Samba, the SSSD implementation of libwbclient has now been removed. ( BZ#1947671 ) The SMB1 protocol is deprecated in Samba Starting with Samba 4.11, the insecure Server Message Block version 1 (SMB1) protocol is deprecated and will be removed in a future release. To improve the security, by default, SMB1 is disabled in the Samba server and client utilities. Jira:RHELDOCS-16612 Limited support for FreeRADIUS In RHEL 8, the following external authentication modules are deprecated as part of the FreeRADIUS offering: The MySQL, PostgreSQL, SQlite, and unixODBC database connectors The Perl language module The REST API module Note The PAM authentication module and other authentication modules that are provided as part of the base package are not affected. You can find replacements for the deprecated modules in community-supported packages, for example in the Fedora project. In addition, the scope of support for the freeradius package will be limited to the following use cases in future RHEL releases: Using FreeRADIUS as an authentication provider with Identity Management (IdM) as the backend source of authentication. The authentication occurs through the krb5 and LDAP authentication packages or as PAM authentication in the main FreeRADIUS package. Using FreeRADIUS to provide a source-of-truth for authentication in IdM, through the Python 3 authentication package. 
In contrast to these deprecations, Red Hat will strengthen the support of the following external authentication modules with FreeRADIUS: Authentication based on krb5 and LDAP Python 3 authentication The focus on these integration options is in close alignment with the strategic direction of Red Hat IdM. Jira:RHELDOCS-17573 9.13. Desktop The libgnome-keyring library has been deprecated The libgnome-keyring library has been deprecated in favor of the libsecret library, as libgnome-keyring is not maintained upstream, and does not follow the necessary cryptographic policies for RHEL. The new libsecret library is the replacement that follows the necessary security standards. (BZ#1607766) 9.14. Graphics infrastructures AGP graphics cards are no longer supported Graphics cards using the Accelerated Graphics Port (AGP) bus are not supported in Red Hat Enterprise Linux 8. Use the graphics cards with PCI-Express bus as the recommended replacement. (BZ#1569610) Motif has been deprecated The Motif widget toolkit has been deprecated in RHEL, because development in the upstream Motif community is inactive. The following Motif packages have been deprecated, including their development and debugging variants: motif openmotif openmotif21 openmotif22 Additionally, the motif-static package has been removed. Red Hat recommends using the GTK toolkit as a replacement. GTK is more maintainable and provides new features compared to Motif. (JIRA:RHELPLAN-98983) 9.15. The web console The web console no longer supports incomplete translations The RHEL web console no longer provides translations for languages that have translations available for less than 50 % of the Console's translatable strings. If the browser requests translation to such a language, the user interface will be in English instead. ( BZ#1666722 ) The remotectl command is deprecated The remotectl command has been deprecated and will not be available in future releases of RHEL. You can use the cockpit-certificate-ensure command as a replacement. However, note that cockpit-certificate-ensure does not have feature parity with remotectl . It does not support bundled certificates and keychain files and requires them to be split out. (JIRA:RHELPLAN-147538) 9.16. Red Hat Enterprise Linux system roles The networking system role displays a deprecation warning when configuring teams on RHEL 9 nodes The network teaming capabilities have been deprecated in RHEL 9. As a result, using the networking RHEL system role on an RHEL 8 controller to configure a network team on RHEL 9 nodes shows a warning about its deprecation. ( BZ#2021685 ) Ansible Engine has been deprecated Previous versions of RHEL 8 provided access to an Ansible Engine repository, with a limited scope of support, to enable supported RHEL Automation use cases, such as RHEL system roles and Insights remediations. Ansible Engine has been deprecated, and Ansible Engine 2.9 will have no support after September 29, 2023. For more details on the supported use cases, see Scope of support for the Ansible Core package included in the RHEL 9 AppStream . Users must manually migrate their systems from Ansible Engine to Ansible Core. For that, follow these steps: Procedure Check if the system is running RHEL 8.6: Uninstall Ansible Engine 2.9: Disable the ansible-2-for-rhel-8-x86_64-rpms repository: Install the Ansible Core package from the RHEL 8 AppStream repository: For more details, see: Using Ansible in RHEL 8.6 and later .
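Taken together, the migration reduces to the following command sequence, which mirrors the commands listed for this chapter:
# Confirm the release, then replace Ansible Engine with Ansible Core
cat /etc/redhat-release
yum remove ansible
subscription-manager repos --disable ansible-2-for-rhel-8-x86_64-rpms
yum install ansible-core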
( BZ#2006081 ) The geoipupdate package has been deprecated The geoipupdate package requires a third-party subscription and it also downloads proprietary content. Therefore, the geoipupdate package has been deprecated, and will be removed in the next major RHEL version. (BZ#1874892) 9.17. Virtualization SPICE has been deprecated The SPICE remote display protocol has become deprecated. As a result, SPICE will remain supported in RHEL 8, but Red Hat recommends using alternate solutions for remote display streaming: For remote console access, use the VNC protocol. For advanced remote display functions, use third party tools such as RDP, HP RGS, or Mechdyne TGX. Note that the QXL graphics device, which is used by SPICE, has become deprecated as well. (BZ#1849563) virsh iface-* commands have become deprecated The virsh iface-* commands, such as virsh iface-start and virsh iface-destroy , are now deprecated, and will be removed in a future major version of RHEL. In addition, these commands frequently fail due to configuration dependencies. Therefore, it is recommended not to use virsh iface-* commands for configuring and managing host network connections. Instead, use the NetworkManager program and its related management applications, such as nmcli ; a short nmcli example appears at the end of this section. (BZ#1664592) virt-manager has been deprecated The Virtual Machine Manager application, also known as virt-manager , has been deprecated. The RHEL web console, also known as Cockpit , is intended to become its replacement in a subsequent release. It is, therefore, recommended that you use the web console for managing virtualization in a GUI. Note, however, that some features available in virt-manager may not be yet available in the RHEL web console. (JIRA:RHELPLAN-10304) Limited support for virtual machine snapshots Creating snapshots of virtual machines (VMs) is currently only supported for VMs not using the UEFI firmware. In addition, during the snapshot operation, the QEMU monitor may become blocked, which negatively impacts the hypervisor performance for certain workloads. Also note that the current mechanism of creating VM snapshots has been deprecated, and Red Hat does not recommend using VM snapshots in a production environment. ( BZ#1686057 ) The Cirrus VGA virtual GPU type has been deprecated With a future major update of Red Hat Enterprise Linux, the Cirrus VGA GPU device will no longer be supported in KVM virtual machines. Therefore, Red Hat recommends using the stdvga or virtio-vga devices instead of Cirrus VGA . (BZ#1651994) KVM on IBM POWER has been deprecated Using KVM virtualization on IBM POWER hardware has become deprecated. As a result, KVM on IBM POWER is still supported in RHEL 8, but will become unsupported in a future major release of RHEL. (JIRA:RHELPLAN-71200) SecureBoot image verification using SHA1-based signatures is deprecated Performing SecureBoot image verification using SHA1-based signatures on UEFI (PE/COFF) executables has become deprecated. Instead, Red Hat recommends using signatures based on the SHA2 algorithm, or later. (BZ#1935497) Using SPICE to attach smart card readers to virtual machines has been deprecated The SPICE remote display protocol has been deprecated in RHEL 8. Since the only recommended way to attach smart card readers to virtual machines (VMs) depends on the SPICE protocol, the usage of smart cards in VMs has also become deprecated in RHEL 8. In a future major version of RHEL, the functionality of attaching smart card readers to VMs will only be supported by third party remote visualization solutions.
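As the nmcli example referenced in the virsh iface-* note above, the following sketch manages a host bridge through NetworkManager; the bridge0 connection name is an assumption for illustration:
# List the connection profiles that NetworkManager knows about
nmcli connection show
# Bring a bridge profile up or down instead of using virsh iface-start or virsh iface-destroy
nmcli connection up bridge0
nmcli connection down bridge0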
( BZ#2059626 ) 9.18. Containers The Podman varlink-based API v1.0 has been removed The Podman varlink-based API v1.0 was deprecated in a release of RHEL 8. Podman v2.0 introduced a new Podman v2.0 RESTful API. With the release of Podman v3.0, the varlink-based API v1.0 has been completely removed. (JIRA:RHELPLAN-45858) container-tools:1.0 has been deprecated The container-tools:1.0 module has been deprecated and will no longer receive security updates. It is recommended to use a newer supported stable module stream, such as container-tools:2.0 or container-tools:3.0 . (JIRA:RHELPLAN-59825) The container-tools:2.0 module has been deprecated The container-tools:2.0 module has been deprecated and will no longer receive security updates. It is recommended to use a newer supported stable module stream, such as container-tools:3.0 . (JIRA:RHELPLAN-85066) 9.19. Deprecated packages This section lists packages that have been deprecated and will probably not be included in a future major release of Red Hat Enterprise Linux. For changes to packages between RHEL 7 and RHEL 8, see Changes to packages in the Considerations in adopting RHEL 8 document. The following packages have been deprecated and remain supported until the end of life of RHEL 8: 389-ds-base-legacy-tools abrt abrt-addon-ccpp abrt-addon-kerneloops abrt-addon-pstoreoops abrt-addon-vmcore abrt-addon-xorg abrt-cli abrt-console-notification abrt-dbus abrt-desktop abrt-gui abrt-gui-libs abrt-libs abrt-tui adobe-source-sans-pro-fonts adwaita-qt alsa-plugins-pulseaudio amanda amanda-client amanda-libs amanda-server ant-contrib antlr3 antlr32 aopalliance apache-commons-collections apache-commons-compress apache-commons-exec apache-commons-jxpath apache-commons-parent apache-ivy apache-parent apache-resource-bundles apache-sshd apiguardian aspnetcore-runtime-3.0 aspnetcore-runtime-3.1 aspnetcore-runtime-5.0 aspnetcore-targeting-pack-3.0 aspnetcore-targeting-pack-3.1 aspnetcore-targeting-pack-5.0 assertj-core authd auto autoconf213 autogen autogen-libopts awscli base64coder batik bea-stax bea-stax-api bind-export-devel bind-export-libs bind-libs-lite bind-pkcs11 bind-pkcs11-devel bind-pkcs11-libs bind-pkcs11-utils bind-sdb bind-sdb bind-sdb-chroot bluez-hid2hci boost-jam boost-signals bouncycastle bpg-algeti-fonts bpg-chveulebrivi-fonts bpg-classic-fonts bpg-courier-fonts bpg-courier-s-fonts bpg-dedaena-block-fonts bpg-dejavu-sans-fonts bpg-elite-fonts bpg-excelsior-caps-fonts bpg-excelsior-condenced-fonts bpg-excelsior-fonts bpg-fonts-common bpg-glaho-fonts bpg-gorda-fonts bpg-ingiri-fonts bpg-irubaqidze-fonts bpg-mikhail-stephan-fonts bpg-mrgvlovani-caps-fonts bpg-mrgvlovani-fonts bpg-nateli-caps-fonts bpg-nateli-condenced-fonts bpg-nateli-fonts bpg-nino-medium-cond-fonts bpg-nino-medium-fonts bpg-sans-fonts bpg-sans-medium-fonts bpg-sans-modern-fonts bpg-sans-regular-fonts bpg-serif-fonts bpg-serif-modern-fonts bpg-ucnobi-fonts brlapi-java bsh buildnumber-maven-plugin byaccj cal10n cbi-plugins cdparanoia cdparanoia-devel cdparanoia-libs cdrdao cmirror codehaus-parent codemodel compat-exiv2-026 compat-guile18 compat-hwloc1 compat-libpthread-nonshared compat-libtiff3 compat-openssl10 compat-sap-c++-11 compat-sap-c++-10 compat-sap-c++-9 createrepo_c-devel ctags ctags-etags custodia cyrus-imapd-vzic dbus-c++ dbus-c++-devel dbus-c++-glib dbxtool dhcp-libs dirsplit dleyna-connector-dbus dleyna-core dleyna-renderer dleyna-server dnssec-trigger dnssec-trigger-panel dotnet-apphost-pack-3.0 dotnet-apphost-pack-3.1 dotnet-apphost-pack-5.0 
dotnet-host-fxr-2.1 dotnet-host-fxr-2.1 dotnet-hostfxr-3.0 dotnet-hostfxr-3.1 dotnet-hostfxr-5.0 dotnet-runtime-2.1 dotnet-runtime-3.0 dotnet-runtime-3.1 dotnet-runtime-5.0 dotnet-sdk-2.1 dotnet-sdk-2.1.5xx dotnet-sdk-3.0 dotnet-sdk-3.1 dotnet-sdk-5.0 dotnet-targeting-pack-3.0 dotnet-targeting-pack-3.1 dotnet-targeting-pack-5.0 dotnet-templates-3.0 dotnet-templates-3.1 dotnet-templates-5.0 dotnet5.0-build-reference-packages dptfxtract drpm drpm-devel dump dvd+rw-tools dyninst-static eclipse-ecf eclipse-emf eclipse-license ed25519-java ee4j-parent elfutils-devel-static elfutils-libelf-devel-static enca enca-devel environment-modules-compat evince-browser-plugin exec-maven-plugin farstream02 felix-osgi-compendium felix-osgi-core felix-osgi-foundation felix-parent file-roller fipscheck fipscheck-devel fipscheck-lib firewire fonts-tweak-tool forge-parent freeradius-mysql freeradius-perl freeradius-postgresql freeradius-sqlite freeradius-unixODBC fuse-sshfs fusesource-pom future gamin gamin-devel gavl gcc-toolset-10 gcc-toolset-10-annobin gcc-toolset-10-binutils gcc-toolset-10-binutils-devel gcc-toolset-10-build gcc-toolset-10-dwz gcc-toolset-10-dyninst gcc-toolset-10-dyninst-devel gcc-toolset-10-elfutils gcc-toolset-10-elfutils-debuginfod-client gcc-toolset-10-elfutils-debuginfod-client-devel gcc-toolset-10-elfutils-devel gcc-toolset-10-elfutils-libelf gcc-toolset-10-elfutils-libelf-devel gcc-toolset-10-elfutils-libs gcc-toolset-10-gcc gcc-toolset-10-gcc-c++ gcc-toolset-10-gcc-gdb-plugin gcc-toolset-10-gcc-gfortran gcc-toolset-10-gdb gcc-toolset-10-gdb-doc gcc-toolset-10-gdb-gdbserver gcc-toolset-10-libasan-devel gcc-toolset-10-libatomic-devel gcc-toolset-10-libitm-devel gcc-toolset-10-liblsan-devel gcc-toolset-10-libquadmath-devel gcc-toolset-10-libstdc++-devel gcc-toolset-10-libstdc++-docs gcc-toolset-10-libtsan-devel gcc-toolset-10-libubsan-devel gcc-toolset-10-ltrace gcc-toolset-10-make gcc-toolset-10-make-devel gcc-toolset-10-perftools gcc-toolset-10-runtime gcc-toolset-10-strace gcc-toolset-10-systemtap gcc-toolset-10-systemtap-client gcc-toolset-10-systemtap-devel gcc-toolset-10-systemtap-initscript gcc-toolset-10-systemtap-runtime gcc-toolset-10-systemtap-sdt-devel gcc-toolset-10-systemtap-server gcc-toolset-10-toolchain gcc-toolset-10-valgrind gcc-toolset-10-valgrind-devel gcc-toolset-9 gcc-toolset-9-annobin gcc-toolset-9-build gcc-toolset-9-perftools gcc-toolset-9-runtime gcc-toolset-9-toolchain gcc-toolset-11-make-devel GConf2 GConf2-devel gegl genisoimage genwqe-tools genwqe-vpd genwqe-zlib genwqe-zlib-devel geoipupdate geronimo-annotation geronimo-jms geronimo-jpa geronimo-parent-poms gfbgraph gflags gflags-devel glassfish-annotation-api glassfish-el glassfish-fastinfoset glassfish-jaxb-core glassfish-jaxb-txw2 glassfish-jsp glassfish-jsp-api glassfish-legal glassfish-master-pom glassfish-servlet-api glew-devel glib2-fam glog glog-devel gmock gmock-devel gnome-abrt gnome-boxes gnome-menus-devel gnome-online-miners gnome-shell-extension-disable-screenshield gnome-shell-extension-horizontal-workspaces gnome-shell-extension-no-hot-corner gnome-shell-extension-window-grouper gnome-themes-standard gnu-free-fonts-common gnu-free-mono-fonts gnu-free-sans-fonts gnu-free-serif-fonts gnupg2-smime gnuplot gnuplot-common gobject-introspection-devel google-gson google-noto-sans-syriac-eastern-fonts google-noto-sans-syriac-estrangela-fonts google-noto-sans-syriac-western-fonts google-noto-sans-tibetan-fonts google-noto-sans-ui-fonts gphoto2 gsl-devel gssntlmssp gtest gtest-devel gtkmm24 
gtkmm24-devel gtkmm24-docs gtksourceview3 gtksourceview3-devel gtkspell gtkspell-devel gtkspell3 guile gutenprint-gimp gutenprint-libs-ui gvfs-afc gvfs-afp gvfs-archive hamcrest-core hawtjni hawtjni hawtjni-runtime highlight-gui hivex-devel hostname hplip-gui httpcomponents-project hwloc-plugins hyphen-fo hyphen-grc hyphen-hsb hyphen-ia hyphen-is hyphen-ku hyphen-mi hyphen-mn hyphen-sa hyphen-tk ibus-sayura icedax icu4j idm-console-framework iptables ipython isl isl-devel isorelax istack-commons-runtime istack-commons-tools iwl3945-firmware iwl4965-firmware iwl6000-firmware jacoco jaf jakarta-oro janino jansi-native jarjar java-1.8.0-ibm java-1.8.0-ibm-demo java-1.8.0-ibm-devel java-1.8.0-ibm-headless java-1.8.0-ibm-jdbc java-1.8.0-ibm-plugin java-1.8.0-ibm-src java-1.8.0-ibm-webstart java-1.8.0-openjdk-accessibility java-1.8.0-openjdk-accessibility-slowdebug java_cup java-atk-wrapper javacc javacc-maven-plugin javaewah javaparser javapoet javassist javassist-javadoc jaxen jboss-annotations-1.2-api jboss-interceptors-1.2-api jboss-logmanager jboss-parent jctools jdepend jdependency jdom jdom2 jetty jffi jflex jgit jline jnr-netdb jolokia-jvm-agent js-uglify jsch json_simple jss-javadoc jtidy junit5 jvnet-parent jzlib kernel-cross-headers ksc kurdit-unikurd-web-fonts kyotocabinet-libs ldapjdk-javadoc lensfun lensfun-devel lftp-scripts libaec libaec-devel libappindicator-gtk3 libappindicator-gtk3-devel libatomic-static libavc1394 libblocksruntime libcacard libcacard-devel libcgroup libcgroup-tools libchamplain libchamplain-devel libchamplain-gtk libcroco libcroco-devel libcxl libcxl-devel libdap libdap-devel libdazzle-devel libdbusmenu libdbusmenu-devel libdbusmenu-doc libdbusmenu-gtk3 libdbusmenu-gtk3-devel libdc1394 libdnet libdnet-devel libdv libdwarf libdwarf-devel libdwarf-static libdwarf-tools libeasyfc libeasyfc-gobject libepubgen-devel libertas-sd8686-firmware libertas-usb8388-firmware libertas-usb8388-olpc-firmware libgdither libGLEW libgovirt libguestfs-benchmarking libguestfs-devel libguestfs-gfs2 libguestfs-gobject libguestfs-gobject-devel libguestfs-java libguestfs-java-devel libguestfs-javadoc libguestfs-man-pages-ja libguestfs-man-pages-uk libguestfs-tools libguestfs-tools-c libhugetlbfs libhugetlbfs-devel libhugetlbfs-utils libIDL libIDL-devel libidn libiec61883 libindicator-gtk3 libindicator-gtk3-devel libiscsi-devel libjose-devel libkkc libkkc-common libkkc-data libldb-devel liblogging libluksmeta-devel libmalaga libmcpp libmemcached libmemcached-libs libmetalink libmodulemd1 libmongocrypt libmtp-devel libmusicbrainz5 libmusicbrainz5-devel libnbd-devel liboauth liboauth-devel libpfm-static libpng12 libpurple libpurple-devel libraw1394 libreport-plugin-mailx libreport-plugin-rhtsupport libreport-plugin-ureport libreport-rhel libreport-rhel-bugzilla librpmem librpmem-debug librpmem-devel libsass libsass-devel libselinux-python libsqlite3x libtalloc-devel libtar libtdb-devel libtevent-devel libtpms-devel libunwind libusal libvarlink libverto-libevent libvirt-admin libvirt-bash-completion libvirt-daemon-driver-storage-gluster libvirt-daemon-driver-storage-iscsi-direct libvirt-devel libvirt-docs libvirt-gconfig libvirt-gobject libvirt-lock-sanlock libvirt-wireshark libvmem libvmem-debug libvmem-devel libvmmalloc libvmmalloc-debug libvmmalloc-devel libvncserver libwinpr-devel libwmf libwmf-devel libwmf-lite libXNVCtrl libyami log4j12 log4j12-javadoc lohit-malayalam-fonts lohit-nepali-fonts lorax-composer lua-guestfs lucene mailman mailx make-devel malaga malaga-suomi-voikko 
marisa maven-antrun-plugin maven-assembly-plugin maven-clean-plugin maven-dependency-analyzer maven-dependency-plugin maven-doxia maven-doxia-sitetools maven-install-plugin maven-invoker maven-invoker-plugin maven-parent maven-plugins-pom maven-reporting-api maven-reporting-impl maven-resolver-api maven-resolver-connector-basic maven-resolver-impl maven-resolver-spi maven-resolver-transport-wagon maven-resolver-util maven-scm maven-script-interpreter maven-shade-plugin maven-shared maven-verifier maven-wagon-file maven-wagon-http maven-wagon-http-shared maven-wagon-provider-api maven2 meanwhile mercurial mercurial-hgk metis metis-devel mingw32-bzip2 mingw32-bzip2-static mingw32-cairo mingw32-expat mingw32-fontconfig mingw32-freetype mingw32-freetype-static mingw32-gstreamer1 mingw32-harfbuzz mingw32-harfbuzz-static mingw32-icu mingw32-libjpeg-turbo mingw32-libjpeg-turbo-static mingw32-libpng mingw32-libpng-static mingw32-libtiff mingw32-libtiff-static mingw32-openssl mingw32-readline mingw32-sqlite mingw32-sqlite-static mingw64-adwaita-icon-theme mingw64-bzip2 mingw64-bzip2-static mingw64-cairo mingw64-expat mingw64-fontconfig mingw64-freetype mingw64-freetype-static mingw64-gstreamer1 mingw64-harfbuzz mingw64-harfbuzz-static mingw64-icu mingw64-libjpeg-turbo mingw64-libjpeg-turbo-static mingw64-libpng mingw64-libpng-static mingw64-libtiff mingw64-libtiff-static mingw64-nettle mingw64-openssl mingw64-readline mingw64-sqlite mingw64-sqlite-static modello mojo-parent mongo-c-driver mousetweaks mozjs52 mozjs52-devel mozjs60 mozjs60-devel mozvoikko msv-javadoc msv-manual munge-maven-plugin mythes-mi mythes-ne nafees-web-naskh-fonts nbd nbdkit-devel nbdkit-example-plugins nbdkit-gzip-plugin nbdkit-plugin-python-common nbdkit-plugin-vddk ncompress ncurses-compat-libs net-tools netcf netcf-devel netcf-libs network-scripts network-scripts-ppp nkf nss_nis nss-pam-ldapd objectweb-asm objectweb-asm-javadoc objectweb-pom ocaml-bisect-ppx ocaml-camlp4 ocaml-camlp4-devel ocaml-lwt ocaml-mmap ocaml-ocplib-endian ocaml-ounit ocaml-result ocaml-seq opencryptoki-tpmtok opencv-contrib opencv-core opencv-devel openhpi openhpi-libs OpenIPMI-perl openssh-cavs openssh-ldap openssl-ibmpkcs11 opentest4j os-maven-plugin pakchois pandoc paps-libs paranamer parfait parfait-examples parfait-javadoc pcp-parfait-agent pcp-pmda-rpm pcp-pmda-vmware pcsc-lite-doc peripety perl-B-Debug perl-B-Lint perl-Class-Factory-Util perl-Class-ISA perl-DateTime-Format-HTTP perl-DateTime-Format-Mail perl-File-CheckTree perl-homedir perl-libxml-perl perl-Locale-Codes perl-Mozilla-LDAP perl-NKF perl-Object-HashBase-tools perl-Package-DeprecationManager perl-Pod-LaTeX perl-Pod-Plainer perl-prefork perl-String-CRC32 perl-SUPER perl-Sys-Virt perl-tests perl-YAML-Syck phodav php-recode php-xmlrpc pidgin pidgin-devel pidgin-sipe pinentry-emacs pinentry-gtk pipewire0.2-devel pipewire0.2-libs platform-python-coverage plexus-ant-factory plexus-bsh-factory plexus-cli plexus-component-api plexus-component-factories-pom plexus-components-pom plexus-i18n plexus-interactivity plexus-pom plexus-velocity plymouth-plugin-throbgress powermock prometheus-jmx-exporter prometheus-jmx-exporter-openjdk11 ptscotch-mpich ptscotch-mpich-devel ptscotch-mpich-devel-parmetis ptscotch-openmpi ptscotch-openmpi-devel purple-sipe pygobject2-doc pygtk2 pygtk2-codegen pygtk2-devel pygtk2-doc python-nose-docs python-nss-doc python-podman-api python-psycopg2-doc python-pymongo-doc python-redis python-schedutils python-slip python-sqlalchemy-doc python-varlink 
python-virtualenv-doc python2-backports python2-backports-ssl_match_hostname python2-bson python2-coverage python2-docs python2-docs-info python2-funcsigs python2-ipaddress python2-mock python2-nose python2-numpy-doc python2-psycopg2-debug python2-psycopg2-tests python2-pymongo python2-pymongo-gridfs python2-pytest-mock python2-sqlalchemy python2-tools python2-virtualenv python3-bson python3-click python3-coverage python3-cpio python3-custodia python3-docs python3-flask python3-gevent python3-gobject-base python3-hivex python3-html5lib python3-hypothesis python3-ipatests python3-itsdangerous python3-jwt python3-libguestfs python3-mock python3-networkx-core python3-nose python3-nss python3-openipmi python3-pillow python3-ptyprocess python3-pydbus python3-pymongo python3-pymongo-gridfs python3-pyOpenSSL python3-pytoml python3-reportlab python3-schedutils python3-scons python3-semantic_version python3-slip python3-slip-dbus python3-sqlalchemy python3-syspurpose python3-virtualenv python3-webencodings python3-werkzeug python38-asn1crypto python38-numpy-doc python38-psycopg2-doc python38-psycopg2-tests python39-numpy-doc python39-psycopg2-doc python39-psycopg2-tests qemu-kvm-block-gluster qemu-kvm-block-iscsi qemu-kvm-block-ssh qemu-kvm-hw-usbredir qemu-kvm-tests qpdf qpdf-doc qpid-proton qrencode qrencode-devel qrencode-libs qt5-qtcanvas3d qt5-qtcanvas3d-examples rarian rarian-compat re2c recode redhat-menus redhat-support-lib-python redhat-support-tool reflections regexp relaxngDatatype rhsm-gtk rpm-plugin-prioreset rpmemd rsyslog-udpspoof ruby-hivex ruby-libguestfs rubygem-abrt rubygem-abrt-doc rubygem-bson rubygem-bson-doc rubygem-mongo rubygem-mongo-doc s390utils-cmsfs samba-pidl samba-test samba-test-libs samyak-devanagari-fonts samyak-fonts-common samyak-gujarati-fonts samyak-malayalam-fonts samyak-odia-fonts samyak-tamil-fonts sane-frontends sanlk-reset scala scotch scotch-devel SDL_sound selinux-policy-minimum sendmail sgabios sgabios-bin shrinkwrap sisu-inject sisu-mojos sisu-plexus skkdic SLOF smc-anjalioldlipi-fonts smc-dyuthi-fonts smc-fonts-common smc-kalyani-fonts smc-raghumalayalam-fonts smc-suruma-fonts softhsm-devel sonatype-oss-parent sonatype-plugins-parent sos-collector sparsehash-devel spax spec-version-maven-plugin spice spice-client-win-x64 spice-client-win-x86 spice-glib spice-glib-devel spice-gtk spice-gtk-tools spice-gtk3 spice-gtk3-devel spice-gtk3-vala spice-parent spice-protocol spice-qxl-wddm-dod spice-server spice-server-devel spice-qxl-xddm spice-server spice-streaming-agent spice-vdagent-win-x64 spice-vdagent-win-x86 sssd-libwbclient star stax-ex stax2-api stringtemplate stringtemplate4 subscription-manager-initial-setup-addon subscription-manager-migration subscription-manager-migration-data subversion-javahl SuperLU SuperLU-devel supermin-devel swig swig-doc swig-gdb swtpm-devel swtpm-tools-pkcs11 system-storage-manager tcl-brlapi testng tibetan-machine-uni-fonts timedatex tpm-quote-tools tpm-tools tpm-tools-pkcs11 treelayout trousers trousers-lib tuned-profiles-compat tuned-profiles-nfv-host-bin tuned-utils-systemtap tycho uglify-js unbound-devel univocity-output-tester univocity-parsers usbguard-notifier usbredir-devel utf8cpp uthash velocity vinagre vino virt-dib virt-p2v-maker vm-dump-metrics-devel weld-parent wodim woodstox-core wqy-microhei-fonts wqy-unibit-fonts xdelta xmlgraphics-commons xmlstreambuffer xinetd xorg-x11-apps xorg-x11-drv-qxl xorg-x11-server-Xspice xpp3 xsane-gimp xsom xz-java xz-java-javadoc yajl-devel yp-tools ypbind ypserv 9.20. 
Deprecated and unmaintained devices This section lists devices (drivers, adapters) that continue to be supported until the end of life of RHEL 8 but will likely not be supported in future major releases of this product and are not recommended for new deployments. Support for devices other than those listed remains unchanged. These are deprecated devices. Unmaintained devices are available but are no longer being tested or updated on a routine basis in RHEL 8. Red Hat may fix serious bugs, including security bugs, at its discretion. These devices should no longer be used in production, and it is likely they will be disabled in the next major release. These are unmaintained devices. PCI device IDs are in the format of vendor:device:subvendor:subdevice . If no device ID is listed, all devices associated with the corresponding driver have been deprecated. To check the PCI IDs of the hardware on your system, run the lspci -nn command. Table 9.1. Deprecated devices
Device ID Driver Device name
bnx2 QLogic BCM5706/5708/5709/5716 Driver
hpsa Hewlett-Packard Company: Smart Array Controllers
0x10df:0x0724 lpfc Emulex Corporation: OneConnect FCoE Initiator (Skyhawk)
0x10df:0xe200 lpfc Emulex Corporation: LPe15000/LPe16000 Series 8Gb/16Gb Fibre Channel Adapter
0x10df:0xf011 lpfc Emulex Corporation: Saturn: LightPulse Fibre Channel Host Adapter
0x10df:0xf015 lpfc Emulex Corporation: Saturn: LightPulse Fibre Channel Host Adapter
0x10df:0xf100 lpfc Emulex Corporation: LPe12000 Series 8Gb Fibre Channel Adapter
0x10df:0xfc40 lpfc Emulex Corporation: Saturn-X: LightPulse Fibre Channel Host Adapter
0x10df:0xe220 be2net Emulex Corporation: OneConnect NIC (Lancer)
0x1000:0x005b megaraid_sas Broadcom / LSI: MegaRAID SAS 2208 [Thunderbolt]
0x1000:0x006E mpt3sas Broadcom / LSI: SAS2308 PCI-Express Fusion-MPT SAS-2
0x1000:0x0080 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2
0x1000:0x0081 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2
0x1000:0x0082 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2
0x1000:0x0083 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2
0x1000:0x0084 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2
0x1000:0x0085 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2
0x1000:0x0086 mpt3sas Broadcom / LSI: SAS2308 PCI-Express Fusion-MPT SAS-2
0x1000:0x0087 mpt3sas Broadcom / LSI: SAS2308 PCI-Express Fusion-MPT SAS-2
myri10ge Myricom 10G driver (10GbE)
netxen_nic QLogic/NetXen (1/10) GbE Intelligent Ethernet Driver
0x1077:0x2031 qla2xxx QLogic Corp.: ISP8324-based 16Gb Fibre Channel to PCI Express Adapter
0x1077:0x2532 qla2xxx QLogic Corp.: ISP2532-based 8Gb Fibre Channel to PCI Express HBA
0x1077:0x8031 qla2xxx QLogic Corp.: 8300 Series 10GbE Converged Network Adapter (FCoE)
qla3xxx QLogic ISP3XXX Network Driver v2.03.00-k5
0x1924:0x0803 sfc Solarflare Communications: SFC9020 10G Ethernet Controller
0x1924:0x0813 sfc Solarflare Communications: SFL9021 10GBASE-T Ethernet Controller
Soft-RoCE (rdma_rxe)
HNS-RoCE HNS GE/10GE/25GE/50GE/100GE RDMA Network Controller
liquidio Cavium LiquidIO Intelligent Server Adapter Driver
liquidio_vf Cavium LiquidIO Intelligent Server Adapter Virtual Function Driver
Table 9.2.
Unmaintained devices
Device ID Driver Device name
e1000 Intel(R) PRO/1000 Network Driver
mptbase Fusion MPT SAS Host driver
mptsas Fusion MPT SAS Host driver
mptscsih Fusion MPT SCSI Host driver
mptspi Fusion MPT SAS Host driver
0x1000:0x0071 [a] megaraid_sas Broadcom / LSI: MR SAS HBA 2004
0x1000:0x0073 [a] megaraid_sas Broadcom / LSI: MegaRAID SAS 2008 [Falcon]
0x1000:0x0079 [a] megaraid_sas Broadcom / LSI: MegaRAID SAS 2108 [Liberator]
nvmet_tcp NVMe/TCP target driver
[a] Disabled in RHEL 8.0, re-enabled in RHEL 8.4 due to customer requests. | [
"update-crypto-policies --set LEGACY",
"~]# yum install network-scripts",
"cat /etc/redhat-release",
"yum remove ansible",
"subscription-manager repos --disable ansible-2-for-rhel-8-x86_64-rpms",
"yum install ansible-core"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.6_release_notes/deprecated_functionality |
Chapter 18. System and Subscription Management | Chapter 18. System and Subscription Management New payload_gpgcheck option added to yum With this update, the new configuration option payload_gpgcheck has been added to the yum utility. This option enables a GNU Privacy Guard (GPG) signature check on the payload sections of packages, thus enhancing the security and integrity when installing packages. Previously, when the gpgcheck option was enabled, yum only performed a GPG signature check on headers. Consequently, if the payload data were tampered with or corrupted, an RPM unpacking error occurred, and the package was left in a partly installed state. This might have put the operating system into an inconsistent and vulnerable state. You can use the new payload_gpgcheck option in conjunction with the gpgcheck or localpkg_gpgcheck options to prevent this problem. As a result, when payload_gpgcheck is enabled, yum performs a GPG signature check on the payload and aborts the transaction if it is not verified. Using payload_gpgcheck is equivalent to manually running rpm -K on downloaded packages. (BZ# 1343690 ) A no-proxy configuration is available for virt-who With this update, the virt-who service can be set to ignore proxy network settings. This enables virt-who to work properly in environments that use a proxy connection with one-way communication. To set up this functionality, add the NO_PROXY environment variable to the /etc/sysconfig/virt-who file. Alternatively, you can add the no_proxy variable to the [server] section of the /etc/rhsm/rhsm.conf file. Note that the NO_PROXY setting does not work when synchronizing the hypervisor using Red Hat Satellite 5. (BZ#1299643) virt-who respects independent interval settings With this update, the virt-who command reports each interval on all sources that have updates. In addition, if virt-who is configured to send updates to more than one destination, for example to a Red Hat Satellite instance and the Red Hat Subscription Management (RHSM), the interval for each is maintained separately. This means that all updates can be sent to each configured destination, regardless of the state of communication with other destinations. ( BZ#1436811 ) Password options added to virt-who-password With this update, the -p and --password options have been added to the virt-who-password utility. This enables the utility to be used in scripts. ( BZ#1426058 ) Regular expressions and wildcards can be used in some virt-who configuration parameters With this update, regular expressions and wildcards can be used in the filter_hosts and exclude_hosts configuration parameters. This enables users of virt-who to maintain a list of hosts to report on with much more ease. By using regular expressions and wildcards to specify which hosts to report on or exclude, the hosts list can be much more concise. ( BZ#1405967 ) virt-who configuration files are easier to manage The virt-who service now only uses configuration files in the /etc/virt-who.d/ directory that end with the .conf extension. This enables easier management of virt-who configuration files, for example for testing or backup purposes. ( BZ#1369107 ) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/new_features_system_and_subscription_management
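A brief sketch of enabling the payload_gpgcheck option described in the chapter above; the package path in the manual check is illustrative:
# Enable payload signature checking for all yum operations
echo "payload_gpgcheck=1" >> /etc/yum.conf
# The equivalent manual verification of a downloaded package
rpm -K /tmp/example-package.rpm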
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/proc_providing-feedback-on-red-hat-documentation |
4.136. libhbaapi 4.136.1. RHBA-2011:1605 - libhbaapi bug fix and enhancement update An updated libhbaapi package that fixes multiple bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The Host Bus Adapter API is a C-level project to manage Fibre Channel Host Bus Adapters. The package has been upgraded to upstream version 2.2, which provides a number of bug fixes and enhancements over the previous version. (BZ# 719585 ) Users are advised to upgrade to this updated libhbaapi package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libhbaapi
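A quick shell sketch for applying this update and confirming the result; the package name comes from the erratum, while the exact version-release string on a given system may differ:
yum update libhbaapi
rpm -q libhbaapi   # expect a 2.2-based version after the update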
Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director | Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director Red Hat OpenStack Platform 17.1 Configure director to deploy and use a Red Hat Ceph Storage cluster OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/index |
1.2. Installing Pacemaker configuration tools | 1.2. Installing Pacemaker configuration tools You can use the following yum install command to install the Red Hat High Availability Add-On software packages along with all available fence agents from the High Availability channel. Alternately, you can install the Red Hat High Availability Add-On software packages along with only the fence agent that you require with the following command. The following command displays a listing of the available fence agents. The lvm2-cluster and gfs2-utils packages are part of ResilientStorage channel. You can install them, as needed, with the following command. Warning After you install the Red Hat High Availability Add-On packages, you should ensure that your software update preferences are set so that nothing is installed automatically. Installation on a running cluster can cause unexpected behaviors. | [
"yum install pcs pacemaker fence-agents-all",
"yum install pcs pacemaker fence-agents- model",
"rpm -q -a | grep fence fence-agents-rhevm-4.0.2-3.el7.x86_64 fence-agents-ilo-mp-4.0.2-3.el7.x86_64 fence-agents-ipmilan-4.0.2-3.el7.x86_64",
"yum install lvm2-cluster gfs2-utils"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-installation-haar |
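A short shell sketch of the installation flow described in this section. The ipmilan agent is used here only as an example of a specific fence agent model, and the final yum-cron step is one illustrative way to honor the warning about automatic updates, assuming yum-cron is what drives them on your system:
# Install the cluster packages with a single fence agent (ipmilan as an example model).
yum install pcs pacemaker fence-agents-ipmilan

# Confirm which fence agents are now installed.
rpm -q -a | grep fence

# Pull in the ResilientStorage packages if clustered LVM or GFS2 is needed.
yum install lvm2-cluster gfs2-utils

# Per the warning above, avoid unattended updates on cluster nodes; for example,
# if yum-cron is installed, stop and disable it.
systemctl stop yum-cron
systemctl disable yum-cron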
Chapter 2. How to set up cost management | Chapter 2. How to set up cost management To get started with cost management, complete the following four steps, plus any applicable substeps: Access and log in to cost management. Connect and view your cost data: Cost management can analyze cost data from on-premise instances of OpenShift or cloud-based instances of OpenShift. If your organization has an on-premise instance of OpenShift, complete the following additional steps: Install the Metrics Operator. Set up OpenShift tags. Create a Red Hat OpenShift cost model. Cost management also supports AWS, Google Cloud, Oracle Cloud, and Microsoft Azure. To set up cost management for OpenShift that is running on a cloud provider, complete the following additional steps: Install the Metrics Operator. Add an integration for your cloud provider. Set up tags. Create a cloud cost model. Finally, finish getting set up for both on-premise and cloud with the following steps: Control your permissions Analyze your results 2.1. Sign up Cost management is part of the Red Hat Insights portfolio of services. The Red Hat Insights suite of advanced analytical tools helps you to identify and prioritize impacts on your operations, security, and business. You can access cost management in the Hybrid Cloud Console . To get started, click OpenShift Cost Management . After you sign up, configure a user with Cloud Administrator access that can add cloud or OpenShift integrations to your cost management. For more information, see Configuring cloud integrations for Red Hat services . 2.2. Connect and view your cost data To begin analyzing your cost data, you need to enter information about your costs. The steps you will take depend on if your organization set up an on-premise cluster with OpenShift, or integrated with a cloud provider. 2.2.1. Option 1: On-premise To get started with an on-premise cluster, complete the following steps: 2.2.1.1. Install the Metrics Operator Red Hat(R) OpenShift(R) Operators automate the creation, configuration, and management of instances of Kubernetes-native applications. Your OpenShift cluster should already be set up, but you additionally need to set up the Metrics Operator. To install the Metrics Operator, follow the instructions in Installing a cost operator . 2.2.1.2. Setting up OpenShift tags Tags, also called labels, are strings of custom metadata that you assign to resources. You can use tags to differentiate and allocate costs between various parts of your environment. To learn about the different use cases for tags and how to set them up, see Managing cost data using tagging . 2.2.1.3. Create a Red Hat OpenShift cost model Finally, you must add a cost model to accurately analyze your costs. A cost model is a framework that uses raw costs and metrics to define calculations for your costs. You can record, categorize, and distribute the costs that the cost model generates to specific customers, business units, or projects. To learn how to set up a cost model, see Using cost models . 2.2.2. Option 2: Cloud Cost management supports AWS, Google Cloud, Oracle Cloud, and Microsoft Azure. Unlike an on-premise cluster, you need to set up an integration to connect to your cloud provider. To integrate cost management with your cloud provider, complete the following steps: 2.2.2.1. Install the Metrics Operator Red Hat(R) OpenShift(R) Operators automate the creation, configuration, and management of instances of Kubernetes-native applications. 
Your OpenShift cluster should already be set up, but you additionally need to set up the Metrics Operator. To install the Metrics Operator, follow the instructions in Installing a cost operator . 2.2.2.2. Add an integration for your cloud provider To enable cost management to monitor your costs with a cloud provider such as AWS, Google, Oracle, or Azure, you need to set up an integration. An integration is a provider account that cost management connects to and monitors. The process to set up an integration for each provider varies. To learn how to add your specific integration to cost management, see the following guides: Integrating OpenShift Container Platform data into cost management Integrating Amazon Web Services (AWS) data into cost management Integrating Google Cloud data into cost management Integrating Microsoft Azure data into cost management Integrating Oracle Cloud data into cost management 2.2.2.3. Setting up OpenShift tags Tags, also called labels, are strings of custom metadata that you assign to resources. You can use tags to differentiate and allocate costs between various parts of your environment. To learn about the different use cases for tags and how to set them up, see Managing cost data using tagging . 2.2.2.4. Create a cloud cost model Finally, depending on your cloud provider, you need to add either an AWS, Google, or Azure cost model to accurately analyze your costs. A cost model is a framework that uses raw costs and metrics to define calculations for your costs. You can record, categorize, and distribute the costs that the cost model generates to specific customers, business units, or projects. To learn how to set up a cloud cost model, see Using cost models . 2.3. Control your permissions You might want to limit access to your data to only specific users or organizations. To learn how to control permissions, see Limiting access to cost management resources . 2.4. Analyze your results Now that your cost data is generated, you can analyze your results and make changes in your business. To learn more about cost analysis, go to Visualizing your costs using Cost Explorer . = | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/getting_started_with_cost_management/steps-to-cost-management |
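As a concrete illustration of the tagging steps referenced above, labels can be attached to an OpenShift namespace so that cost management can group and distribute costs by them. This is a minimal sketch; the namespace name and label keys are hypothetical, and the supported conventions are covered in the tagging guide linked above:
apiVersion: v1
kind: Namespace
metadata:
  name: team-a-workloads        # hypothetical namespace
  labels:
    cost-center: "cc-1234"      # example cost-allocation label
    environment: "production"   # example environment label
Labels like these let cost reports be filtered and distributed per cost center or environment rather than per cluster.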
Chapter 24. OVN-Kubernetes network plugin | Chapter 24. OVN-Kubernetes network plugin 24.1. About the OVN-Kubernetes network plugin The OpenShift Container Platform cluster uses a virtualized network for pod and service networks. Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for OpenShift Container Platform. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration. Note OVN-Kubernetes is the default networking solution for OpenShift Container Platform and single-node OpenShift deployments. OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as open flow rules, to determine how packets travel through the network. For more information, see the Open Virtual Network website . OVN-Kubernetes is a series of daemons for OVS that translate virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers, providing a means for remotely controlling the flow of network traffic on a network device, allowing network administrators to configure, manage, and monitor the flow of network traffic. OVN-Kubernetes provides more of the advanced functionality not available with OpenFlow . OVN supports distributed virtual routing, distributed logical switches, access control, DHCP and DNS. OVN implements distributed virtual routing within logic flows which equate to open flows. So for example if you have a pod that sends out a DHCP request on the network, it sends out that broadcast looking for DHCP address there will be a logic flow rule that matches that packet, and it responds giving it a gateway, a DNS server an IP address and so on. OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the network provider features; egress IPs, firewalls, routers, hybrid networking, IPSEC encryption, IPv6, network policy, network policy logs, hardware offloading and multicast. 24.1.1. OVN-Kubernetes purpose The OVN-Kubernetes network plugin is an open-source, fully-featured Kubernetes CNI plugin that uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community developed, vendor-agnostic network virtualization solution. The OVN-Kubernetes network plugin: Uses OVN (Open Virtual Network) to manage network traffic flows. OVN is a community developed, vendor-agnostic network virtualization solution. Implements Kubernetes network policy support, including ingress and egress rules. Uses the Geneve (Generic Network Virtualization Encapsulation) protocol rather than VXLAN to create an overlay network between nodes. The OVN-Kubernetes network plugin provides the following advantages over OpenShift SDN. Full support for IPv6 single-stack and IPv4/IPv6 dual-stack networking on supported platforms Support for hybrid clusters with both Linux and Microsoft Windows workloads Optional IPsec encryption of intra-cluster communications Offload of network data processing from host CPU to compatible network cards and data processing units (DPUs) 24.1.2. 
Supported network plugin feature matrix Red Hat OpenShift Networking offers two options for the network plugin, OpenShift SDN and OVN-Kubernetes, for the network plugin. The following table summarizes the current feature support for both network plugins: Table 24.1. Default CNI network plugin feature comparison Feature OpenShift SDN OVN-Kubernetes Egress IPs Supported Supported Egress firewall Supported Supported [1] Egress router Supported Supported [2] Hybrid networking Not supported Supported IPsec encryption for intra-cluster communication Not supported Supported IPv4 single-stack Supported Supported IPv6 single-stack Not supported Supported [3] IPv4/IPv6 dual-stack Not Supported Supported [4] IPv6/IPv4 dual-stack Not supported Supported [5] Kubernetes network policy Supported Supported Kubernetes network policy logs Not supported Supported Hardware offloading Not supported Supported Multicast Supported Supported Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress. Egress router for OVN-Kubernetes supports only redirect mode. IPv6 single-stack networking on a bare-metal platform. IPv4/IPv6 dual-stack networking on bare-metal, VMware vSphere (installer-provisioned infrastructure installations only), IBM Power(R), IBM Z(R), and RHOSP platforms. Dual-stack networking on RHOSP is a Technology Preview feature. IPv6/IPv4 dual-stack networking on bare-metal, VMware vSphere (installer-provisioned infrastructure installations only), and IBM Power(R) platforms. 24.1.3. OVN-Kubernetes IPv6 and dual-stack limitations The OVN-Kubernetes network plugin has the following limitations: For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml , the status field contains more than one message about the default gateway, as shown in the following output: I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4 The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway. For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml , the status field contains more than one message about the default gateway, as shown in the following output: I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface The only resolution is to reconfigure the host networking so that both IP families contain the default gateway. 24.1.4. Session affinity Session affinity is a feature that applies to Kubernetes Service objects. 
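On the Service itself, session affinity is configured through the sessionAffinity field; the following minimal manifest sketch uses illustrative names, and the timeoutSeconds value shown is the upstream Kubernetes default:
apiVersion: v1
kind: Service
metadata:
  name: example-service          # illustrative name
spec:
  selector:
    app: example
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  sessionAffinity: ClientIP      # pin each client IP to the same back end
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800      # stickiness window; see the timeout discussion below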
You can use session affinity if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client's IP address, see Session affinity . Stickiness timeout for session affinity The OVN-Kubernetes network plugin for OpenShift Container Platform calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a curl command 10 times, the sticky session timer starts from the tenth packet not the first. As a result, if the client is continuously contacting the service, then the session never times out. The timeout starts when the service has not received a packet for the amount of time set by the timeoutSeconds parameter. Additional resources Configuring an egress firewall for a project About network policy Logging network policy events Enabling multicast for a project Configuring IPsec encryption Network [operator.openshift.io/v1] 24.2. OVN-Kubernetes architecture 24.2.1. Introduction to OVN-Kubernetes architecture The following diagram shows the OVN-Kubernetes architecture. Figure 24.1. OVK-Kubernetes architecture The key components are: Cloud Management System (CMS) - A platform specific client for OVN that provides a CMS specific plugin for OVN integration. The plugin translates the cloud management system's concept of the logical network configuration, stored in the CMS configuration database in a CMS-specific format, into an intermediate representation understood by OVN. OVN Northbound database ( nbdb ) container - Stores the logical network configuration passed by the CMS plugin. OVN Southbound database ( sbdb ) container - Stores the physical and logical network configuration state for Open vSwitch (OVS) system on each node, including tables that bind them. OVN north daemon ( ovn-northd ) - This is the intermediary client between nbdb container and sbdb container. It translates the logical network configuration in terms of conventional network concepts, taken from the nbdb container, into logical data path flows in the sbdb container. The container name for ovn-northd daemon is northd and it runs in the ovnkube-node pods. ovn-controller - This is the OVN agent that interacts with OVS and hypervisors, for any information or update that is needed for sbdb container. The ovn-controller reads logical flows from the sbdb container, translates them into OpenFlow flows and sends them to the node's OVS daemon. The container name is ovn-controller and it runs in the ovnkube-node pods. The OVN northd, northbound database, and southbound database run on each node in the cluster and mostly contain and process information that is local to that node. The OVN northbound database has the logical network configuration passed down to it by the cloud management system (CMS). The OVN northbound database contains the current desired state of the network, presented as a collection of logical ports, logical switches, logical routers, and more. The ovn-northd ( northd container) connects to the OVN northbound database and the OVN southbound database. It translates the logical network configuration in terms of conventional network concepts, taken from the OVN northbound database, into logical data path flows in the OVN southbound database. The OVN southbound database has physical and logical representations of the network and binding tables that link them together. 
It contains the chassis information of the node and other constructs like remote transit switch ports that are required to connect to the other nodes in the cluster. The OVN southbound database also contains all the logic flows. The logic flows are shared with the ovn-controller process that runs on each node and the ovn-controller turns those into OpenFlow rules to program Open vSwitch (OVS). The Kubernetes control plane nodes contain two ovnkube-control-plane pods on separate nodes, which perform the central IP address management (IPAM) allocation for each node in the cluster. At any given time, a single ovnkube-control-plane pod is the leader. 24.2.2. Listing all resources in the OVN-Kubernetes project Finding the resources and containers that run in the OVN-Kubernetes project is important to help you understand the OVN-Kubernetes networking implementation. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed. Procedure Run the following command to get all resources, endpoints, and ConfigMaps in the OVN-Kubernetes project: USD oc get all,ep,cm -n openshift-ovn-kubernetes Example output Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ NAME READY STATUS RESTARTS AGE pod/ovnkube-control-plane-65c6f55656-6d55h 2/2 Running 0 114m pod/ovnkube-control-plane-65c6f55656-fd7vw 2/2 Running 2 (104m ago) 114m pod/ovnkube-node-bcvts 8/8 Running 0 113m pod/ovnkube-node-drgvv 8/8 Running 0 113m pod/ovnkube-node-f2pxt 8/8 Running 0 113m pod/ovnkube-node-frqsb 8/8 Running 0 105m pod/ovnkube-node-lbxkk 8/8 Running 0 105m pod/ovnkube-node-tt7bx 8/8 Running 1 (102m ago) 105m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ovn-kubernetes-control-plane ClusterIP None <none> 9108/TCP 114m service/ovn-kubernetes-node ClusterIP None <none> 9103/TCP,9105/TCP 114m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/ovnkube-node 6 6 6 6 6 beta.kubernetes.io/os=linux 114m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/ovnkube-control-plane 3/3 3 3 114m NAME DESIRED CURRENT READY AGE replicaset.apps/ovnkube-control-plane-65c6f55656 3 3 3 114m NAME ENDPOINTS AGE endpoints/ovn-kubernetes-control-plane 10.0.0.3:9108,10.0.0.4:9108,10.0.0.5:9108 114m endpoints/ovn-kubernetes-node 10.0.0.3:9105,10.0.0.4:9105,10.0.0.5:9105 + 9 more... 114m NAME DATA AGE configmap/control-plane-status 1 113m configmap/kube-root-ca.crt 1 114m configmap/openshift-service-ca.crt 1 114m configmap/ovn-ca 1 114m configmap/ovnkube-config 1 114m configmap/signer-ca 1 114m There is one ovnkube-node pod for each node in the cluster. The ovnkube-config config map has the OpenShift Container Platform OVN-Kubernetes configurations. List all of the containers in the ovnkube-node pods by running the following command: USD oc get pods ovnkube-node-bcvts -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes Expected output ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller The ovnkube-node pod is made up of several containers. It is responsible for hosting the northbound database ( nbdb container), the southbound database ( sbdb container), the north daemon ( northd container), ovn-controller and the ovnkube-controller container. The ovnkube-controller container watches for API objects like pods, egress IPs, namespaces, services, endpoints, egress firewall, and network policies. 
It is also responsible for allocating pod IP from the available subnet pool for that node. List all the containers in the ovnkube-control-plane pods by running the following command: USD oc get pods ovnkube-control-plane-65c6f55656-6d55h -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes Expected output kube-rbac-proxy ovnkube-cluster-manager The ovnkube-control-plane pod has a container ( ovnkube-cluster-manager ) that resides on each OpenShift Container Platform node. The ovnkube-cluster-manager container allocates pod subnet, transit switch subnet IP and join switch subnet IP to each node in the cluster. The kube-rbac-proxy container monitors metrics for the ovnkube-cluster-manager container. 24.2.3. Listing the OVN-Kubernetes northbound database contents Each node is controlled by the ovnkube-controller container running in the ovnkube-node pod on that node. To understand the OVN logical networking entities you need to examine the northbound database that is running as a container inside the ovnkube-node pod on that node to see what objects are in the node you wish to see. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed. Procedure To run ovn nbctl or sbctl commands in a cluster you must open a remote shell into the nbdb or sbdb containers on the relevant node List pods by running the following command: USD oc get po -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m ovnkube-node-55xs2 8/8 Running 0 26m ovnkube-node-7r84r 8/8 Running 0 16m ovnkube-node-bqq8p 8/8 Running 0 17m ovnkube-node-mkj4f 8/8 Running 0 26m ovnkube-node-mlr8k 8/8 Running 0 26m ovnkube-node-wqn2m 8/8 Running 0 16m Optional: To list the pods with node information, run the following command: USD oc get pods -n openshift-ovn-kubernetes -owide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-55xs2 8/8 Running 0 26m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-7r84r 8/8 Running 0 17m 10.0.128.3 ci-ln-t487nnb-72292-mdcnq-worker-b-wbz7z <none> <none> ovnkube-node-bqq8p 8/8 Running 0 17m 10.0.128.2 ci-ln-t487nnb-72292-mdcnq-worker-a-lh7ms <none> <none> ovnkube-node-mkj4f 8/8 Running 0 27m 10.0.0.5 ci-ln-t487nnb-72292-mdcnq-master-0 <none> <none> ovnkube-node-mlr8k 8/8 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-node-wqn2m 8/8 Running 0 17m 10.0.128.4 ci-ln-t487nnb-72292-mdcnq-worker-c-przlm <none> <none> Navigate into a pod to look at the northbound database by running the following command: USD oc rsh -c nbdb -n openshift-ovn-kubernetes ovnkube-node-55xs2 Run the following command to show all the objects in the northbound database: USD ovn-nbctl show The output is too long to list here. The list includes the NAT rules, logical switches, load balancers and so on. 
You can narrow down and focus on specific components by using some of the following optional commands: Run the following command to show the list of logical routers: USD oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \ -c northd -- ovn-nbctl lr-list Example output 45339f4f-7d0b-41d0-b5f9-9fca9ce40ce6 (GR_ci-ln-t487nnb-72292-mdcnq-master-2) 96a0a0f0-e7ed-4fec-8393-3195563de1b8 (ovn_cluster_router) Note From this output you can see there is router on each node plus an ovn_cluster_router . Run the following command to show the list of logical switches: USD oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \ -c nbdb -- ovn-nbctl ls-list Example output bdd7dc3d-d848-4a74-b293-cc15128ea614 (ci-ln-t487nnb-72292-mdcnq-master-2) b349292d-ee03-4914-935f-1940b6cb91e5 (ext_ci-ln-t487nnb-72292-mdcnq-master-2) 0aac0754-ea32-4e33-b086-35eeabf0a140 (join) 992509d7-2c3f-4432-88db-c179e43592e5 (transit_switch) Note From this output you can see there is an ext switch for each node plus switches with the node name itself and a join switch. Run the following command to show the list of load balancers: USD oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \ -c nbdb -- ovn-nbctl lb-list Example output UUID LB PROTO VIP IPs 7c84c673-ed2a-4436-9a1f-9bc5dd181eea Service_default/ tcp 172.30.0.1:443 10.0.0.3:6443,169.254.169.2:6443,10.0.0.5:6443 4d663fd9-ddc8-4271-b333-4c0e279e20bb Service_default/ tcp 172.30.0.1:443 10.0.0.3:6443,10.0.0.4:6443,10.0.0.5:6443 292eb07f-b82f-4962-868a-4f541d250bca Service_openshif tcp 172.30.105.247:443 10.129.0.12:8443 034b5a7f-bb6a-45e9-8e6d-573a82dc5ee3 Service_openshif tcp 172.30.192.38:443 10.0.0.3:10259,10.0.0.4:10259,10.0.0.5:10259 a68bb53e-be84-48df-bd38-bdd82fcd4026 Service_openshif tcp 172.30.161.125:8443 10.129.0.32:8443 6cc21b3d-2c54-4c94-8ff5-d8e017269c2e Service_openshif tcp 172.30.3.144:443 10.129.0.22:8443 37996ffd-7268-4862-a27f-61cd62e09c32 Service_openshif tcp 172.30.181.107:443 10.129.0.18:8443 81d4da3c-f811-411f-ae0c-bc6713d0861d Service_openshif tcp 172.30.228.23:443 10.129.0.29:8443 ac5a4f3b-b6ba-4ceb-82d0-d84f2c41306e Service_openshif tcp 172.30.14.240:9443 10.129.0.36:9443 c88979fb-1ef5-414b-90ac-43b579351ac9 Service_openshif tcp 172.30.231.192:9001 10.128.0.5:9001,10.128.2.5:9001,10.129.0.5:9001,10.129.2.4:9001,10.130.0.3:9001,10.131.0.3:9001 fcb0a3fb-4a77-4230-a84a-be45dce757e8 Service_openshif tcp 172.30.189.92:443 10.130.0.17:8440 67ef3e7b-ceb9-4bf0-8d96-b43bde4c9151 Service_openshif tcp 172.30.67.218:443 10.129.0.9:8443 d0032fba-7d5e-424a-af25-4ab9b5d46e81 Service_openshif tcp 172.30.102.137:2379 10.0.0.3:2379,10.0.0.4:2379,10.0.0.5:2379 tcp 172.30.102.137:9979 10.0.0.3:9979,10.0.0.4:9979,10.0.0.5:9979 7361c537-3eec-4e6c-bc0c-0522d182abd4 Service_openshif tcp 172.30.198.215:9001 10.0.0.3:9001,10.0.0.4:9001,10.0.0.5:9001,10.0.128.2:9001,10.0.128.3:9001,10.0.128.4:9001 0296c437-1259-410b-a6fd-81c310ad0af5 Service_openshif tcp 172.30.198.215:9001 10.0.0.3:9001,169.254.169.2:9001,10.0.0.5:9001,10.0.128.2:9001,10.0.128.3:9001,10.0.128.4:9001 5d5679f5-45b8-479d-9f7c-08b123c688b8 Service_openshif tcp 172.30.38.253:17698 10.128.0.52:17698,10.129.0.84:17698,10.130.0.60:17698 2adcbab4-d1c9-447d-9573-b5dc9f2efbfa Service_openshif tcp 172.30.148.52:443 10.0.0.4:9202,10.0.0.5:9202 tcp 172.30.148.52:444 10.0.0.4:9203,10.0.0.5:9203 tcp 172.30.148.52:445 10.0.0.4:9204,10.0.0.5:9204 tcp 172.30.148.52:446 10.0.0.4:9205,10.0.0.5:9205 2a33a6d7-af1b-4892-87cc-326a380b809b Service_openshif tcp 172.30.67.219:9091 10.129.2.16:9091,10.131.0.16:9091 tcp 
172.30.67.219:9092 10.129.2.16:9092,10.131.0.16:9092 tcp 172.30.67.219:9093 10.129.2.16:9093,10.131.0.16:9093 tcp 172.30.67.219:9094 10.129.2.16:9094,10.131.0.16:9094 f56f59d7-231a-4974-99b3-792e2741ec8d Service_openshif tcp 172.30.89.212:443 10.128.0.41:8443,10.129.0.68:8443,10.130.0.44:8443 08c2c6d7-d217-4b96-b5d8-c80c4e258116 Service_openshif tcp 172.30.102.137:2379 10.0.0.3:2379,169.254.169.2:2379,10.0.0.5:2379 tcp 172.30.102.137:9979 10.0.0.3:9979,169.254.169.2:9979,10.0.0.5:9979 60a69c56-fc6a-4de6-bd88-3f2af5ba5665 Service_openshif tcp 172.30.10.193:443 10.129.0.25:8443 ab1ef694-0826-4671-a22c-565fc2d282ec Service_openshif tcp 172.30.196.123:443 10.128.0.33:8443,10.129.0.64:8443,10.130.0.37:8443 b1fb34d3-0944-4770-9ee3-2683e7a630e2 Service_openshif tcp 172.30.158.93:8443 10.129.0.13:8443 95811c11-56e2-4877-be1e-c78ccb3a82a9 Service_openshif tcp 172.30.46.85:9001 10.130.0.16:9001 4baba1d1-b873-4535-884c-3f6fc07a50fd Service_openshif tcp 172.30.28.87:443 10.129.0.26:8443 6c2e1c90-f0ca-484e-8a8e-40e71442110a Service_openshif udp 172.30.0.10:53 10.128.0.13:5353,10.128.2.6:5353,10.129.0.39:5353,10.129.2.6:5353,10.130.0.11:5353,10.131.0.9:5353 Note From this truncated output you can see there are many OVN-Kubernetes load balancers. Load balancers in OVN-Kubernetes are representations of services. Run the following command to display the options available with the command ovn-nbctl : USD oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \ -c nbdb ovn-nbctl --help 24.2.4. Command line arguments for ovn-nbctl to examine northbound database contents The following table describes the command line arguments that can be used with ovn-nbctl to examine the contents of the northbound database. Note Open a remote shell in the pod you want to view the contents of and then run the ovn-nbctl commands. Table 24.2. Command line arguments to examine northbound database contents Argument Description ovn-nbctl show An overview of the northbound database contents as seen from a specific node. ovn-nbctl show <switch_or_router> Show the details associated with the specified switch or router. ovn-nbctl lr-list Show the logical routers. ovn-nbctl lrp-list <router> Using the router information from ovn-nbctl lr-list to show the router ports. ovn-nbctl lr-nat-list <router> Show network address translation details for the specified router. ovn-nbctl ls-list Show the logical switches ovn-nbctl lsp-list <switch> Using the switch information from ovn-nbctl ls-list to show the switch port. ovn-nbctl lsp-get-type <port> Get the type for the logical port. ovn-nbctl lb-list Show the load balancers. 24.2.5. Listing the OVN-Kubernetes southbound database contents Each node is controlled by the ovnkube-controller container running in the ovnkube-node pod on that node. To understand the OVN logical networking entities you need to examine the northbound database that is running as a container inside the ovnkube-node pod on that node to see what objects are in the node you wish to see. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed. 
Procedure To run ovn nbctl or sbctl commands in a cluster you must open a remote shell into the nbdb or sbdb containers on the relevant node List the pods by running the following command: USD oc get po -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m ovnkube-node-55xs2 8/8 Running 0 26m ovnkube-node-7r84r 8/8 Running 0 16m ovnkube-node-bqq8p 8/8 Running 0 17m ovnkube-node-mkj4f 8/8 Running 0 26m ovnkube-node-mlr8k 8/8 Running 0 26m ovnkube-node-wqn2m 8/8 Running 0 16m Optional: To list the pods with node information, run the following command: USD oc get pods -n openshift-ovn-kubernetes -owide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-55xs2 8/8 Running 0 26m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-7r84r 8/8 Running 0 17m 10.0.128.3 ci-ln-t487nnb-72292-mdcnq-worker-b-wbz7z <none> <none> ovnkube-node-bqq8p 8/8 Running 0 17m 10.0.128.2 ci-ln-t487nnb-72292-mdcnq-worker-a-lh7ms <none> <none> ovnkube-node-mkj4f 8/8 Running 0 27m 10.0.0.5 ci-ln-t487nnb-72292-mdcnq-master-0 <none> <none> ovnkube-node-mlr8k 8/8 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-node-wqn2m 8/8 Running 0 17m 10.0.128.4 ci-ln-t487nnb-72292-mdcnq-worker-c-przlm <none> <none> Navigate into a pod to look at the southbound database: USD oc rsh -c sbdb -n openshift-ovn-kubernetes ovnkube-node-55xs2 Run the following command to show all the objects in the southbound database: USD ovn-sbctl show Example output Chassis "5db31703-35e9-413b-8cdf-69e7eecb41f7" hostname: ci-ln-9gp362t-72292-v2p94-worker-a-8bmwz Encap geneve ip: "10.0.128.4" options: {csum="true"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-worker-a-8bmwz Chassis "070debed-99b7-4bce-b17d-17e720b7f8bc" hostname: ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Encap geneve ip: "10.0.128.2" options: {csum="true"} Port_Binding k8s-ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding rtoe-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-monitoring_alertmanager-main-1 Port_Binding rtoj-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding etor-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding cr-rtos-ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-e2e-loki_loki-promtail-qcrcz Port_Binding jtor-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-multus_network-metrics-daemon-mkd4t Port_Binding openshift-ingress-canary_ingress-canary-xtvj4 Port_Binding openshift-ingress_router-default-6c76cbc498-pvlqk Port_Binding openshift-dns_dns-default-zz582 Port_Binding openshift-monitoring_thanos-querier-57585899f5-lbf4f Port_Binding openshift-network-diagnostics_network-check-target-tn228 Port_Binding openshift-monitoring_prometheus-k8s-0 Port_Binding openshift-image-registry_image-registry-68899bd877-xqxjj Chassis "179ba069-0af1-401c-b044-e5ba90f60fea" hostname: ci-ln-9gp362t-72292-v2p94-master-0 Encap geneve ip: "10.0.0.5" options: {csum="true"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-0 Chassis "68c954f2-5a76-47be-9e84-1cb13bd9dab9" hostname: ci-ln-9gp362t-72292-v2p94-worker-c-mjf9w Encap geneve ip: "10.0.128.3" options: {csum="true"} 
Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-worker-c-mjf9w Chassis "2de65d9e-9abf-4b6e-a51d-a1e038b4d8af" hostname: ci-ln-9gp362t-72292-v2p94-master-2 Encap geneve ip: "10.0.0.4" options: {csum="true"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-2 Chassis "1d371cb8-5e21-44fd-9025-c4b162cc4247" hostname: ci-ln-9gp362t-72292-v2p94-master-1 Encap geneve ip: "10.0.0.3" options: {csum="true"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-1 This detailed output shows the chassis and the ports that are attached to the chassis which in this case are all of the router ports and anything that runs like host networking. Any pods communicate out to the wider network using source network address translation (SNAT). Their IP address is translated into the IP address of the node that the pod is running on and then sent out into the network. In addition to the chassis information the southbound database has all the logic flows and those logic flows are then sent to the ovn-controller running on each of the nodes. The ovn-controller translates the logic flows into open flow rules and ultimately programs OpenvSwitch so that your pods can then follow open flow rules and make it out of the network. Run the following command to display the options available with the command ovn-sbctl : USD oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 \ -c sbdb ovn-sbctl --help 24.2.6. Command line arguments for ovn-sbctl to examine southbound database contents The following table describes the command line arguments that can be used with ovn-sbctl to examine the contents of the southbound database. Note Open a remote shell in the pod you wish to view the contents of and then run the ovn-sbctl commands. Table 24.3. Command line arguments to examine southbound database contents Argument Description ovn-sbctl show An overview of the southbound database contents as seen from a specific node. ovn-sbctl list Port_Binding <port> List the contents of southbound database for a the specified port . ovn-sbctl dump-flows List the logical flows. 24.2.7. OVN-Kubernetes logical architecture OVN is a network virtualization solution. It creates logical switches and routers. These switches and routers are interconnected to create any network topologies. When you run ovnkube-trace with the log level set to 2 or 5 the OVN-Kubernetes logical components are exposed. The following diagram shows how the routers and switches are connected in OpenShift Container Platform. Figure 24.2. OVN-Kubernetes router and switch components The key components involved in packet processing are: Gateway routers Gateway routers sometimes called L3 gateway routers, are typically used between the distributed routers and the physical network. Gateway routers including their logical patch ports are bound to a physical location (not distributed), or chassis. The patch ports on this router are known as l3gateway ports in the ovn-southbound database ( ovn-sbdb ). Distributed logical routers Distributed logical routers and the logical switches behind them, to which virtual machines and containers attach, effectively reside on each hypervisor. Join local switch Join local switches are used to connect the distributed router and gateway routers. It reduces the number of IP addresses needed on the distributed router. Logical switches with patch ports Logical switches with patch ports are used to virtualize the network stack. They connect remote logical ports through tunnels. 
Logical switches with localnet ports Logical switches with localnet ports are used to connect OVN to the physical network. They connect remote logical ports by bridging the packets to directly connected physical L2 segments using localnet ports. Patch ports Patch ports represent connectivity between logical switches and logical routers and between peer logical routers. A single connection has a pair of patch ports at each such point of connectivity, one on each side. l3gateway ports l3gateway ports are the port binding entries in the ovn-sbdb for logical patch ports used in the gateway routers. They are called l3gateway ports rather than patch ports just to portray the fact that these ports are bound to a chassis just like the gateway router itself. localnet ports localnet ports are present on the bridged logical switches that allows a connection to a locally accessible network from each ovn-controller instance. This helps model the direct connectivity to the physical network from the logical switches. A logical switch can only have a single localnet port attached to it. 24.2.7.1. Installing network-tools on local host Install network-tools on your local host to make a collection of tools available for debugging OpenShift Container Platform cluster network issues. Procedure Clone the network-tools repository onto your workstation with the following command: USD git clone [email protected]:openshift/network-tools.git Change into the directory for the repository you just cloned: USD cd network-tools Optional: List all available commands: USD ./debug-scripts/network-tools -h 24.2.7.2. Running network-tools Get information about the logical switches and routers by running network-tools . Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster as a user with cluster-admin privileges. You have installed network-tools on local host. Procedure List the routers by running the following command: USD ./debug-scripts/network-tools ovn-db-run-command ovn-nbctl lr-list Example output 944a7b53-7948-4ad2-a494-82b55eeccf87 (GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99) 84bd4a4c-4b0b-4a47-b0cf-a2c32709fc53 (ovn_cluster_router) List the localnet ports by running the following command: USD ./debug-scripts/network-tools ovn-db-run-command \ ovn-sbctl find Port_Binding type=localnet Example output _uuid : d05298f5-805b-4838-9224-1211afc2f199 additional_chassis : [] additional_encap : [] chassis : [] datapath : f3c2c959-743b-4037-854d-26627902597c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : br-ex_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [unknown] mirror_rules : [] nat_addresses : [] options : {network_name=physnet} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : localnet up : false virtual_parent : [] [...] 
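The full records shown above are verbose; when only a few fields matter, the find command accepts a --columns option, as is standard in OVSDB command-line clients, assumed here to pass through the wrapper unchanged. A sketch that narrows the localnet listing to three columns (column names taken from the output above):
$ ./debug-scripts/network-tools ovn-db-run-command \
    ovn-sbctl --columns=logical_port,chassis,type find Port_Binding type=localnet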
List the l3gateway ports by running the following command: USD ./debug-scripts/network-tools ovn-db-run-command \ ovn-sbctl find Port_Binding type=l3gateway Example output _uuid : 5207a1f3-1cf3-42f1-83e9-387bbb06b03c additional_chassis : [] additional_encap : [] chassis : ca6eb600-3a10-4372-a83e-e0d957c4cd92 datapath : f3c2c959-743b-4037-854d-26627902597c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : etor-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : ["42:01:0a:00:80:04"] mirror_rules : [] nat_addresses : ["42:01:0a:00:80:04 10.0.128.4"] options : {l3gateway-chassis="84737c36-b383-4c83-92c5-2bd5b3c7e772", peer=rtoe-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : l3gateway up : true virtual_parent : [] _uuid : 6088d647-84f2-43f2-b53f-c9d379042679 additional_chassis : [] additional_encap : [] chassis : ca6eb600-3a10-4372-a83e-e0d957c4cd92 datapath : dc9cea00-d94a-41b8-bdb0-89d42d13aa2e encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : jtor-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [router] mirror_rules : [] nat_addresses : [] options : {l3gateway-chassis="84737c36-b383-4c83-92c5-2bd5b3c7e772", peer=rtoj-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : l3gateway up : true virtual_parent : [] [...] List the patch ports by running the following command: USD ./debug-scripts/network-tools ovn-db-run-command \ ovn-sbctl find Port_Binding type=patch Example output _uuid : 785fb8b6-ee5a-4792-a415-5b1cb855dac2 additional_chassis : [] additional_encap : [] chassis : [] datapath : f1ddd1cc-dc0d-43b4-90ca-12651305acec encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : stor-ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [router] mirror_rules : [] nat_addresses : ["0a:58:0a:80:02:01 10.128.2.1 is_chassis_resident(\"cr-rtos-ci-ln-54932yb-72292-kd676-worker-c-rzj99\")"] options : {peer=rtos-ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : patch up : false virtual_parent : [] _uuid : c01ff587-21a5-40b4-8244-4cd0425e5d9a additional_chassis : [] additional_encap : [] chassis : [] datapath : f6795586-bf92-4f84-9222-efe4ac6a7734 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : rtoj-ovn_cluster_router mac : ["0a:58:64:40:00:01 100.64.0.1/16"] mirror_rules : [] nat_addresses : [] options : {peer=jtor-ovn_cluster_router} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : patch up : false virtual_parent : [] [...] 24.2.8. Additional resources Tracing Openflow with ovnkube-trace OVN architecture ovn-nbctl linux manual page ovn-sbctl linux manual page 24.3. Troubleshooting OVN-Kubernetes OVN-Kubernetes has many sources of built-in health checks and logs. Follow the instructions in these sections to examine your cluster. If a support case is necessary, follow the support guide to collect additional information through a must-gather . Only use the -- gather_network_logs when instructed by support. 24.3.1. 
Monitoring OVN-Kubernetes health by using readiness probes The ovnkube-control-plane and ovnkube-node pods have containers configured with readiness probes. Prerequisites Access to the OpenShift CLI ( oc ). You have access to the cluster with cluster-admin privileges. You have installed jq . Procedure Review the details of the ovnkube-node readiness probe by running the following command: USD oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node \ -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe' The readiness probe for the northbound and southbound database containers in the ovnkube-node pod checks for the health of the databases and the ovnkube-controller container. The ovnkube-controller container in the ovnkube-node pod has a readiness probe to verify the presence of the OVN-Kubernetes CNI configuration file, the absence of which would indicate that the pod is not running or is not ready to accept requests to configure pods. Show all events including the probe failures, for the namespace by using the following command: USD oc get events -n openshift-ovn-kubernetes Show the events for just a specific pod: USD oc describe pod ovnkube-node-9lqfk -n openshift-ovn-kubernetes Show the messages and statuses from the cluster network operator: USD oc get co/network -o json | jq '.status.conditions[]' Show the ready status of each container in ovnkube-node pods by running the following script: USD for p in USD(oc get pods --selector app=ovnkube-node -n openshift-ovn-kubernetes \ -o jsonpath='{range.items[*]}{" "}{.metadata.name}'); do echo === USDp ===; \ oc get pods -n openshift-ovn-kubernetes USDp -o json | jq '.status.containerStatuses[] | .name, .ready'; \ done Note The expectation is all container statuses are reporting as true . Failure of a readiness probe sets the status to false . Additional resources Monitoring application health by using health checks 24.3.2. Viewing OVN-Kubernetes alerts in the console The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. Procedure (UI) In the Administrator perspective, select Observe Alerting . The three main pages in the Alerting UI in this perspective are the Alerts , Silences , and Alerting Rules pages. View the rules for OVN-Kubernetes alerts by selecting Observe Alerting Alerting Rules . 24.3.3. Viewing OVN-Kubernetes alerts in the CLI You can get information about alerts and their governing alerting rules and silences from the command line. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed. You have installed jq . Procedure View active or firing alerts by running the following commands. 
Set the alert manager route environment variable by running the following command: USD ALERT_MANAGER=USD(oc get route alertmanager-main -n openshift-monitoring \ -o jsonpath='{@.spec.host}') Issue a curl request to the alert manager route API by running the following command, replacing USDALERT_MANAGER with the URL of your Alertmanager instance: USD curl -s -k -H "Authorization: Bearer USD(oc create token prometheus-k8s -n openshift-monitoring)" https://USDALERT_MANAGER/api/v1/alerts | jq '.data[] | "\(.labels.severity) \(.labels.alertname) \(.labels.pod) \(.labels.container) \(.labels.endpoint) \(.labels.instance)"' View alerting rules by running the following command: USD oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -s 'http://localhost:9090/api/v1/rules' | jq '.data.groups[].rules[] | select(((.name|contains("ovn")) or (.name|contains("OVN")) or (.name|contains("Ovn")) or (.name|contains("North")) or (.name|contains("South"))) and .type=="alerting")' 24.3.4. Viewing the OVN-Kubernetes logs using the CLI You can view the logs for each of the pods in the ovnkube-master and ovnkube-node pods using the OpenShift CLI ( oc ). Prerequisites Access to the cluster as a user with the cluster-admin role. Access to the OpenShift CLI ( oc ). You have installed jq . Procedure View the log for a specific pod: USD oc logs -f <pod_name> -c <container_name> -n <namespace> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod. <container_name> Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name. <namespace> Specify the namespace the pod is running in. For example: USD oc logs ovnkube-node-5dx44 -n openshift-ovn-kubernetes USD oc logs -f ovnkube-node-5dx44 -c ovnkube-controller -n openshift-ovn-kubernetes The contents of log files are printed out. Examine the most recent entries in all the containers in the ovnkube-node pods: USD for p in USD(oc get pods --selector app=ovnkube-node -n openshift-ovn-kubernetes \ -o jsonpath='{range.items[*]}{" "}{.metadata.name}'); \ do echo === USDp ===; for container in USD(oc get pods -n openshift-ovn-kubernetes USDp \ -o json | jq -r '.status.containerStatuses[] | .name');do echo ---USDcontainer---; \ oc logs -c USDcontainer USDp -n openshift-ovn-kubernetes --tail=5; done; done View the last 5 lines of every log in every container in an ovnkube-node pod using the following command: USD oc logs -l app=ovnkube-node -n openshift-ovn-kubernetes --all-containers --tail 5 24.3.5. Viewing the OVN-Kubernetes logs using the web console You can view the logs for each of the pods in the ovnkube-master and ovnkube-node pods in the web console. Prerequisites Access to the OpenShift CLI ( oc ). Procedure In the OpenShift Container Platform console, navigate to Workloads Pods or navigate to the pod through the resource you want to investigate. Select the openshift-ovn-kubernetes project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs . By default for the ovnkube-master the logs associated with the northd container are displayed. Use the down-down menu to select logs for each container in turn. 24.3.5.1. Changing the OVN-Kubernetes log levels The default log level for OVN-Kubernetes is 4. To debug OVN-Kubernetes, set the log level to 5. Follow this procedure to increase the log level of the OVN-Kubernetes to help you debug an issue. 
Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Run the following command to get detailed information for all pods in the OVN-Kubernetes project: USD oc get po -o wide -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-65497d4548-9ptdr 2/2 Running 2 (128m ago) 147m 10.0.0.3 ci-ln-3njdr9b-72292-5nwkp-master-0 <none> <none> ovnkube-control-plane-65497d4548-j6zfk 2/2 Running 0 147m 10.0.0.5 ci-ln-3njdr9b-72292-5nwkp-master-2 <none> <none> ovnkube-node-5dx44 8/8 Running 0 146m 10.0.0.3 ci-ln-3njdr9b-72292-5nwkp-master-0 <none> <none> ovnkube-node-dpfn4 8/8 Running 0 146m 10.0.0.4 ci-ln-3njdr9b-72292-5nwkp-master-1 <none> <none> ovnkube-node-kwc9l 8/8 Running 0 134m 10.0.128.2 ci-ln-3njdr9b-72292-5nwkp-worker-a-2fjcj <none> <none> ovnkube-node-mcrhl 8/8 Running 0 134m 10.0.128.4 ci-ln-3njdr9b-72292-5nwkp-worker-c-v9x5v <none> <none> ovnkube-node-nsct4 8/8 Running 0 146m 10.0.0.5 ci-ln-3njdr9b-72292-5nwkp-master-2 <none> <none> ovnkube-node-zrj9f 8/8 Running 0 134m 10.0.128.3 ci-ln-3njdr9b-72292-5nwkp-worker-b-v78h7 <none> <none> Create a ConfigMap file similar to the following example and use a filename such as env-overrides.yaml : Example ConfigMap file kind: ConfigMap apiVersion: v1 metadata: name: env-overrides namespace: openshift-ovn-kubernetes data: ci-ln-3njdr9b-72292-5nwkp-master-0: | 1 # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg ci-ln-3njdr9b-72292-5nwkp-master-2: | # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg _master: | 2 # This sets the log level for the ovn-kubernetes master process as well as the ovn-dbchecker: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for northd, nbdb and sbdb on all masters: OVN_LOG_LEVEL=dbg 1 Specify the name of the node you want to set the debug log level on. 2 Specify _master to set the log levels of ovnkube-master components. Apply the ConfigMap file by using the following command: USD oc apply -n openshift-ovn-kubernetes -f env-overrides.yaml Example output configmap/env-overrides.yaml created Restart the ovnkube pods to apply the new log level by using the following commands: USD oc delete pod -n openshift-ovn-kubernetes \ --field-selector spec.nodeName=ci-ln-3njdr9b-72292-5nwkp-master-0 -l app=ovnkube-node USD oc delete pod -n openshift-ovn-kubernetes \ --field-selector spec.nodeName=ci-ln-3njdr9b-72292-5nwkp-master-2 -l app=ovnkube-node USD oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-node To verify that the `ConfigMap`file has been applied to all nodes for a specific pod, run the following command: USD oc logs -n openshift-ovn-kubernetes --all-containers --prefix ovnkube-node-<xxxx> | grep -E -m 10 '(Logging config:|vconsole|DBG)' where: <XXXX> Specifies the random sequence of letters for a pod from the step. 
Example output [pod/ovnkube-node-2cpjc/sbdb] + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor '--ovn-sb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_sb_ovsdb [pod/ovnkube-node-2cpjc/ovnkube-controller] I1012 14:39:59.984506 35767 config.go:2247] Logging config: {File: CNIFile:/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:5 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} [pod/ovnkube-node-2cpjc/northd] + exec ovn-northd --no-chdir -vconsole:info -vfile:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' --pidfile /var/run/ovn/ovn-northd.pid --n-threads=1 [pod/ovnkube-node-2cpjc/nbdb] + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor '--ovn-nb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_nb_ovsdb [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.552Z|00002|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 6 nodes (32 nodes total across 32 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00003|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 6 nodes (64 nodes total across 64 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00004|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 7 nodes (32 nodes total across 32 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00005|reconnect|DBG|unix:/var/run/openvswitch/db.sock: entering BACKOFF [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00007|reconnect|DBG|unix:/var/run/openvswitch/db.sock: entering CONNECTING [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00008|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: SERVER_SCHEMA_REQUESTED -> SERVER_SCHEMA_REQUESTED at lib/ovsdb-cs.c:423 Optional: Check the ConfigMap file has been applied by running the following command: for f in USD(oc -n openshift-ovn-kubernetes get po -l 'app=ovnkube-node' --no-headers -o custom-columns=N:.metadata.name) ; do echo "---- USDf ----" ; oc -n openshift-ovn-kubernetes exec -c ovnkube-controller USDf -- pgrep -a -f init-ovnkube-controller | grep -P -o '^.*loglevel\s+\d' ; done Example output ---- ovnkube-node-2dt57 ---- 60981 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-c-vmh5n.c.openshift-qe.internal --init-node xpst8-worker-c-vmh5n.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-4zznh ---- 178034 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-2.c.openshift-qe.internal --init-node xpst8-master-2.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-548sx ---- 77499 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-a-fjtnb.c.openshift-qe.internal --init-node xpst8-worker-a-fjtnb.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-6btrf ---- 73781 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-b-p8rww.c.openshift-qe.internal --init-node xpst8-worker-b-p8rww.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-fkc9r ---- 130707 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-0.c.openshift-qe.internal --init-node xpst8-master-0.c.openshift-qe.internal 
--config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 5 ---- ovnkube-node-tk9l4 ---- 181328 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-1.c.openshift-qe.internal --init-node xpst8-master-1.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 24.3.6. Checking the OVN-Kubernetes pod network connectivity The connectivity check controller, in OpenShift Container Platform 4.10 and later, orchestrates connection verification checks in your cluster. These include the Kubernetes API, the OpenShift API, and individual nodes. The results for the connection tests are stored in PodNetworkConnectivity objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel. Prerequisites Access to the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. You have installed jq . Procedure To list the current PodNetworkConnectivityCheck objects, enter the following command: $ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics View the most recent success for each connection object by using the following command: $ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \ -o json | jq '.items[]| .spec.targetEndpoint,.status.successes[0]' View the most recent failures for each connection object by using the following command: $ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \ -o json | jq '.items[]| .spec.targetEndpoint,.status.failures[0]' View the most recent outages for each connection object by using the following command: $ oc get podnetworkconnectivitychecks -n openshift-network-diagnostics \ -o json | jq '.items[]| .spec.targetEndpoint,.status.outages[0]' The connectivity check controller also logs metrics from these checks into Prometheus. View all the metrics by running the following command: $ oc exec prometheus-k8s-0 -n openshift-monitoring -- \ promtool query instant http://localhost:9090 \ '{component="openshift-network-diagnostics"}' View the latency between the source pod and the openshift api service for the last 5 minutes: $ oc exec prometheus-k8s-0 -n openshift-monitoring -- \ promtool query instant http://localhost:9090 \ '{component="openshift-network-diagnostics"}' 24.3.7. Additional resources Gathering data about your cluster for Red Hat Support Implementation of connection health checks Verifying network connectivity for an endpoint 24.4. OVN-Kubernetes network policy Important The AdminNetworkPolicy resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Kubernetes offers two features that users can use to enforce network security. One feature that allows users to enforce network policy is the NetworkPolicy API that is designed mainly for application developers and namespace tenants to protect their namespaces by creating namespace-scoped policies. For more information, see About network policy .
The second feature is AdminNetworkPolicy , which comprises two APIs: the AdminNetworkPolicy (ANP) API and the BaselineAdminNetworkPolicy (BANP) API. ANP and BANP are designed for cluster and network administrators to protect their entire cluster by creating cluster-scoped policies. Cluster administrators can use ANPs to enforce non-overridable policies that take precedence over NetworkPolicy objects. Administrators can use BANP to set up and enforce optional cluster-scoped network policy rules that are overridable by users using NetworkPolicy objects if need be. When used together, ANP and BANP can create a multi-tenancy policy that administrators can use to secure their cluster. OVN-Kubernetes CNI in OpenShift Container Platform implements these network policies using Access Control List (ACL) tiers to evaluate and apply them. ACLs are evaluated in descending order from Tier 1 to Tier 3. Tier 1 evaluates AdminNetworkPolicy (ANP) objects. Tier 2 evaluates NetworkPolicy objects. Tier 3 evaluates BaselineAdminNetworkPolicy (BANP) objects. Figure 24.3. OVN-Kubernetes Access Control List (ACL) If traffic matches an ANP rule, the rules in that ANP will be evaluated first. If the match is an ANP allow or deny rule, any existing NetworkPolicies and BaselineAdminNetworkPolicy (BANP) in the cluster will be intentionally skipped from evaluation. If the match is an ANP pass rule, then evaluation moves from tier 1 of the ACLs to tier 2, where the NetworkPolicy policy is evaluated. 24.4.1. AdminNetworkPolicy An AdminNetworkPolicy (ANP) is a cluster-scoped custom resource definition (CRD). As an OpenShift Container Platform administrator, you can use ANP to secure your network by creating network policies before creating namespaces. Additionally, you can create network policies on a cluster-scoped level that is non-overridable by NetworkPolicy objects. The key difference between AdminNetworkPolicy and NetworkPolicy objects is that the former is for administrators and is cluster scoped while the latter is for tenant owners and is namespace scoped. An ANP allows administrators to specify the following: A priority value that determines the order of its evaluation. The lower the value, the higher the precedence. A subject that consists of a set of namespaces or a namespace. A list of ingress rules to be applied for all ingress traffic towards the subject . A list of egress rules to be applied for all egress traffic from the subject . Note The AdminNetworkPolicy resource is a TechnologyPreviewNoUpgrade feature that can be enabled on test clusters that are not in production. For more information on feature gates and TechnologyPreviewNoUpgrade features, see "Enabling features using feature gates" in the "Additional resources" of this section. AdminNetworkPolicy example Example 24.1. Example YAML file for an ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: sample-anp-deny-pass-rules 1 spec: priority: 50 2 subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 3 ingress: 4 - name: "deny-all-ingress-tenant-1" 5 action: "Deny" from: - pods: namespaces: 6 namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 7 egress: 8 - name: "pass-all-egress-to-tenant-1" action: "Pass" to: - pods: namespaces: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 1 Specify a name for your ANP.
2 The spec.priority field supports a maximum of 100 ANPs, with values of 0-99, in a cluster. The lower the value, the higher the precedence. Creating AdminNetworkPolicy objects with the same priority creates a nondeterministic outcome. 3 Specify the namespace to apply the ANP resource. 4 ANPs have both ingress and egress rules. ANP rules for the spec.ingress field accept values of Pass , Deny , and Allow for the action field. 5 Specify a name for the ingress.name . 6 Specify the namespaces to select the pods from to apply the ANP resource. 7 Specify the podSelector.matchLabels name of the pods to apply the ANP resource. 8 ANPs have both ingress and egress rules. ANP rules for the spec.egress field accept values of Pass , Deny , and Allow for the action field. Additional resources Enabling features using feature gates Network Policy API Working Group 24.4.1.1. AdminNetworkPolicy actions for rules As an administrator, you can set Allow , Deny , or Pass as the action field for your AdminNetworkPolicy rules. Because OVN-Kubernetes uses tiered ACLs to evaluate network traffic rules, ANPs allow you to set very strong policy rules that can only be changed by an administrator modifying them, deleting the rule, or overriding them by setting a higher priority rule. AdminNetworkPolicy Allow example The following ANP that is defined at priority 9 ensures all ingress traffic is allowed from the monitoring namespace towards any tenant (all other namespaces) in the cluster. Example 24.2. Example YAML file for a strong Allow ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: allow-monitoring spec: priority: 9 subject: namespaces: {} ingress: - name: "allow-ingress-from-monitoring" action: "Allow" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring # ... This is an example of a strong Allow ANP because it is non-overridable by all the parties involved. No tenants can block themselves from being monitored using NetworkPolicy objects and the monitoring tenant also has no say in what it can or cannot monitor. AdminNetworkPolicy Deny example The following ANP that is defined at priority 5 ensures all ingress traffic from the monitoring namespace is blocked towards restricted tenants (namespaces that have labels security: restricted ). Example 24.3. Example YAML file for a strong Deny ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: block-monitoring spec: priority: 5 subject: namespaces: matchLabels: security: restricted ingress: - name: "deny-ingress-from-monitoring" action: "Deny" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring # ... This is a strong Deny ANP that is non-overridable by all the parties involved. The restricted tenant owners cannot authorize themselves to allow monitoring traffic, and the infrastructure's monitoring service cannot scrape anything from these sensitive namespaces. When combined with the strong Allow example, the block-monitoring ANP has a lower priority value giving it higher precedence, which ensures restricted tenants are never monitored. AdminNetworkPolicy Pass example The following ANP that is defined at priority 7 ensures all ingress traffic from the monitoring namespace towards internal infrastructure tenants (namespaces that have labels security: internal ) are passed on to tier 2 of the ACLs and evaluated by the namespaces' NetworkPolicy objects. Example 24.4.
Example YAML file for a strong Pass ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: pass-monitoring spec: priority: 7 subject: namespaces: matchLabels: security: internal ingress: - name: "pass-ingress-from-monitoring" action: "Pass" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring # ... This example is a strong Pass action ANP because it delegates the decision to NetworkPolicy objects defined by tenant owners. This pass-monitoring ANP allows all tenant owners grouped at security level internal to choose if their metrics should be scraped by the infrastructures' monitoring service using namespace-scoped NetworkPolicy objects. 24.4.2. BaselineAdminNetworkPolicy BaselineAdminNetworkPolicy (BANP) is a cluster-scoped custom resource definition (CRD). As an OpenShift Container Platform administrator, you can use BANP to set up and enforce optional baseline network policy rules that are overridable by users using NetworkPolicy objects if need be. Rule actions for BANP are allow or deny . The BaselineAdminNetworkPolicy resource is a cluster singleton object that can be used as a guardrail policy in case a passed traffic policy does not match any NetworkPolicy objects in the cluster. A BANP can also be used as a default security model that provides guardrails so that intra-cluster traffic is blocked by default and a user needs to use NetworkPolicy objects to allow known traffic. You must use default as the name when creating a BANP resource. A BANP allows administrators to specify: A subject that consists of a set of namespaces or a namespace. A list of ingress rules to be applied for all ingress traffic towards the subject . A list of egress rules to be applied for all egress traffic from the subject . Note BaselineAdminNetworkPolicy is a TechnologyPreviewNoUpgrade feature that can be enabled on test clusters that are not in production. BaselineAdminNetworkPolicy example Example 24.5. Example YAML file for BANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default 1 spec: subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 2 ingress: 3 - name: "deny-all-ingress-from-tenant-1" 4 action: "Deny" from: - pods: namespaces: namespaceSelector: matchLabels: custom-banp: tenant-1 5 podSelector: matchLabels: custom-banp: tenant-1 6 egress: - name: "allow-all-egress-to-tenant-1" action: "Allow" to: - pods: namespaces: namespaceSelector: matchLabels: custom-banp: tenant-1 podSelector: matchLabels: custom-banp: tenant-1 1 The policy name must be default because BANP is a singleton object. 2 Specify the namespace to apply the BANP to. 3 A BANP has both ingress and egress rules. BANP rules for the spec.ingress and spec.egress fields accept values of Deny and Allow for the action field. 4 Specify a name for the ingress.name . 5 Specify the namespaces to select the pods from to apply the BANP resource. 6 Specify the podSelector.matchLabels name of the pods to apply the BANP resource. BaselineAdminNetworkPolicy Deny example The following BANP singleton ensures that the administrator has set up a default deny policy for all ingress monitoring traffic coming into the tenants at internal security level. When combined with the "AdminNetworkPolicy Pass example", this deny policy acts as a guardrail policy for all ingress traffic that is passed by the ANP pass-monitoring policy. Example 24.6.
Example YAML file for a guardrail Deny rule apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: matchLabels: security: internal ingress: - name: "deny-ingress-from-monitoring" action: "Deny" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring # ... You can use an AdminNetworkPolicy resource with a Pass value for the action field in conjunction with the BaselineAdminNetworkPolicy resource to create a multi-tenant policy. This multi-tenant policy allows one tenant to collect monitoring data on their application while simultaneously not collecting data from a second tenant. As an administrator, if you apply both the "AdminNetworkPolicy Pass action example" and the "BaselineAdminNetwork Policy Deny example", tenants are then left with the ability to choose to create a NetworkPolicy resource that will be evaluated before the BANP. For example, Tenant 1 can set up the following NetworkPolicy resource to monitor ingress traffic: Example 24.7. Example NetworkPolicy apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-monitoring namespace: tenant-1 spec: podSelector: policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring # ... In this scenario, Tenant 1's policy would be evaluated after the "AdminNetworkPolicy Pass action example" and before the "BaselineAdminNetwork Policy Deny example", which denies all ingress monitoring traffic coming into tenants with security level internal . With Tenant 1's NetworkPolicy object in place, they will be able to collect data on their application. Tenant 2, however, who does not have any NetworkPolicy objects in place, will not be able to collect data. As an administrator, you do not monitor internal tenants by default; instead, you created a BANP that allows tenants to use NetworkPolicy objects to override the default behavior of your BANP. 24.5. Tracing Openflow with ovnkube-trace OVN and OVS traffic flows can be simulated in a single utility called ovnkube-trace . The ovnkube-trace utility runs ovn-trace , ovs-appctl ofproto/trace and ovn-detrace and correlates that information in a single output. You can execute the ovnkube-trace binary from a dedicated container. For releases after OpenShift Container Platform 4.7, you can also copy the binary to a local host and execute it from that host. 24.5.1. Installing the ovnkube-trace on local host The ovnkube-trace tool traces packet simulations for arbitrary UDP or TCP traffic between points in an OVN-Kubernetes driven OpenShift Container Platform cluster. Copy the ovnkube-trace binary to your local host, making it available to run against the cluster. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Procedure Create a pod variable by using the following command: $ POD=$(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-control-plane -o name | head -1 | awk -F '/' '{print $NF}') Run the following command on your local host to copy the binary from the ovnkube-control-plane pods: $ oc cp -n openshift-ovn-kubernetes $POD:/usr/bin/ovnkube-trace -c ovnkube-cluster-manager ovnkube-trace Note If you are using Red Hat Enterprise Linux (RHEL) 8 to run the ovnkube-trace tool, you must copy the file /usr/lib/rhel8/ovnkube-trace to your local host.
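If you prefer a single step, the following is a minimal bash sketch that combines the pod lookup and copy commands above; the early-exit check is an addition for convenience and is not part of the documented procedure:

#!/bin/bash
# Sketch: fetch the ovnkube-trace binary from the first ovnkube-control-plane pod.
set -euo pipefail
POD=$(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-control-plane \
  -o name | head -1 | awk -F '/' '{print $NF}')
# Fail early if no control-plane pod was found, for example if OVN-Kubernetes
# is not the active network plugin on this cluster.
[ -n "$POD" ] || { echo "no ovnkube-control-plane pod found" >&2; exit 1; }
oc cp -n openshift-ovn-kubernetes "$POD":/usr/bin/ovnkube-trace \
  -c ovnkube-cluster-manager ovnkube-trace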
Make ovnkube-trace executable by running the following command: $ chmod +x ovnkube-trace Display the options available with ovnkube-trace by running the following command: $ ./ovnkube-trace -help Expected output Usage of ./ovnkube-trace: -addr-family string Address family (ip4 or ip6) to be used for tracing (default "ip4") -dst string dest: destination pod name -dst-ip string destination IP address (meant for tests to external targets) -dst-namespace string k8s namespace of dest pod (default "default") -dst-port string dst-port: destination port (default "80") -kubeconfig string absolute path to the kubeconfig file -loglevel string loglevel: klog level (default "0") -ovn-config-namespace string namespace used by ovn-config itself -service string service: destination service name -skip-detrace skip ovn-detrace command -src string src: source pod name -src-namespace string k8s namespace of source pod (default "default") -tcp use tcp transport protocol -udp use udp transport protocol The command-line arguments supported are familiar Kubernetes constructs, such as namespaces, pods, and services, so you do not need to find the MAC address, the IP address of the destination nodes, or the ICMP type. The log levels are: 0 (minimal output) 2 (more verbose output showing results of trace commands) 5 (debug output) 24.5.2. Running ovnkube-trace Run ovn-trace to simulate packet forwarding within an OVN logical network. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You have installed ovnkube-trace on your local host Example: Testing that DNS resolution works from a deployed pod This example illustrates how to test the DNS resolution from a deployed pod to the core DNS pod that runs in the cluster. Procedure Start a web service in the default namespace by entering the following command: $ oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels="app=web" --expose --port=80 List the pods running in the openshift-dns namespace: $ oc get pods -n openshift-dns Example output NAME READY STATUS RESTARTS AGE dns-default-8s42x 2/2 Running 0 5h8m dns-default-mdw6r 2/2 Running 0 4h58m dns-default-p8t5h 2/2 Running 0 4h58m dns-default-rl6nk 2/2 Running 0 5h8m dns-default-xbgqx 2/2 Running 0 5h8m dns-default-zv8f6 2/2 Running 0 4h58m node-resolver-62jjb 1/1 Running 0 5h8m node-resolver-8z4cj 1/1 Running 0 4h59m node-resolver-bq244 1/1 Running 0 5h8m node-resolver-hc58n 1/1 Running 0 4h59m node-resolver-lm6z4 1/1 Running 0 5h8m node-resolver-zfx5k 1/1 Running 0 5h Run the following ovnkube-trace command to verify DNS resolution is working: $ ./ovnkube-trace \ -src-namespace default \ 1 -src web \ 2 -dst-namespace openshift-dns \ 3 -dst dns-default-p8t5h \ 4 -udp -dst-port 53 \ 5 -loglevel 0 6 1 Namespace of the source pod 2 Source pod name 3 Namespace of destination pod 4 Destination pod name 5 Use the udp transport protocol. Port 53 is the port the DNS service uses.
6 Set the log level to 0 (0 is minimal and 5 is debug) Example output if the source and destination pods land on the same node: ovn-trace source pod to destination pod indicates success from web to dns-default-p8t5h ovn-trace destination pod to source pod indicates success from dns-default-p8t5h to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-p8t5h ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-p8t5h to web ovn-detrace source pod to destination pod indicates success from web to dns-default-p8t5h ovn-detrace destination pod to source pod indicates success from dns-default-p8t5h to web Example output if the source and destination pods land on different nodes: ovn-trace source pod to destination pod indicates success from web to dns-default-8s42x ovn-trace (remote) source pod to destination pod indicates success from web to dns-default-8s42x ovn-trace destination pod to source pod indicates success from dns-default-8s42x to web ovn-trace (remote) destination pod to source pod indicates success from dns-default-8s42x to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-8s42x ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-8s42x to web ovn-detrace source pod to destination pod indicates success from web to dns-default-8s42x ovn-detrace destination pod to source pod indicates success from dns-default-8s42x to web The output indicates success from the deployed pod to the DNS port and also indicates success going back in the other direction. So you know bi-directional traffic is supported on UDP port 53, which the web pod needs in order to do DNS resolution from core DNS. If, for example, that did not work and you wanted to get the ovn-trace , the ovs-appctl ofproto/trace , the ovn-detrace , and more debug-type information, increase the log level to 2 and run the command again as follows: $ ./ovnkube-trace \ -src-namespace default \ -src web \ -dst-namespace openshift-dns \ -dst dns-default-467qw \ -udp -dst-port 53 \ -loglevel 2 The output from this increased log level is too much to list here. In a failure situation, the output of this command shows which flow is dropping that traffic. For example, an egress or ingress network policy may be configured on the cluster that does not allow that traffic. Example: Verifying by using debug output a configured default deny This example illustrates how to identify by using the debug output that an ingress default deny policy blocks traffic. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces.
Save the YAML in the deny-by-default.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default spec: podSelector: {} ingress: [] Apply the policy by entering the following command: $ oc apply -f deny-by-default.yaml Example output networkpolicy.networking.k8s.io/deny-by-default created Start a web service in the default namespace by entering the following command: $ oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: $ oc create namespace prod Run the following command to label the prod namespace: $ oc label namespace/prod purpose=production Run the following command to deploy an alpine image in the prod namespace and start a shell: $ oc run test-6459 --namespace=prod --rm -i -t --image=alpine -- sh Open another terminal session. In this new terminal session, run ovnkube-trace to verify the failure in communication between the source pod test-6459 running in namespace prod and the destination pod running in the default namespace: $ ./ovnkube-trace \ -src-namespace prod \ -src test-6459 \ -dst-namespace default \ -dst web \ -tcp -dst-port 80 \ -loglevel 0 Example output ovn-trace source pod to destination pod indicates failure from test-6459 to web Increase the log level to 2 to expose the reason for the failure by running the following command: $ ./ovnkube-trace \ -src-namespace prod \ -src test-6459 \ -dst-namespace default \ -dst web \ -tcp -dst-port 80 \ -loglevel 2 Example output ... ------------------------------------------------ 3. ls_out_acl_hint (northd.c:7454): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid 12efc456 reg0[8] = 1; reg0[10] = 1; next; 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 0, priority 500, uuid 69372c5d reg8[30..31] = 1; next(4); 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 1, priority 500, uuid 2fa0af89 reg8[30..31] = 2; next(4); 4. ls_out_acl_eval (northd.c:7691): reg8[30..31] == 2 && reg0[10] == 1 && (outport == @a16982411286042166782_ingressDefaultDeny), priority 2000, uuid 447d0dab reg8[17] = 1; ct_commit { ct_mark.blocked = 1; }; next; 1 ... 1 Ingress traffic is blocked due to the default deny policy being in place. Create a policy that allows traffic from all pods in namespaces with the label purpose=production .
Save the YAML in the web-allow-prod.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production Apply the policy by entering the following command: $ oc apply -f web-allow-prod.yaml Run ovnkube-trace to verify that traffic is now allowed by entering the following command: $ ./ovnkube-trace \ -src-namespace prod \ -src test-6459 \ -dst-namespace default \ -dst web \ -tcp -dst-port 80 \ -loglevel 0 Expected output ovn-trace source pod to destination pod indicates success from test-6459 to web ovn-trace destination pod to source pod indicates success from web to test-6459 ovs-appctl ofproto/trace source pod to destination pod indicates success from test-6459 to web ovs-appctl ofproto/trace destination pod to source pod indicates success from web to test-6459 ovn-detrace source pod to destination pod indicates success from test-6459 to web ovn-detrace destination pod to source pod indicates success from web to test-6459 Run the following command in the shell that was opened in step six to connect to the nginx web server: wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 24.5.3. Additional resources Tracing Openflow with ovnkube-trace utility ovnkube-trace 24.6. Migrating from the OpenShift SDN network plugin As a cluster administrator, you can migrate to the OVN-Kubernetes network plugin from the OpenShift SDN network plugin. You can use the offline migration method for migrating from the OpenShift SDN network plugin to the OVN-Kubernetes plugin. The offline migration method is a manual process that includes some downtime. Additional resources About the OVN-Kubernetes network plugin 24.6.1. Migration to the OVN-Kubernetes network plugin Migrating to the OVN-Kubernetes network plugin is a manual process that includes some downtime during which your cluster is unreachable. Important Before you migrate your OpenShift Container Platform cluster to use the OVN-Kubernetes network plugin, update your cluster to the latest z-stream release so that all the latest bug fixes apply to your cluster. Although a rollback procedure is provided, the migration is intended to be a one-way process. A migration to the OVN-Kubernetes network plugin is supported on the following platforms: Bare metal hardware Amazon Web Services (AWS) Google Cloud Platform (GCP) IBM Cloud(R) Microsoft Azure Red Hat OpenStack Platform (RHOSP) VMware vSphere Important Migrating to or from the OVN-Kubernetes network plugin is not supported for managed OpenShift cloud services such as Red Hat OpenShift Dedicated, Azure Red Hat OpenShift (ARO), and Red Hat OpenShift Service on AWS (ROSA). Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin is not supported on Nutanix. 24.6.1.1.
Considerations for migrating to the OVN-Kubernetes network plugin If you have more than 150 nodes in your OpenShift Container Platform cluster, then open a support case for consultation on your migration to the OVN-Kubernetes network plugin. The subnets assigned to nodes and the IP addresses assigned to individual pods are not preserved during the migration. While the OVN-Kubernetes network plugin implements many of the capabilities present in the OpenShift SDN network plugin, the configuration is not the same. If your cluster uses any of the following OpenShift SDN network plugin capabilities, you must manually configure the same capability in the OVN-Kubernetes network plugin: Namespace isolation Egress router pods Before migrating to OVN-Kubernetes, ensure that the following IP address ranges are not in use: 100.64.0.0/16 , 169.254.169.0/29 , 100.88.0.0/16 , fd98::/64 , fd69::/125 , and fd97::/64 . OVN-Kubernetes uses these ranges internally. Do not include any of these ranges in any other CIDR definitions in your cluster or infrastructure. The following sections highlight the differences in configuration between the aforementioned capabilities in OVN-Kubernetes and OpenShift SDN network plugins. Primary network interface The OpenShift SDN plugin allows application of the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) to the primary interface on a node. The OVN-Kubernetes network plugin does not have this capability. If you have an NNCP applied to the primary interface, you must delete the NNCP before migrating to the OVN-Kubernetes network plugin. Deleting the NNCP does not remove the configuration from the primary interface, but with OVN-Kubernetes, the Kubernetes NMState cannot manage this configuration. Instead, the configure-ovs.sh shell script manages the primary interface and the configuration attached to this interface. Namespace isolation OVN-Kubernetes supports only the network policy isolation mode. Important For a cluster using OpenShift SDN that is configured in either the multitenant or subnet isolation mode, you can still migrate to the OVN-Kubernetes network plugin. Note that after the migration operation, multitenant isolation mode is dropped, so you must manually configure network policies to achieve the same level of project-level isolation for pods and services. Egress IP addresses OpenShift SDN supports two different Egress IP modes: In the automatically assigned approach, an egress IP address range is assigned to a node. In the manually assigned approach, a list of one or more egress IP addresses is assigned to a node. The migration process supports migrating Egress IP configurations that use the automatically assigned mode. The differences in configuring an egress IP address between OVN-Kubernetes and OpenShift SDN are described in the following table: Table 24.4. Differences in egress IP address configuration With OVN-Kubernetes: Create an EgressIPs object; Add an annotation on a Node object. With OpenShift SDN: Patch a NetNamespace object; Patch a HostSubnet object. For more information on using egress IP addresses in OVN-Kubernetes, see "Configuring an egress IP address". Egress network policies The difference in configuring an egress network policy, also known as an egress firewall, between OVN-Kubernetes and OpenShift SDN is described in the following table: Table 24.5.
Differences in egress network policy configuration With OVN-Kubernetes: Create an EgressFirewall object in a namespace. With OpenShift SDN: Create an EgressNetworkPolicy object in a namespace. Note Because the name of an EgressFirewall object can only be set to default , after the migration all migrated EgressNetworkPolicy objects are named default , regardless of what the name was under OpenShift SDN. If you subsequently roll back to OpenShift SDN, all EgressNetworkPolicy objects are named default as the prior name is lost. For more information on using an egress firewall in OVN-Kubernetes, see "Configuring an egress firewall for a project". Egress router pods OVN-Kubernetes supports egress router pods in redirect mode. OVN-Kubernetes does not support egress router pods in HTTP proxy mode or DNS proxy mode. When you deploy an egress router with the Cluster Network Operator, you cannot specify a node selector to control which node is used to host the egress router pod. Multicast The difference between enabling multicast traffic on OVN-Kubernetes and OpenShift SDN is described in the following table: Table 24.6. Differences in multicast configuration With OVN-Kubernetes: Add an annotation on a Namespace object. With OpenShift SDN: Add an annotation on a NetNamespace object. For more information on using multicast in OVN-Kubernetes, see "Enabling multicast for a project". Network policies OVN-Kubernetes fully supports the Kubernetes NetworkPolicy API in the networking.k8s.io/v1 API group. No changes are necessary in your network policies when migrating from OpenShift SDN. Additional resources Understanding update channels and releases Asynchronous errata updates 24.6.1.2. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 24.7. Migrating to OVN-Kubernetes from OpenShift SDN User-initiated steps Migration activity Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OVNKubernetes . Make sure the migration field is null before setting it to a value. Cluster Network Operator (CNO) Updates the status of the Network.config.openshift.io CR named cluster accordingly. Machine Config Operator (MCO) Rolls out an update to the systemd configuration necessary for OVN-Kubernetes; the MCO updates a single machine per pool at a time by default, causing the total time the migration takes to increase with the size of the cluster. Update the networkType field of the Network.config.openshift.io CR. CNO Performs the following actions: Destroys the OpenShift SDN control plane pods. Deploys the OVN-Kubernetes control plane pods. Updates the Multus objects to reflect the new network plugin. Reboot each node in the cluster. Cluster As nodes reboot, the cluster assigns IP addresses to pods on the OVN-Kubernetes cluster network. If a rollback to OpenShift SDN is required, the following table describes the process. Important You must wait until the migration process from OpenShift SDN to OVN-Kubernetes network plugin is successful before initiating a rollback. Table 24.8. Performing a rollback to OpenShift SDN User-initiated steps Migration activity Suspend the MCO to ensure that it does not interrupt the migration. The MCO stops. Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OpenShiftSDN . Make sure the migration field is null before setting it to a value.
CNO Updates the status of the Network.config.openshift.io CR named cluster accordingly. Update the networkType field. CNO Performs the following actions: Destroys the OVN-Kubernetes control plane pods. Deploys the OpenShift SDN control plane pods. Updates the Multus objects to reflect the new network plugin. Reboot each node in the cluster. Cluster As nodes reboot, the cluster assigns IP addresses to pods on the OpenShift SDN network. Enable the MCO after all nodes in the cluster reboot. MCO Rolls out an update to the systemd configuration necessary for OpenShift SDN; the MCO updates a single machine per pool at a time by default, so the total time the migration takes increases with the size of the cluster. 24.6.2. Migrating to the OVN-Kubernetes network plugin As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster. Important While performing the migration, your cluster is unavailable and workloads might be interrupted. Perform the migration only when an interruption in service is acceptable. Prerequisites You have a cluster configured with the OpenShift SDN CNI network plugin in the network policy isolation mode. You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have a recent backup of the etcd database. You can manually reboot each node. You checked that your cluster is in a known good state without any errors. You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms. You set all timeouts for webhooks to 3 seconds or removed the webhooks. Procedure To back up the configuration for the cluster network, enter the following command: $ oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml Verify that the OVN_SDN_MIGRATION_TIMEOUT environment variable is set and is equal to 0s by running the following command: #!/bin/bash if [ -n "$OVN_SDN_MIGRATION_TIMEOUT" ] && [ "$OVN_SDN_MIGRATION_TIMEOUT" = "0s" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. co_timeout=${OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout "$co_timeout" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && \ oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && \ oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo "Some ClusterOperators Degraded=False,Progressing=True,or Available=False"; done EOT Remove the configuration from the Cluster Network Operator (CNO) configuration object by running the following command: $ oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{"spec":{"migration":null}}' Delete the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) that defines the primary network interface for the OpenShift SDN network plugin by completing the following steps: Check that the existing NNCP CR bonded the primary interface to your cluster by entering the following command: $ oc get nncp Example output NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured Network Manager stores the connection profile for the bonded primary interface in the /etc/NetworkManager/system-connections system path.
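Before deleting the NNCP, you can optionally confirm what Network Manager has stored on a node by listing that directory from a debug shell. This check is not part of the documented procedure; <node_name> is a placeholder for one of your cluster nodes:

# Optional check: list the stored NetworkManager connection profiles on a node.
oc debug node/<node_name> -- chroot /host ls -l /etc/NetworkManager/system-connections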
Remove the NNCP from your cluster: $ oc delete nncp <nncp_manifest_filename> To prepare all the nodes for the migration, set the migration field on the CNO configuration object by running the following command: $ oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes" } } }' Note This step does not deploy OVN-Kubernetes immediately. Instead, specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster in preparation for the OVN-Kubernetes deployment. Check that the reboot is finished by running the following command: $ oc get mcp Check that all cluster Operators are available by running the following command: $ oc get co Optional: You can disable automatic migration of several OpenShift SDN capabilities to the OVN-Kubernetes equivalents: Egress IPs Egress firewall Multicast To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys: $ oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }' where: bool : Specifies whether to enable migration of the feature. The default is true . Optional: You can customize the following settings for OVN-Kubernetes to meet your network infrastructure requirements: Maximum transmission unit (MTU). Consider the following before customizing the MTU for this optional step: If you use the default MTU, and you want to keep the default MTU during migration, this step can be ignored. If you used a custom MTU, and you want to keep the custom MTU during migration, you must declare the custom MTU value in this step. This step does not work if you want to change the MTU value during migration. Instead, you must first follow the instructions for "Changing the cluster MTU". You can then keep the custom MTU value by performing this procedure and declaring the custom MTU value in this step. Note OpenShift SDN and OVN-Kubernetes have different overlay overhead. MTU values should be selected by following the guidelines found on the "MTU value selection" page. Geneve (Generic Network Virtualization Encapsulation) overlay network port OVN-Kubernetes IPv4 internal subnet To customize either of the previously noted settings, enter and customize the following command. If you do not need to change the default value, omit the key from the patch. $ oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":<mtu>, "genevePort":<port>, "v4InternalSubnet":"<ipv4_subnet>" }}}}' where: mtu The MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value. port The UDP port for the Geneve overlay network. If a value is not specified, the default is 6081 . The port cannot be the same as the VXLAN port that is used by OpenShift SDN. The default value for the VXLAN port is 4789 . ipv4_subnet An IPv4 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation.
The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is 100.64.0.0/16 . Example patch command to update mtu field $ oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":1200 }}}}' As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: $ oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: $ oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: $ oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors. To list the pods, enter the following command: $ oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five character alphanumeric sequence. Display the pod log for the first machine config daemon pod shown in the output by entering the following command: $ oc logs <pod> -n openshift-machine-config-operator where pod is the name of a machine config daemon pod. Resolve any errors in the logs shown by the output from the command.
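To scan every machine config daemon pod for errors at once rather than checking them one at a time, you can use a loop similar to the following sketch. The k8s-app=machine-config-daemon label selector and the grep pattern are assumptions that might need adjusting for your environment:

#!/bin/bash
# Sketch: print the most recent error lines, if any, from each machine config daemon pod.
for pod in $(oc get pods -n openshift-machine-config-operator \
  -l k8s-app=machine-config-daemon -o name); do
  echo "---- $pod ----"
  oc logs -n openshift-machine-config-operator -c machine-config-daemon "$pod" \
    | grep -i error | tail -5
done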
To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands: To specify the network provider without changing the cluster network IP address block, enter the following command: $ oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }' To specify a different cluster network IP address block, enter the following command: $ oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "clusterNetwork": [ { "cidr": "<cidr>", "hostPrefix": <prefix> } ], "networkType": "OVNKubernetes" } }' where cidr is a CIDR block and prefix is the slice of the CIDR block apportioned to each node in your cluster. You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally. Important You cannot change the service network address block during the migration. Verify that the Multus daemon set rollout is complete before continuing with subsequent steps: $ oc -n openshift-multus rollout status daemonset/multus The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart. Example output Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated... ... Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available... daemon set "multus" successfully rolled out To complete changing the network plugin, reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches: Important The following scripts reboot all of the nodes in the cluster at the same time. This can cause your cluster to be unstable. Another option is to reboot your nodes manually one at a time. Rebooting nodes one-by-one causes considerable downtime in a cluster with many nodes. Cluster Operators will not work correctly before you reboot the nodes. With the oc rsh command, you can use a bash script similar to the following: #!/bin/bash readarray -t POD_NODES <<< "$(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print $1" "$7}')" for i in "${POD_NODES[@]}" do read -r POD NODE <<< "$i" until oc rsh -n openshift-machine-config-operator "$POD" chroot /rootfs shutdown -r +1 do echo "cannot reboot node $NODE, retry" && sleep 3 done done With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. #!/bin/bash for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node $ip" ssh -o StrictHostKeyChecking=no core@$ip sudo shutdown -r -t 3 done Confirm that the migration succeeded: To confirm that the network plugin is OVN-Kubernetes, enter the following command. The value of status.networkType must be OVNKubernetes . $ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' To confirm that the cluster nodes are in the Ready state, enter the following command: $ oc get nodes To confirm that your pods are not in an error state, enter the following command: $ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node.
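If the full pod listing is too long to scan by eye, a field selector can narrow the output to pods that are not running. This is a convenience sketch, not part of the documented procedure; note that it also lists pods in the Succeeded phase, such as completed jobs:

# Sketch: list pods that are not in the Running phase, sorted by node.
oc get pods --all-namespaces --field-selector=status.phase!=Running \
  -o wide --sort-by='{.spec.nodeName}'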
To confirm that all of the cluster Operators are not in an abnormal state, enter the following command: $ oc get co The status of every cluster Operator must be the following: AVAILABLE="True" , PROGRESSING="False" , DEGRADED="False" . If a cluster Operator is not available or degraded, check the logs for the cluster Operator for more information. Complete the following steps only if the migration succeeds and your cluster is in a good state: To remove the migration configuration from the CNO configuration object, enter the following command: $ oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' To remove custom configuration for the OpenShift SDN network provider, enter the following command: $ oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "defaultNetwork": { "openshiftSDNConfig": null } } }' To remove the OpenShift SDN network provider namespace, enter the following command: $ oc delete namespace openshift-sdn Next steps Optional: After cluster migration, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. For more information, see "Converting to IPv4/IPv6 dual-stack networking". 24.6.3. Additional resources Red Hat OpenShift Network Calculator Configuration parameters for the OVN-Kubernetes network plugin Backing up etcd About network policy Changing the cluster MTU MTU value selection Converting to IPv4/IPv6 dual-stack networking OVN-Kubernetes capabilities Configuring an egress IP address Configuring an egress firewall for a project OVN-Kubernetes egress firewall blocks process to deploy application as DeploymentConfig Enabling multicast for a project OpenShift SDN capabilities Configuring egress IPs for a project Configuring an egress firewall for a project Enabling multicast for a project Network [operator.openshift.io/v1 ] 24.7. Rolling back to the OpenShift SDN network provider As a cluster administrator, you can roll back to OpenShift SDN from the OVN-Kubernetes network plugin only after the migration to the OVN-Kubernetes network plugin is completed and successful. 24.7.1. Migrating to the OpenShift SDN network plugin Cluster administrators can roll back to the OpenShift SDN Container Network Interface (CNI) network plugin by using the offline migration method. During the migration, you must manually reboot every node in your cluster. With the offline migration method, there is some downtime, during which your cluster is unreachable. Important You must wait until the migration process from OpenShift SDN to OVN-Kubernetes network plugin is successful before initiating a rollback. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin. A recent backup of the etcd database is available. A reboot can be triggered manually for each node. The cluster is in a known good state, without any errors.
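Before you begin, it can be useful to keep a copy of the current network configuration, mirroring the backup step from the forward migration. This is a suggested precaution rather than a documented prerequisite; the output filename is arbitrary:

# Sketch: save the current cluster network configuration before rolling back.
oc get Network.config.openshift.io cluster -o yaml > cluster-ovn-kubernetes.yaml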
Procedure Stop all of the machine configuration pools managed by the Machine Config Operator (MCO): Stop the master configuration pool by entering the following command in your CLI: $ oc patch MachineConfigPool master --type='merge' --patch \ '{ "spec": { "paused": true } }' Stop the worker machine configuration pool by entering the following command in your CLI: $ oc patch MachineConfigPool worker --type='merge' --patch \ '{ "spec":{ "paused": true } }' To prepare for the migration, set the migration field to null by entering the following command in your CLI: $ oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' Check that the migration status is empty for the Network.config.openshift.io object by entering the following command in your CLI. Empty command output indicates that the object is not in a migration operation. $ oc get Network.config cluster -o jsonpath='{.status.migration}' Apply the patch to the Network.operator.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI: $ oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN" } } }' Important If you applied the patch to the Network.config.openshift.io object before the patch operation finalizes on the Network.operator.openshift.io object, the Cluster Network Operator (CNO) enters a degraded state and this causes a slight delay until the CNO recovers from the degraded state. Confirm that the migration status of the network plugin for the Network.config.openshift.io cluster object is OpenShiftSDN by entering the following command in your CLI: $ oc get Network.config cluster -o jsonpath='{.status.migration.networkType}' Apply the patch to the Network.config.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI: $ oc patch Network.config.openshift.io cluster --type='merge' \ --patch '{ "spec": { "networkType": "OpenShiftSDN" } }' Optional: Disable automatic migration of several OVN-Kubernetes capabilities to the OpenShift SDN equivalents: Egress IPs Egress firewall Multicast To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys: $ oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }' where: bool : Specifies whether to enable migration of the feature. The default is true . Optional: You can customize the following settings for OpenShift SDN to meet your network infrastructure requirements: Maximum transmission unit (MTU) VXLAN port To customize either or both of the previously noted settings, customize and enter the following command in your CLI. If you do not need to change the default value, omit the key from the patch. $ oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":<mtu>, "vxlanPort":<port> }}}}' mtu The MTU for the VXLAN overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 50 less than the smallest node MTU value. port The UDP port for the VXLAN overlay network. If a value is not specified, the default is 4789 .
The port cannot be the same as the Geneve port that is used by OVN-Kubernetes. The default value for the Geneve port is 6081 . Example patch command $ oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":1200 }}}}' Reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches: With the oc rsh command, you can use a bash script similar to the following: #!/bin/bash readarray -t POD_NODES <<< "$(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print $1" "$7}')" for i in "${POD_NODES[@]}" do read -r POD NODE <<< "$i" until oc rsh -n openshift-machine-config-operator "$POD" chroot /rootfs shutdown -r +1 do echo "cannot reboot node $NODE, retry" && sleep 3 done done With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. #!/bin/bash for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node $ip" ssh -o StrictHostKeyChecking=no core@$ip sudo shutdown -r -t 3 done Wait until the Multus daemon set rollout completes. Run the following command to see your rollout status: $ oc -n openshift-multus rollout status daemonset/multus The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart. Example output Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated... ... Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available... daemon set "multus" successfully rolled out After the nodes in your cluster have rebooted and the multus pods are rolled out, start all of the machine configuration pools by running the following commands: Start the master configuration pool: $ oc patch MachineConfigPool master --type='merge' --patch \ '{ "spec": { "paused": false } }' Start the worker configuration pool: $ oc patch MachineConfigPool worker --type='merge' --patch \ '{ "spec": { "paused": false } }' As the MCO updates machines in each config pool, it reboots each node. By default, the MCO updates a single machine per pool at a time, so the time that the migration requires to complete grows with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command in your CLI: $ oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command in your CLI: $ oc get machineconfig <config_name> -o yaml where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.
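As with the forward migration, you can narrow that output to the relevant systemd update. Assuming the rollback applied cleanly, the configure-ovs.sh unit should now be invoked with the OpenShiftSDN argument; the expected line below mirrors the forward-migration check and is an assumption, not quoted output:

# Sketch: check that the machine config now selects the OpenShift SDN plugin.
oc get machineconfig <config_name> -o yaml | grep ExecStart
# Expected: ExecStart=/usr/local/bin/configure-ovs.sh OpenShiftSDN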
Confirm that the migration succeeded: To confirm that the network plugin is OpenShift SDN, enter the following command in your CLI. The value of status.networkType must be OpenShiftSDN . USD oc get Network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' To confirm that the cluster nodes are in the Ready state, enter the following command in your CLI: USD oc get nodes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors. To list the pods, enter the following command in your CLI: USD oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five character alphanumeric sequence. To display the pod log for each machine config daemon pod shown in the output, enter the following command in your CLI: USD oc logs <pod> -n openshift-machine-config-operator where pod is the name of a machine config daemon pod. Resolve any errors in the logs shown by the output from the command. To confirm that your pods are not in an error state, enter the following command in your CLI: USD oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node. Complete the following steps only if the migration succeeds and your cluster is in a good state: To remove the migration configuration from the Cluster Network Operator configuration object, enter the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' To remove the OVN-Kubernetes configuration, enter the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "defaultNetwork": { "ovnKubernetesConfig":null } } }' To remove the OVN-Kubernetes network provider namespace, enter the following command in your CLI: USD oc delete namespace openshift-ovn-kubernetes 24.8. Migrating from the Kuryr network plugin to the OVN-Kubernetes network plugin As the administrator of a cluster that runs on Red Hat OpenStack Platform (RHOSP), you can migrate to the OVN-Kubernetes network plugin from the Kuryr SDN network plugin. To learn more about OVN-Kubernetes, read About the OVN-Kubernetes network plugin . 24.8.1. Migration to the OVN-Kubernetes network provider You can manually migrate a cluster that runs on Red Hat OpenStack Platform (RHOSP) to the OVN-Kubernetes network provider. Important Migration to OVN-Kubernetes is a one-way process. During migration, your cluster will be unreachable for a brief time. 24.8.1.1. Considerations when migrating to the OVN-Kubernetes network provider Kubernetes namespaces are kept by Kuryr in separate RHOSP networking service (Neutron) subnets. Those subnets and the IP addresses that are assigned to individual pods are not preserved during the migration. 24.8.1.2. 
How the migration process works The following table summarizes the migration process by relating the steps that you perform with the actions that your cluster and Operators take. Table 24.9. The Kuryr to OVN-Kubernetes migration process User-initiated steps Migration activity Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OVNKubernetes . Verify that the value of the migration field prints the null value before setting it to another value. Cluster Network Operator (CNO) Updates the status of the Network.config.openshift.io CR named cluster accordingly. Machine Config Operator (MCO) Deploys an update to the systemd configuration that is required by OVN-Kubernetes. By default, the MCO updates a single machine per pool at a time. As a result, large clusters have longer migration times. Update the networkType field of the Network.config.openshift.io CR. CNO Performs the following actions: Destroys the Kuryr control plane pods: the Kuryr CNIs and the Kuryr controller. Deploys the OVN-Kubernetes control plane pods. Updates the Multus objects to reflect the new network plugin. Reboot each node in the cluster. Cluster As nodes reboot, the cluster assigns IP addresses to pods on the OVN-Kubernetes cluster network. Clean up the remaining resources that Kuryr controlled. Cluster Retains RHOSP resources that need to be freed, as well as OpenShift Container Platform resources that need to be configured. 24.8.2. Migrating to the OVN-Kubernetes network plugin As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes. Important During the migration, you must reboot every node in your cluster. Your cluster is unavailable and workloads might be interrupted. Perform the migration only if an interruption in service is acceptable. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. A recent backup of the etcd database is available. You can manually reboot each node. The cluster you plan to migrate is in a known good state, without any errors. You installed a Python interpreter. You installed the openstacksdk Python package. You installed the openstack CLI tool. You have access to the underlying RHOSP cloud. Procedure Back up the configuration for the cluster network by running the following command: USD oc get Network.config.openshift.io cluster -o yaml > cluster-kuryr.yaml To set the CLUSTERID variable, run the following command: USD CLUSTERID=USD(oc get infrastructure.config.openshift.io cluster -o=jsonpath='{.status.infrastructureName}') To prepare all the nodes for the migration, set the migration field on the Cluster Network Operator configuration object by running the following command: USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{"spec": {"migration": {"networkType": "OVNKubernetes"}}}' Note This step does not deploy OVN-Kubernetes immediately. Specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster. This prepares the cluster for the OVN-Kubernetes deployment.
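As an optional, illustrative sanity check, you can confirm that the migration field was recorded before the MCO starts to roll out new machine configs:

# Illustrative check: print the requested migration target.
oc get Network.operator.openshift.io cluster -o jsonpath='{.spec.migration.networkType}{"\n"}'
# Expected output: OVNKubernetes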
Optional: Customize the following settings for OVN-Kubernetes for your network infrastructure requirements: Maximum transmission unit (MTU) Geneve (Generic Network Virtualization Encapsulation) overlay network port OVN-Kubernetes IPv4 internal subnet OVN-Kubernetes IPv6 internal subnet To customize these settings, enter and customize the following command: USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":<mtu>, "genevePort":<port>, "v4InternalSubnet":"<ipv4_subnet>", "v6InternalSubnet":"<ipv6_subnet>" }}}}' where: mtu Specifies the MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value. port Specifies the UDP port for the Geneve overlay network. If a value is not specified, the default is 6081 . The port cannot be the same as the VXLAN port that is used by Kuryr. The default value for the VXLAN port is 4789 . ipv4_subnet Specifies an IPv4 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is 100.64.0.0/16 . ipv6_subnet Specifies an IPv6 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/48 . If you do not need to change the default value, omit the key from the patch. Example patch command to update mtu field USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":1200 }}}}' Check the machine config pool status by entering the following command: USD oc get mcp While the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated before continuing. A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time. Large clusters take more time to migrate than small clusters. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b 1 machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b 2 machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Review the output from the step. The following statements must be true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. 
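Rather than polling oc get mcp by hand, you can block until the rollout settles. The following command is an illustrative alternative that waits for every machine config pool to report the Updated condition; adjust the timeout to the size of your cluster:

# Illustrative: wait for all machine config pools to finish updating.
oc wait mcp --all --for=condition=Updated=True --timeout=60m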
To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where: <config_name> Specifies the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: Example output ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors: To list the pods, enter the following command: USD oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five-character alphanumeric sequence. Display the pod log for the first machine config daemon pod shown in the output by entering the following command: USD oc logs <pod> -n openshift-machine-config-operator where: <pod> Specifies the name of a machine config daemon pod. Resolve any errors in the logs shown by the output from the command. To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands: To specify the network provider without changing the cluster network IP address block, enter the following command: USD oc patch Network.config.openshift.io cluster --type=merge \ --patch '{"spec": {"networkType": "OVNKubernetes"}}' To specify a different cluster network IP address block, enter the following command: USD oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "clusterNetwork": [ { "cidr": "<cidr>", "hostPrefix": "<prefix>" } ], "networkType": "OVNKubernetes" } }' where: <cidr> Specifies a CIDR block. <prefix> Specifies a slice of the CIDR block that is apportioned to each node in your cluster. Important You cannot change the service network address block during the migration. You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally. To complete the migration, reboot each node in your cluster. For example, you can use a bash script similar to the following example. The script assumes that you can connect to each host by using ssh and that you have configured sudo to not prompt for a password: #!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node USDip" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done Note If SSH access is not available, you can use the openstack command: USD for name in USD(openstack server list --name "USD{CLUSTERID}*" -f value -c Name); do openstack server reboot "USD{name}"; done Alternatively, you might be able to reboot each node through the management portal for your infrastructure provider.
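If you use the SSH approach, a more conservative variant reboots one node at a time and waits for the node to report Ready again before moving on, which reduces the risk of taking down too much capacity at once. The following is an illustrative sketch only, with the same SSH and sudo assumptions as the script above:

#!/bin/bash
# Illustrative: serial reboot that waits for each node to return to Ready.
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do
  ip=$(oc get node "$node" -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
  echo "rebooting node $node ($ip)"
  ssh -o StrictHostKeyChecking=no "core@$ip" sudo shutdown -r now
  sleep 60  # allow time for the node to go down
  oc wait node "$node" --for=condition=Ready --timeout=20m
done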
If none of these options is available to you, contact the appropriate authority to gain access to the virtual machines, either over SSH or through the management portal and the OpenStack client. Verification Confirm that the migration succeeded, and then remove the migration resources: To confirm that the network plugin is OVN-Kubernetes, enter the following command. USD oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' The value of status.networkType must be OVNKubernetes . To confirm that the cluster nodes are in the Ready state, enter the following command: USD oc get nodes To confirm that your pods are not in an error state, enter the following command: USD oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node. To confirm that no cluster Operator is in an abnormal state, enter the following command: USD oc get co The status of every cluster Operator must be the following: AVAILABLE="True" , PROGRESSING="False" , DEGRADED="False" . If a cluster Operator is not available or degraded, check the logs for the cluster Operator for more information. Important Do not proceed if any of the verification steps indicate errors. You might encounter pods that have a Terminating state due to finalizers that are removed during clean up. They are not an error indication. If the migration completed and your cluster is in a good state, remove the migration configuration from the CNO configuration object by entering the following command: USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{"spec": {"migration": null}}' 24.8.3. Cleaning up resources after migration After migration from the Kuryr network plugin to the OVN-Kubernetes network plugin, you must clean up the resources that Kuryr created previously. Note The clean up process relies on a Python virtual environment to ensure that the package versions that you use support tags for Octavia objects. You do not need a virtual environment if you are certain that your environment uses at minimum: The openstacksdk Python package version 0.54.0 The python-openstackclient Python package version 5.5.0 The python-octaviaclient Python package version 2.3.0 If you decide to use these particular versions, be sure to pull python-neutronclient prior to version 9.0.0, because version 9.0.0 prevents you from accessing trunks. Prerequisites You installed the OpenShift Container Platform CLI ( oc ). You installed a Python interpreter. You installed the openstacksdk Python package. You installed the openstack CLI. You have access to the underlying RHOSP cloud. You can access the cluster as a user with the cluster-admin role. Procedure Create a clean-up Python virtual environment: Create a temporary directory for your environment. For example: USD python3 -m venv /tmp/venv The virtual environment located in the /tmp/venv directory is used in all clean up examples. Enter the virtual environment.
For example: USD source /tmp/venv/bin/activate Upgrade the pip command in the virtual environment by running the following command: (venv) USD pip install --upgrade pip Install the required Python packages by running the following command: (venv) USD pip install openstacksdk==0.54.0 python-openstackclient==5.5.0 python-octaviaclient==2.3.0 'python-neutronclient<9.0.0' In your terminal, set variables for the cluster and Kuryr identifiers by running the following commands: Set the cluster ID: (venv) USD CLUSTERID=USD(oc get infrastructure.config.openshift.io cluster -o=jsonpath='{.status.infrastructureName}') Set the cluster tag: (venv) USD CLUSTERTAG="openshiftClusterID=USD{CLUSTERID}" Set the router ID: (venv) USD ROUTERID=USD(oc get kuryrnetwork -A --no-headers -o custom-columns=":status.routerId"|uniq) Create a Bash function that removes finalizers from specified resources by running the following command: (venv) USD function REMFIN { local resource=USD1 local finalizer=USD2 for res in USD(oc get "USD{resource}" -A --template='{{range USDi,USDp := .items}}{{ USDp.metadata.name }}|{{ USDp.metadata.namespace }}{{"\n"}}{{end}}'); do name=USD{res%%|*} ns=USD{res##*|} yaml=USD(oc get -n "USD{ns}" "USD{resource}" "USD{name}" -o yaml) if echo "USD{yaml}" | grep -q "USD{finalizer}"; then echo "USD{yaml}" | grep -v "USD{finalizer}" | oc replace -n "USD{ns}" "USD{resource}" "USD{name}" -f - fi done } The function takes two parameters: the first parameter is the name of the resource, and the second parameter is the finalizer to remove. For each matching resource, the function fetches the stored definition, removes every line that contains the specified finalizer, and replaces the definition with the edited copy. To remove Kuryr finalizers from services, enter the following command: (venv) USD REMFIN services kuryr.openstack.org/service-finalizer To remove the Kuryr service-subnet-gateway-ip service, enter the following command: (venv) USD if oc get -n openshift-kuryr service service-subnet-gateway-ip &>/dev/null; then oc -n openshift-kuryr delete service service-subnet-gateway-ip fi To remove all tagged RHOSP load balancers from Octavia, enter the following command: (venv) USD for lb in USD(openstack loadbalancer list --tags "USD{CLUSTERTAG}" -f value -c id); do openstack loadbalancer delete --cascade "USD{lb}" done To remove Kuryr finalizers from all KuryrLoadBalancer CRs, enter the following command: (venv) USD REMFIN kuryrloadbalancers.openstack.org kuryr.openstack.org/kuryrloadbalancer-finalizers To remove the openshift-kuryr namespace, enter the following command: (venv) USD oc delete namespace openshift-kuryr To remove the Kuryr service subnet from the router, enter the following command: (venv) USD openstack router remove subnet "USD{ROUTERID}" "USD{CLUSTERID}-kuryr-service-subnet" To remove the Kuryr service network, enter the following command: (venv) USD openstack network delete "USD{CLUSTERID}-kuryr-service-network" To remove Kuryr finalizers from all pods, enter the following command: (venv) USD REMFIN pods kuryr.openstack.org/pod-finalizer To remove Kuryr finalizers from all KuryrPort CRs, enter the following command: (venv) USD REMFIN kuryrports.openstack.org kuryr.openstack.org/kuryrport-finalizer This command deletes the KuryrPort CRs.
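At this point you can optionally confirm that the finalizer removal is taking effect. The following commands are an illustrative check that counts any remaining Kuryr-owned objects; a count of zero, or a missing-CRD error once the CRDs themselves are removed, indicates progress:

# Illustrative: count the remaining Kuryr custom resources.
oc get kuryrports.openstack.org -A --no-headers 2>/dev/null | wc -l
oc get kuryrloadbalancers.openstack.org -A --no-headers 2>/dev/null | wc -l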
To remove Kuryr finalizers from network policies, enter the following command: (venv) USD REMFIN networkpolicy kuryr.openstack.org/networkpolicy-finalizer To remove Kuryr finalizers from remaining network policies, enter the following command: (venv) USD REMFIN kuryrnetworkpolicies.openstack.org kuryr.openstack.org/networkpolicy-finalizer To remove subports that Kuryr created from trunks, enter the following command: (venv) USD mapfile trunks < <(python -c "import openstack; n = openstack.connect().network; print('\n'.join([x.id for x in n.trunks(any_tags='USDCLUSTERTAG')]))") && \ i=0 && \ for trunk in "USD{trunks[@]}"; do trunk=USD(echo "USDtrunk"|tr -d '\n') i=USD((i+1)) echo "Processing trunk USDtrunk, USD{i}/USD{#trunks[@]}." subports=() for subport in USD(python -c "import openstack; n = openstack.connect().network; print(' '.join([x['port_id'] for x in n.get_trunk('USDtrunk').sub_ports if 'USDCLUSTERTAG' in n.get_port(x['port_id']).tags]))"); do subports+=("USDsubport"); done args=() for sub in "USD{subports[@]}" ; do args+=("--subport USDsub") done if [ USD{#args[@]} -gt 0 ]; then openstack network trunk unset USD{args[*]} "USD{trunk}" fi done To retrieve all networks and subnets from KuryrNetwork CRs and remove the ports, router interfaces, and the networks themselves, enter the following command: (venv) USD mapfile -t kuryrnetworks < <(oc get kuryrnetwork -A --template='{{range USDi,USDp := .items}}{{ USDp.status.netId }}|{{ USDp.status.subnetId }}{{"\n"}}{{end}}') && \ i=0 && \ for kn in "USD{kuryrnetworks[@]}"; do i=USD((i+1)) netID=USD{kn%%|*} subnetID=USD{kn##*|} echo "Processing network USDnetID, USD{i}/USD{#kuryrnetworks[@]}" # Remove all ports from the network. for port in USD(python -c "import openstack; n = openstack.connect().network; print(' '.join([x.id for x in n.ports(network_id='USDnetID') if x.device_owner != 'network:router_interface']))"); do ( openstack port delete "USD{port}" ) & # Only allow 20 jobs in parallel. if [[ USD(jobs -r -p | wc -l) -ge 20 ]]; then wait -n fi done wait # Remove the subnet from the router. openstack router remove subnet "USD{ROUTERID}" "USD{subnetID}" # Remove the network. openstack network delete "USD{netID}" done To remove the Kuryr security group, enter the following command: (venv) USD openstack security group delete "USD{CLUSTERID}-kuryr-pods-security-group" To remove all tagged subnet pools, enter the following command: (venv) USD for subnetpool in USD(openstack subnet pool list --tags "USD{CLUSTERTAG}" -f value -c ID); do openstack subnet pool delete "USD{subnetpool}" done To check that all of the networks based on KuryrNetwork CRs were removed, enter the following command: (venv) USD networks=USD(oc get kuryrnetwork -A --no-headers -o custom-columns=":status.netId") && \ for existingNet in USD(openstack network list --tags "USD{CLUSTERTAG}" -f value -c ID); do if [[ USDnetworks =~ USDexistingNet ]]; then echo "Network still exists: USDexistingNet" fi done If the command returns any existing networks, investigate and remove them before you continue.
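After you have reviewed the reported networks and confirmed that nothing else uses them, one illustrative way to remove the stragglers is to loop over the remaining cluster-tagged networks:

# Illustrative clean up of leftover cluster-tagged networks. Review the list
# printed by the previous check before running this loop.
for net in $(openstack network list --tags "${CLUSTERTAG}" -f value -c ID); do
  echo "deleting leftover network ${net}"
  openstack network delete "${net}"
done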
To remove security groups that are related to network policy, enter the following command: (venv) USD for sgid in USD(openstack security group list -f value -c ID -c Description | grep 'Kuryr-Kubernetes Network Policy' | cut -f 1 -d ' '); do openstack security group delete "USD{sgid}" done To remove finalizers from KuryrNetwork CRs, enter the following command: (venv) USD REMFIN kuryrnetworks.openstack.org kuryrnetwork.finalizers.kuryr.openstack.org To remove the Kuryr router, enter the following command: (venv) USD if python3 -c "import sys; import openstack; n = openstack.connect().network; r = n.get_router('USDROUTERID'); sys.exit(0) if r.description != 'Created By OpenShift Installer' else sys.exit(1)"; then openstack router delete "USD{ROUTERID}" fi 24.8.4. Additional resources Configuration parameters for the OVN-Kubernetes network plugin Backing up etcd About network policy To learn more about OVN-Kubernetes capabilities, see: Configuring an egress IP address Configuring an egress firewall for a project Enabling multicast for a project 24.9. Converting to IPv4/IPv6 dual-stack networking As a cluster administrator, you can convert your IPv4 single-stack cluster to a dual-stack cluster network that supports IPv4 and IPv6 address families. After converting to dual-stack, all newly created pods are dual-stack enabled. Note While using dual-stack networking, you cannot use IPv4-mapped IPv6 addresses, such as ::FFFF:198.51.100.1 , where IPv6 is required. A dual-stack network is supported on clusters provisioned on bare metal, IBM Power(R), IBM Z(R) infrastructure, single-node OpenShift, and VMware vSphere. 24.9.1. Converting to a dual-stack cluster network As a cluster administrator, you can convert your single-stack cluster network to a dual-stack cluster network. Note After converting to dual-stack networking, only newly created pods are assigned IPv6 addresses. Any pods created before the conversion must be recreated to receive an IPv6 address. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Your cluster uses the OVN-Kubernetes network plugin. The cluster nodes have IPv6 addresses. You have configured an IPv6-enabled router based on your infrastructure. Procedure To specify IPv6 address blocks for the cluster and service networks, create a file containing the following YAML: - op: add path: /spec/clusterNetwork/- value: 1 cidr: fd01::/48 hostPrefix: 64 - op: add path: /spec/serviceNetwork/- value: fd02::/112 2 1 Specify an object with the cidr and hostPrefix fields. The host prefix must be 64 or greater. The IPv6 CIDR prefix must be large enough to accommodate the specified host prefix. 2 Specify an IPv6 CIDR with a prefix of 112 . Kubernetes uses only the lowest 16 bits. For a prefix of 112 , IP addresses are assigned from bits 112 to 128. To patch the cluster network configuration, enter the following command: USD oc patch network.config.openshift.io cluster \ --type='json' --patch-file <file>.yaml where: file Specifies the name of the file you created in the previous step. Example output network.config.openshift.io/cluster patched Verification Complete the following step to verify that the cluster network recognizes the IPv6 address blocks that you specified in the procedure.
Display the network configuration: USD oc describe network Example output Status: Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cidr: fd01::/48 Host Prefix: 64 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 172.30.0.0/16 fd02::/112 24.9.2. Converting to a single-stack cluster network As a cluster administrator, you can convert your dual-stack cluster network to a single-stack cluster network. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Your cluster uses the OVN-Kubernetes network plugin. The cluster nodes have IPv6 addresses. You have enabled dual-stack networking. Procedure Edit the networks.config.openshift.io custom resource (CR) by running the following command: USD oc edit networks.config.openshift.io Remove the IPv6 specific configuration that you have added to the cidr and hostPrefix fields in the procedure. 24.10. Configuring OVN-Kubernetes internal IP address subnets As a cluster administrator, you can change the IP address ranges that the OVN-Kubernetes network plugin uses for the join and transit subnets. 24.10.1. Configuring the OVN-Kubernetes join subnet You can change the join subnet used by OVN-Kubernetes to avoid conflicting with any existing subnets already in use in your environment. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Ensure that the cluster uses the OVN-Kubernetes network plugin. Procedure To change the OVN-Kubernetes join subnet, enter the following command: USD oc patch network.operator.openshift.io cluster --type='merge' \ -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig": {"ipv4":{"internalJoinSubnet": "<join_subnet>"}, "ipv6":{"internalJoinSubnet": "<join_subnet>"}}}}}' where: <join_subnet> Specifies an IP address subnet for internal use by OVN-Kubernetes. The subnet must be larger than the number of nodes in the cluster and it must be large enough to accommodate one IP address per node in the cluster. This subnet cannot overlap with any other subnets used by OpenShift Container Platform or on the host itself. The default value for IPv4 is 100.64.0.0/16 and the default value for IPv6 is fd98::/64 . Example output network.operator.openshift.io/cluster patched Verification To confirm that the configuration is active, enter the following command: USD oc get network.operator.openshift.io \ -o jsonpath="{.items[0].spec.defaultNetwork}" It can take up to 30 minutes for this change to take effect. Example output 24.10.2. Configuring the OVN-Kubernetes transit subnet You can change the transit subnet used by OVN-Kubernetes to avoid conflicting with any existing subnets already in use in your environment. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Ensure that the cluster uses the OVN-Kubernetes network plugin. Procedure To change the OVN-Kubernetes transit subnet, enter the following command: USD oc patch network.operator.openshift.io cluster --type='merge' \ -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig": {"ipv4":{"internalTransitSwitchSubnet": "<transit_subnet>"}, "ipv6":{"internalTransitSwitchSubnet": "<transit_subnet>"}}}}}' where: <transit_subnet> Specifies an IP address subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. 
The default value for IPv4 is 100.88.0.0/16 and the default value for IPv6 is fd97::/64 . Example output network.operator.openshift.io/cluster patched Verification To confirm that the configuration is active, enter the following command: USD oc get network.operator.openshift.io \ -o jsonpath="{.items[0].spec.defaultNetwork}" It can take up to 30 minutes for this change to take effect. Example output 24.11. Logging for egress firewall and network policy rules As a cluster administrator, you can configure audit logging for your cluster and enable logging for one or more namespaces. OpenShift Container Platform produces audit logs for both egress firewalls and network policies. Note Audit logging is available for only the OVN-Kubernetes network plugin . 24.11.1. Audit logging The OVN-Kubernetes network plugin uses Open Virtual Network (OVN) ACLs to manage egress firewalls and network policies. Audit logging exposes allow and deny ACL events. You can configure the destination for audit logs, such as a syslog server or a UNIX domain socket. Regardless of any additional configuration, an audit log is always saved to /var/log/ovn/acl-audit-log.log on each OVN-Kubernetes pod in the cluster. You can enable audit logging for each namespace by annotating each namespace configuration with a k8s.ovn.org/acl-logging section. In the k8s.ovn.org/acl-logging section, you must specify allow , deny , or both values to enable audit logging for a namespace. Note A network policy does not support setting the Pass action set as a rule. The ACL-logging implementation logs access control list (ACL) events for a network. You can view these logs to analyze any potential security issues. Example namespace annotation kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { "deny": "info", "allow": "info" } To view the default ACL logging configuration values, see the policyAuditConfig object in the cluster-network-03-config.yml file. If required, you can change the ACL logging configuration values for log file parameters in this file. The logging message format is compatible with syslog as defined by RFC5424. The syslog facility is configurable and defaults to local0 . The following example shows key parameters and their values outputted in a log message: Example logging message that outputs parameters and their values <timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name="<acl_name>", verdict="<verdict>", severity="<severity>", direction="<direction>": <flow> Where: <timestamp> states the time and date for the creation of a log message. <message_serial> lists the serial number for a log message. acl_log(ovn_pinctrl0) is a literal string that prints the location of the log message in the OVN-Kubernetes plugin. <severity> sets the severity level for a log message. If you enable audit logging that supports allow and deny tasks then two severity levels show in the log message output. <name> states the name of the ACL-logging implementation in the OVN Network Bridging Database ( nbdb ) that was created by the network policy. <verdict> can be either allow or drop . <direction> can be either to-lport or from-lport to indicate that the policy was applied to traffic going to or away from a pod. <flow> shows packet information in a format equivalent to the OpenFlow protocol. This parameter comprises Open vSwitch (OVS) fields. 
The following example shows OVS fields that the flow parameter uses to extract packet information from system memory: Example of OVS fields used by the flow parameter to extract packet information <proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags> Where: <proto> states the protocol. Valid values are tcp and udp . vlan_tci=0x0000 states the VLAN header as 0 because a VLAN ID is not set for internal pod network traffic. <src_mac> specifies the source Media Access Control (MAC) address. <source_mac> specifies the destination MAC address. <source_ip> lists the source IP address. <target_ip> lists the target IP address. <tos_dscp> states Differentiated Services Code Point (DSCP) values to classify and prioritize certain network traffic over other traffic. <tos_ecn> states Explicit Congestion Notification (ECN) values that indicate any congested traffic in your network. <ip_ttl> states the Time To Live (TTL) information for a packet. <fragment> specifies what type of IP fragments or IP non-fragments to match. <tcp_src_port> shows the source port for the TCP and UDP protocols. <tcp_dst_port> lists the destination port for the TCP and UDP protocols. <tcp_flags> supports numerous flags such as SYN , ACK , PSH and so on. If you need to set multiple values, separate each value with a vertical bar ( | ). The UDP protocol does not support this parameter. Note For more information about the field descriptions, go to the OVS manual page for ovs-fields . Example ACL deny log entry for a network policy 2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn The following table describes namespace annotation values: Table 24.10. Audit logging namespace annotation for k8s.ovn.org/acl-logging Field Description deny Blocks namespace access to any traffic that matches an ACL rule with the deny action. The field supports alert , warning , notice , info , or debug values. allow Permits namespace access to any traffic that matches an ACL rule with the allow action. The field supports alert , warning , notice , info , or debug values. pass A pass action applies to an admin network policy's ACL rule. A pass action allows either the network policy in the namespace or the baseline admin network policy rule to evaluate all incoming and outgoing traffic. A network policy does not support a pass action. Additional resources AdminNetworkPolicy 24.11.2.
Audit configuration The configuration for audit logging is specified as part of the OVN-Kubernetes cluster network provider configuration. The following YAML illustrates the default values for the audit logging: Audit logging configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 The following table describes the configuration fields for audit logging. Table 24.11. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . 24.11.3. Configuring egress firewall and network policy auditing for a cluster As a cluster administrator, you can customize audit logging for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To customize the audit logging configuration, enter the following command: USD oc edit network.operator.openshift.io/cluster Tip You can alternatively customize and apply the following YAML to configure audit logging: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 Verification To create a namespace with network policies complete the following steps: Create a namespace for verification: USD cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert" }' EOF Example output namespace/verify-audit-logging created Create network policies for the namespace: USD cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace namespace: verify-audit-logging spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: verify-audit-logging EOF Example output networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created Create a pod for source traffic in the default namespace: USD cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF Create two pods in the verify-audit-logging namespace: USD for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 
kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF done Example output pod/client created pod/server created To generate traffic and produce network policy audit log entries, complete the following steps: Obtain the IP address for the pod named server in the verify-audit-logging namespace: USD POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}') Ping the IP address from the previous command from the pod named client in the default namespace and confirm that all packets are dropped: USD oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms Ping the IP address saved in the POD_IP shell environment variable from the pod named client in the verify-audit-logging namespace and confirm that all packets are allowed: USD oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms Display the latest entries in the network policy audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output 2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0
2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 24.11.4. Enabling egress firewall and network policy audit logging for a namespace As a cluster administrator, you can enable audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To enable audit logging for a namespace, enter the following command: USD oc annotate namespace <namespace> \ k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "notice" }' where: <namespace> Specifies the name of the namespace. Tip You can alternatively apply the following YAML to enable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { "deny": "alert", "allow": "notice" } Example output namespace/verify-audit-logging annotated Verification Display the latest entries in the audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 24.11.5. Disabling egress firewall and network policy audit logging for a namespace As a cluster administrator, you can disable audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. 
Procedure To disable audit logging for a namespace, enter the following command: USD oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging- where: <namespace> Specifies the name of the namespace. Tip You can alternatively apply the following YAML to disable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null Example output namespace/verify-audit-logging annotated 24.11.6. Additional resources About network policy Configuring an egress firewall for a project 24.12. Configuring IPsec encryption With IPsec enabled, you can encrypt both internal pod-to-pod cluster traffic between nodes and external traffic between pods and IPsec endpoints external to your cluster. All pod-to-pod network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec Transport mode . IPsec is disabled by default. It can be enabled either during or after installing the cluster. For information about cluster installation, see OpenShift Container Platform installation overview . If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP IP header. Note IPsec on IBM Cloud(R) supports only NAT-T. Using ESP is not supported. The following support limitations exist for IPsec on a OpenShift Container Platform cluster: You must disable IPsec before updating to OpenShift Container Platform 4.15. After disabling IPsec, you must also delete the associated IPsec daemonsets. There is a known issue that can cause interruptions in pod-to-pod communication if you update without disabling IPsec. ( OCPBUGS-43323 ) Use the procedures in the following documentation to: Enable and disable IPSec after cluster installation Configure support for external IPsec endpoints outside the cluster Verify that IPsec encrypts traffic between pods on different nodes 24.12.1. Prerequisites You have decreased the size of the cluster MTU by 46 bytes to allow for the additional overhead of the IPsec ESP header. For more information on resizing the MTU that your cluster uses, see Changing the MTU for the cluster network . 24.12.2. Network connectivity requirements when IPsec is enabled You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. Table 24.12. Ports used for all-machine to all-machine communications Protocol Port Description UDP 500 IPsec IKE packets 4500 IPsec NAT-T packets ESP N/A IPsec Encapsulating Security Payload (ESP) 24.12.3. IPsec encryption for pod-to-pod traffic OpenShift Container Platform supports IPsec encryption for network traffic between pods. 24.12.3.1. Types of network traffic flows encrypted by pod-to-pod IPsec With IPsec enabled, only the following network traffic flows between pods are encrypted: Traffic between pods on different nodes on the cluster network Traffic from a pod on the host network to a pod on the cluster network The following traffic flows are not encrypted: Traffic between pods on the same node on the cluster network Traffic between pods on the host network Traffic from a pod on the cluster network to a pod on the host network The encrypted and unencrypted flows are illustrated in the following diagram: 24.12.3.2. Encryption protocol and IPsec mode The encrypt cipher used is AES-GCM-16-256 . The integrity check value (ICV) is 16 bytes. 
The key length is 256 bits. The IPsec mode used is Transport mode , a mode that encrypts end-to-end communication by adding an Encapsulated Security Payload (ESP) header to the IP header of the original packet and encrypting the packet data. OpenShift Container Platform does not currently use or support IPsec Tunnel mode for pod-to-pod communication. 24.12.3.3. Security certificate generation and rotation The Cluster Network Operator (CNO) generates a self-signed X.509 certificate authority (CA) that is used by IPsec for encryption. Certificate signing requests (CSRs) from each node are automatically fulfilled by the CNO. The CA is valid for 10 years. The individual node certificates are valid for 5 years and are automatically rotated after 4 1/2 years elapse. 24.12.3.4. Enabling pod-to-pod IPsec encryption As a cluster administrator, you can enable pod-to-pod IPsec encryption after cluster installation. Prerequisites Install the OpenShift CLI ( oc ). You are logged in to the cluster as a user with cluster-admin privileges. You have reduced the size of your cluster's maximum transmission unit (MTU) by 46 bytes to allow for the overhead of the IPsec ESP header. Procedure To enable IPsec encryption, enter the following command: USD oc patch networks.operator.openshift.io cluster --type=merge \ -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{ }}}}}' Verification To find the names of the OVN-Kubernetes data plane pods, enter the following command: USD oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node Example output ovnkube-node-5xqbf 8/8 Running 0 28m ovnkube-node-6mwcx 8/8 Running 0 29m ovnkube-node-ck5fr 8/8 Running 0 31m ovnkube-node-fr4ld 8/8 Running 0 26m ovnkube-node-wgs4l 8/8 Running 0 33m ovnkube-node-zfvcl 8/8 Running 0 34m Verify that IPsec is enabled on your cluster by entering the following command. The command output must state true to indicate that the node has IPsec enabled. USD oc -n openshift-ovn-kubernetes rsh ovnkube-node-<pod_number_sequence> ovn-nbctl --no-leader-only get nb_global . ipsec 1 1 Replace <pod_number_sequence> with the random sequence of letters, such as 5xqbf , for a data plane pod from the previous step. 24.12.3.5. Disabling IPsec encryption As a cluster administrator, you can disable IPsec encryption only if you enabled IPsec after cluster installation. Important To avoid issues with your installed cluster, ensure that after you disable IPsec, you also delete the associated IPsec daemonset pods. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To disable IPsec encryption, enter the following command: USD oc patch networks.operator.openshift.io/cluster --type=json \ -p='[{"op":"remove", "path":"/spec/defaultNetwork/ovnKubernetesConfig/ipsecConfig"}]' To find the names of the OVN-Kubernetes data plane pods that exist on a node in your cluster, enter the following command: USD oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node Example output ovnkube-node-5xqbf 8/8 Running 0 28m ovnkube-node-6mwcx 8/8 Running 0 29m ovnkube-node-ck5fr 8/8 Running 0 31m ... To check if a node in your cluster has IPsec disabled, enter the following command. Ensure that you enter this command for each node that exists in your cluster. The command output must state false to indicate that the node has IPsec disabled. USD oc -n openshift-ovn-kubernetes rsh ovnkube-node-<pod_number_sequence> ovn-nbctl --no-leader-only get nb_global .
ipsec 1 1 Replace <pod_number_sequence> with the random sequence of letters, such as 5xqbf , for a data plane pod from the previous step. To remove the IPsec ovn-ipsec-host daemonset pod from the openshift-ovn-kubernetes namespace on a node, enter the following command: USD oc delete daemonset ovn-ipsec-host -n openshift-ovn-kubernetes 1 1 The ovn-ipsec-host daemonset pod configures IPsec connections for east-west traffic on a node. To remove the IPsec ovn-ipsec-containerized daemonset pod from the openshift-ovn-kubernetes namespace on a node, enter the following command: USD oc delete daemonset ovn-ipsec-containerized -n openshift-ovn-kubernetes 1 1 The ovn-ipsec-containerized daemonset pod configures IPsec connections for east-west traffic on a node. Verify that the ovn-ipsec-host and ovn-ipsec-containerized daemonset pods were removed from all the nodes in your cluster by entering the following command. If the command output does not list the pods, the removal operation is successful. USD oc get pods -n openshift-ovn-kubernetes -l=app=ovn-ipsec Note You might need to re-run the oc delete command for a pod because sometimes the initial command attempt might not delete the pod. Optional: You can increase the size of your cluster MTU by 46 bytes because there is no longer any overhead from the IPsec ESP header in IP packets. 24.12.4. IPsec encryption for external traffic OpenShift Container Platform supports IPsec encryption for traffic to external hosts. You must supply a custom IPsec configuration, which includes the IPsec configuration file itself and TLS certificates. Ensure that the following prohibitions are observed: The custom IPsec configuration must not include any connection specifications that might interfere with the cluster's pod-to-pod IPsec configuration. Certificate common names (CN) in the provided certificate bundle must not begin with the ovs_ prefix, because this naming can collide with pod-to-pod IPsec CN names in the Network Security Services (NSS) database of each node. Important IPsec support for external endpoints is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 24.12.4.1. Enabling IPsec encryption for external IPsec endpoints As a cluster administrator, you can enable IPsec encryption between the cluster and external IPsec endpoints. Because this procedure uses Butane to create machine configs, you must have the butane command installed. Note After you apply the machine config, the Machine Config Operator reboots affected nodes in your cluster to roll out the new machine config. Prerequisites Install the OpenShift CLI ( oc ). You are logged in to the cluster as a user with cluster-admin privileges. You have reduced the size of your cluster MTU by 46 bytes to allow for the overhead of the IPsec ESP header. You have installed the butane utility. You have an existing PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format. Procedure Create an IPsec configuration file named ipsec-endpoint-config.conf .
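The contents of this file depend entirely on your environment. The following is a minimal, illustrative sketch of a single transport-mode connection; the connection name and remote address are placeholders, and the left_server certificate nickname matches the PKCS#12 bundle that is imported into the NSS database later in this procedure:

# Illustrative only: a minimal Libreswan connection definition for one
# external IPsec endpoint. All names and addresses here are placeholders.
cat > ipsec-endpoint-config.conf <<'EOF'
conn external-host
    type=transport
    authby=rsasig
    left=%defaultroute
    leftcert=left_server
    leftid=%fromcert
    right=<external_host_ip>
    rightid=%fromcert
    auto=start
EOF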
The configuration is consumed in the next step. For more information, see Libreswan as an IPsec VPN implementation . Provide the following certificate files to add to the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in subsequent steps. left_server.p12 : The certificate bundle for the IPsec endpoints ca.pem : The certificate authority that you signed your certificates with Create a machine config to apply the IPsec configuration to your cluster by using the following two steps: To add the IPsec configuration, create Butane config files for the control plane and worker nodes with the following contents: USD for role in master worker; do cat >> "99-ipsec-USD{role}-endpoint-config.bu" <<-EOF variant: openshift version: 4.14.0 metadata: name: 99-USD{role}-import-certs-enable-svc-os-ext labels: machineconfiguration.openshift.io/role: USDrole openshift: extensions: - ipsec systemd: units: - name: ipsec-import.service enabled: true contents: | [Unit] Description=Import external certs into ipsec NSS Before=ipsec.service [Service] Type=oneshot ExecStart=/usr/local/bin/ipsec-addcert.sh RemainAfterExit=false StandardOutput=journal [Install] WantedBy=multi-user.target - name: ipsecenabler.service enabled: true contents: | [Service] Type=oneshot ExecStart=systemctl enable --now ipsec.service [Install] WantedBy=multi-user.target storage: files: - path: /etc/ipsec.d/ipsec-endpoint-config.conf mode: 0400 overwrite: true contents: local: ipsec-endpoint-config.conf - path: /etc/pki/certs/ca.pem mode: 0400 overwrite: true contents: local: ca.pem - path: /etc/pki/certs/left_server.p12 mode: 0400 overwrite: true contents: local: left_server.p12 - path: /usr/local/bin/ipsec-addcert.sh mode: 0740 overwrite: true contents: inline: | #!/bin/bash -e echo "importing cert to NSS" certutil -A -n "CA" -t "CT,C,C" -d /var/lib/ipsec/nss/ -i /etc/pki/certs/ca.pem pk12util -W "" -i /etc/pki/certs/left_server.p12 -d /var/lib/ipsec/nss/ certutil -M -n "left_server" -t "u,u,u" -d /var/lib/ipsec/nss/ EOF done To transform the Butane files that you created in the previous step into machine configs, enter the following command: USD for role in master worker; do butane -d . 99-ipsec-USD{role}-endpoint-config.bu -o ./99-ipsec-USDrole-endpoint-config.yaml done To apply the machine configs to your cluster, enter the following command: USD for role in master worker; do oc apply -f 99-ipsec-USD{role}-endpoint-config.yaml done Important As the Machine Config Operator (MCO) updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated before external IPsec connectivity is available. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the update takes to increase with the size of the cluster. 24.12.5. Additional resources About the OVN-Kubernetes Container Network Interface (CNI) network plugin Changing the MTU for the cluster network Installing Butane Network [operator.openshift.io/v1 ] API 24.13. Configure an external gateway on the default network As a cluster administrator, you can configure an external gateway on the default network.
This feature offers the following benefits: Granular control over egress traffic on a per-namespace basis Flexible configuration of static and dynamic external gateway IP addresses Support for both IPv4 and IPv6 address families 24.13.1. Prerequisites Your cluster uses the OVN-Kubernetes network plugin. Your infrastructure is configured to route traffic from the secondary external gateway. 24.13.2. How OpenShift Container Platform determines the external gateway IP address You configure a secondary external gateway with the AdminPolicyBasedExternalRoute custom resource (CR) from the k8s.ovn.org API group. The CR supports static and dynamic approaches to specifying an external gateway's IP address. Each namespace that an AdminPolicyBasedExternalRoute CR targets cannot be selected by any other AdminPolicyBasedExternalRoute CR. A namespace cannot have concurrent secondary external gateways. Changes to policies are isolated in the controller. If a policy fails to apply, changes to other policies do not trigger a retry of other policies. A policy is re-evaluated, and any differences introduced by the change are applied, only when the policy itself is updated or when objects related to the policy, such as target namespaces, pod gateways, or the namespaces that host them from dynamic hops, are updated. Static assignment You specify an IP address directly. Dynamic assignment You specify an IP address indirectly, with namespace and pod selectors, and an optional network attachment definition. If the name of a network attachment definition is provided, the external gateway IP address of the network attachment is used. If the name of a network attachment definition is not provided, the external gateway IP address for the pod itself is used. However, this approach works only if the pod is configured with hostNetwork set to true . 24.13.3. AdminPolicyBasedExternalRoute object configuration You can define an AdminPolicyBasedExternalRoute object, which is cluster scoped, with the following properties. A namespace can be selected by only one AdminPolicyBasedExternalRoute CR at a time. Table 24.13. AdminPolicyBasedExternalRoute object Field Type Description metadata.name string Specifies the name of the AdminPolicyBasedExternalRoute object. spec.from string Specifies a namespace selector that the routing policies apply to. Only namespaceSelector is supported for external traffic. For example: from: namespaceSelector: matchLabels: kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059 A namespace can only be targeted by one AdminPolicyBasedExternalRoute CR. If a namespace is selected by more than one AdminPolicyBasedExternalRoute CR, a failed error status occurs on the second and subsequent CRs that target the same namespace. To apply updates, you must change the policy itself, or objects related to the policy, such as target namespaces, pod gateways, or the namespaces that host them from dynamic hops, so that the policy is re-evaluated and your changes are applied. spec.nextHops object Specifies the destinations where the packets are forwarded to. Must be either or both of static and dynamic . You must have at least one hop defined. Table 24.14. nextHops object Field Type Description static array Specifies an array of static IP addresses. dynamic array Specifies an array of pod selectors corresponding to pods configured with a network attachment definition to use as the external gateway target. Table 24.15.
nextHops.static object Field Type Description ip string Specifies either an IPv4 or IPv6 address of the destination hop. bfdEnabled boolean Optional: Specifies whether Bi-Directional Forwarding Detection (BFD) is supported by the network. The default value is false . Table 24.16. nextHops.dynamic object Field Type Description podSelector string Specifies a set-based label selector ( https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#set-based-requirement ) to filter the pods in the namespace that match this network configuration. namespaceSelector string Specifies a set-based selector to filter the namespaces that the podSelector applies to. You must specify a value for this field. bfdEnabled boolean Optional: Specifies whether Bi-Directional Forwarding Detection (BFD) is supported by the network. The default value is false . networkAttachmentName string Optional: Specifies the name of a network attachment definition. The name must match the list of logical networks associated with the pod. If this field is not specified, the host network of the pod is used. However, the pod must be configured as a host network pod to use the host network. 24.13.3.1. Example secondary external gateway configurations In the following example, the AdminPolicyBasedExternalRoute object configures two static IP addresses as external gateways for pods in namespaces with the kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059 label. apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: default-route-policy spec: from: namespaceSelector: matchLabels: kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059 nextHops: static: - ip: "172.18.0.8" - ip: "172.18.0.9" In the following example, the AdminPolicyBasedExternalRoute object configures a dynamic external gateway. The IP addresses used for the external gateway are derived from the additional network attachments associated with each of the selected pods. apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: shadow-traffic-policy spec: from: namespaceSelector: matchLabels: externalTraffic: "" nextHops: dynamic: - podSelector: matchLabels: gatewayPod: "" namespaceSelector: matchLabels: shadowTraffic: "" networkAttachmentName: shadow-gateway - podSelector: matchLabels: gigabyteGW: "" namespaceSelector: matchLabels: gatewayNamespace: "" networkAttachmentName: gateway In the following example, the AdminPolicyBasedExternalRoute object configures both static and dynamic external gateways. apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: multi-hop-policy spec: from: namespaceSelector: matchLabels: trafficType: "egress" nextHops: static: - ip: "172.18.0.8" - ip: "172.18.0.9" dynamic: - podSelector: matchLabels: gatewayPod: "" namespaceSelector: matchLabels: egressTraffic: "" networkAttachmentName: gigabyte 24.13.4. Configure a secondary external gateway You can configure an external gateway on the default network for a namespace in your cluster. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster as a user with cluster-admin privileges. Procedure Create a YAML file that contains an AdminPolicyBasedExternalRoute object. To create an admin policy based external route, enter the following command: USD oc create -f <file>.yaml where: <file> Specifies the name of the YAML file that you created in the previous step.
Example output adminpolicybasedexternalroute.k8s.ovn.org/default-route-policy created To confirm that the admin policy based external route was created, enter the following command: USD oc describe apbexternalroute <name> | tail -n 6 where: <name> Specifies the name of the AdminPolicyBasedExternalRoute object. Example output Status: Last Transition Time: 2023-04-24T15:09:01Z Messages: Configured external gateway IPs: 172.18.0.8 Status: Success Events: <none> 24.13.5. Additional resources For more information about additional network attachments, see Understanding multiple networks 24.14. Configuring an egress firewall for a project As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster. 24.14.1. How an egress firewall works in a project As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: A pod can only connect to internal hosts and cannot initiate connections to the public internet. A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster. A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster. A pod can connect to only specific external hosts. For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources. Note Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules. You configure an egress firewall policy by creating an EgressFirewall custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria: An IP address range in CIDR format A DNS name that resolves to an IP address A port number A protocol that is one of the following protocols: TCP, UDP, and SCTP Important If your egress firewall includes a deny rule for 0.0.0.0/0 , access to your OpenShift Container Platform API servers is blocked. You must either add allow rules for each IP address or use the nodeSelector type allow rule in your egress policy rules to connect to API servers. The following example illustrates the order of the egress firewall rules necessary to ensure API server access: apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow # ... - to: cidrSelector: 0.0.0.0/0 3 type: Deny 1 The namespace for the egress firewall. 2 The IP address range that includes your OpenShift Container Platform API servers. 3 A global deny rule prevents access to the OpenShift Container Platform API servers. To find the IP address for your API servers, run oc get ep kubernetes -n default . For more information, see BZ#1988324 . Warning Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination. 24.14.1.1. Limitations of an egress firewall An egress firewall has the following limitations: No project can have more than one EgressFirewall object. 
A maximum of one EgressFirewall object with a maximum of 8,000 rules can be defined per project. If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped. Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization. An Egress Firewall resource can be created in the kube-node-lease , kube-public , kube-system , openshift and openshift- projects. 24.14.1.2. Matching order for egress firewall policy rules The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. 24.14.1.3. How Domain Name Server (DNS) resolution works If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires. The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently. Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressFirewall objects is only recommended for domains with infrequent IP address changes. Note Using DNS names in your egress firewall policy does not affect local DNS resolution through CoreDNS. However, if your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server. 24.14.2. EgressFirewall custom resource (CR) object You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to. The following YAML describes an EgressFirewall CR object: EgressFirewall object apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <name> 1 spec: egress: 2 ... 1 The name for the object must be default . 2 A collection of one or more egress network policy rules as described in the following section. 24.14.2.1. EgressFirewall rules The following YAML describes an egress firewall rule object. The user can select either an IP address range in CIDR format, a domain name, or use the nodeSelector to allow or deny egress traffic. The egress stanza expects an array of one or more objects. Egress policy rule stanza egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 nodeSelector: <label_name>: <label_value> 5 ports: 6 ... 
1 The type of rule. The value must be either Allow or Deny . 2 A stanza describing an egress traffic match rule that specifies the cidrSelector field or the dnsName field. You cannot use both fields in the same rule. 3 An IP address range in CIDR format. 4 A DNS domain name. 5 Labels are key/value pairs that the user defines. Labels are attached to objects, such as pods. The nodeSelector allows for one or more node labels to be selected and attached to pods. 6 Optional: A stanza describing a collection of network ports and protocols for the rule. Ports stanza ports: - port: <port> 1 protocol: <protocol> 2 1 A network port, such as 80 or 443 . If you specify a value for this field, you must also specify a value for protocol . 2 A network protocol. The value must be either TCP , UDP , or SCTP . 24.14.2.2. Example EgressFirewall CR objects The following example defines several egress firewall policy rules: apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Deny to: cidrSelector: 0.0.0.0/0 1 A collection of egress firewall policy rule objects. The following example defines a policy rule that denies traffic to the host at the 172.16.1.1/32 IP address, if the traffic is using either the TCP protocol and destination port 80 or any protocol and destination port 443 . apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - type: Deny to: cidrSelector: 172.16.1.1/32 ports: - port: 80 protocol: TCP - port: 443 24.14.2.3. Example nodeSelector for EgressFirewall As a cluster administrator, you can allow or deny egress traffic to nodes in your cluster by specifying a label using nodeSelector . Labels can be applied to one or more nodes. The following is an example with the region=east label: apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - to: nodeSelector: matchLabels: region: east type: Allow Tip Instead of adding manual rules per node IP address, use node selectors to create a label that allows pods behind an egress firewall to access host network pods. 24.14.3. Creating an egress firewall policy object As a cluster administrator, you can create an egress firewall policy object for a project. Important If the project already has an EgressFirewall object defined, you must edit the existing policy to make changes to the egress firewall rules. Prerequisites A cluster that uses the OVN-Kubernetes network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Create a policy rule: Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules. In the file you created, define an egress policy object. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to. USD oc create -f <policy_name>.yaml -n <project> In the following example, a new EgressFirewall object is created in a project named project1 : USD oc create -f default.yaml -n project1 Example output egressfirewall.k8s.ovn.org/v1 created Optional: Save the <policy_name>.yaml file so that you can make changes later. 24.15. Viewing an egress firewall for a project As a cluster administrator, you can list the names of any existing egress firewalls and view the traffic rules for a specific egress firewall. 24.15.1. Viewing an EgressFirewall object You can view an EgressFirewall object in your cluster. 
Prerequisites A cluster using the OVN-Kubernetes network plugin. Install the OpenShift Command-line Interface (CLI), commonly known as oc . You must log in to the cluster. Procedure Optional: To view the names of the EgressFirewall objects defined in your cluster, enter the following command: USD oc get egressfirewall --all-namespaces To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect. USD oc describe egressfirewall <policy_name> Example output Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0 24.16. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 24.16.1. Editing an EgressFirewall object As a cluster administrator, you can update the egress firewall for a project. Prerequisites A cluster using the OVN-Kubernetes network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressFirewall object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressfirewall Optional: If you did not save a copy of the EgressFirewall object when you created the egress network firewall, enter the following command to create a copy. USD oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to. After making changes to the policy rules, enter the following command to replace the EgressFirewall object. Replace <filename> with the name of the file containing the updated EgressFirewall object. USD oc replace -f <filename>.yaml 24.17. Removing an egress firewall from a project As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster. 24.17.1. Removing an EgressFirewall object As a cluster administrator, you can remove an egress firewall from a project. Prerequisites A cluster using the OVN-Kubernetes network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressFirewall object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressfirewall Enter the following command to delete the EgressFirewall object. Replace <project> with the name of the project and <name> with the name of the object. USD oc delete -n <project> egressfirewall <name> 24.18. Configuring an egress IP address As a cluster administrator, you can configure the OVN-Kubernetes Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a namespace, or to specific pods in a namespace. Important In an installer-provisioned infrastructure cluster, do not assign egress IP addresses to the infrastructure node that already hosts the ingress VIP. For more information, see the Red Hat Knowledgebase solution POD from the egress IP enabled namespace cannot access OCP route in an IPI cluster when the egress IP is assigned to the infra node that already hosts the ingress VIP . 24.18.1. 
Egress IP address architectural design and implementation The OpenShift Container Platform egress IP address functionality allows you to ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network. For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server. An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations. In some cluster configurations, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project. Important Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0 . 24.18.1.1. Platform support Support for the egress IP address functionality on various platforms is summarized in the following table: Platform Supported Bare metal Yes VMware vSphere Yes Red Hat OpenStack Platform (RHOSP) Yes Amazon Web Services (AWS) Yes Google Cloud Platform (GCP) Yes Microsoft Azure Yes IBM Z(R) and IBM(R) LinuxONE Yes IBM Z(R) and IBM(R) LinuxONE for Red Hat Enterprise Linux (RHEL) KVM Yes IBM Power(R) Yes Nutanix Yes Important The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). ( BZ#2039656 ). 24.18.1.2. Public cloud platform considerations For clusters provisioned on public cloud infrastructure, there is a constraint on the absolute number of assignable IP addresses per node. The maximum number of assignable IP addresses per node, or the IP capacity , can be described in the following formula: IP capacity = public cloud default capacity - sum(current IP assignments) While the Egress IPs capability manages the IP address capacity per node, it is important to plan for this constraint in your deployments. For example, for a cluster installed on bare-metal infrastructure with 8 nodes you can configure 150 egress IP addresses. However, if a public cloud provider limits IP address capacity to 10 IP addresses per node, the total number of assignable IP addresses is only 80. To achieve the same IP address capacity in this example cloud provider, you would need to allocate 7 additional nodes. To confirm the IP capacity and subnets for any node in your public cloud environment, you can enter the oc get node <node_name> -o yaml command. The cloud.network.openshift.io/egress-ipconfig annotation includes capacity and subnet information for the node. The annotation value is an array with a single object with fields that provide the following information for the primary network interface: interface : Specifies the interface ID on AWS and Azure and the interface name on GCP. ifaddr : Specifies the subnet mask for one or both IP address families. capacity : Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. 
On Azure and GCP, the IP address capacity includes both IPv4 and IPv6 addresses. Automatic attachment and detachment of egress IP addresses for traffic between nodes are available. This allows for traffic from many pods in namespaces to have a consistent source IP address to locations outside of the cluster. This also supports OpenShift SDN and OVN-Kubernetes, which is the default networking plugin in Red Hat OpenShift Networking in OpenShift Container Platform 4.14. Note The RHOSP egress IP address feature creates a Neutron reservation port called egressip-<IP address> . Using the same RHOSP user as the one used for the OpenShift Container Platform cluster installation, you can assign a floating IP address to this reservation port to have a predictable SNAT address for egress traffic. When an egress IP address on an RHOSP network is moved from one node to another, because of a node failover, for example, the Neutron reservation port is removed and recreated. This means that the floating IP association is lost and you need to manually reassign the floating IP address to the new reservation port. Note When an RHOSP cluster administrator assigns a floating IP to the reservation port, OpenShift Container Platform cannot delete the reservation port. The CloudPrivateIPConfig object cannot perform delete and move operations until an RHOSP cluster administrator unassigns the floating IP from the reservation port. The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability. Example cloud.network.openshift.io/egress-ipconfig annotation on AWS cloud.network.openshift.io/egress-ipconfig: [ { "interface":"eni-078d267045138e436", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ipv4":14,"ipv6":15} } ] Example cloud.network.openshift.io/egress-ipconfig annotation on GCP cloud.network.openshift.io/egress-ipconfig: [ { "interface":"nic0", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ip":14} } ] The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation. 24.18.1.2.1. Amazon Web Services (AWS) IP address capacity limits On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type 24.18.1.2.2. Google Cloud Platform (GCP) IP address capacity limits On GCP, the networking model implements additional node IP addresses through IP address aliasing, rather than IP address assignments. However, IP address capacity maps directly to IP aliasing capacity. The following capacity limits exist for IP aliasing assignment: Per node, the maximum number of IP aliases, both IPv4 and IPv6, is 100. Per VPC, the maximum number of IP aliases is unspecified, but OpenShift Container Platform scalability testing reveals the maximum to be approximately 15,000. For more information, see Per instance quotas and Alias IP ranges overview . 24.18.1.2.3. Microsoft Azure IP address capacity limits On Azure, the following capacity limits exist for IP address assignment: Per NIC, the maximum number of assignable IP addresses, for both IPv4 and IPv6, is 256. Per virtual network, the maximum number of assigned IP addresses cannot exceed 65,536. For more information, see Networking limits . 24.18.1.3. Considerations for using an egress IP on additional network interfaces In OpenShift Container Platform, egress IPs provide administrators a way to control network traffic. 
Egress IPs can be used with the br-ex , or primary, network interface, which is a Linux bridge interface associated with Open vSwitch, or they can be used with additional network interfaces. You can inspect your network interface type by running the following command: USD ip -details link show The primary network interface is assigned a node IP address which also contains a subnet mask. Information for this node IP address can be retrieved from the Kubernetes node object for each node within your cluster by inspecting the k8s.ovn.org/node-primary-ifaddr annotation. In an IPv4 cluster, this annotation is similar to the following example: "k8s.ovn.org/node-primary-ifaddr: {"ipv4":"192.168.111.23/24"}" . If the egress IP is not within the subnet of the primary network interface subnet, you can use an egress IP on another Linux network interface that is not of the primary network interface type. By doing so, OpenShift Container Platform administrators are provided with a greater level of control over networking aspects such as routing, addressing, segmentation, and security policies. This feature provides users with the option to route workload traffic over specific network interfaces for purposes such as traffic segmentation or meeting specialized requirements. If the egress IP is not within the subnet of the primary network interface, then the selection of another network interface for egress traffic might occur if they are present on a node. You can determine which other network interfaces might support egress IPs by inspecting the k8s.ovn.org/host-cidrs Kubernetes node annotation. This annotation contains the addresses and subnet mask found for the primary network interface. It also contains additional network interface addresses and subnet mask information. These addresses and subnet masks are assigned to network interfaces that use the longest prefix match routing mechanism to determine which network interface supports the egress IP. Note OVN-Kubernetes provides a mechanism to control and direct outbound network traffic from specific namespaces and pods. This ensures that it exits the cluster through a particular network interface and with a specific egress IP address. Requirements for assigning an egress IP to a network interface that is not the primary network interface For users who want an egress IP and traffic to be routed over a particular interface that is not the primary network interface, the following conditions must be met: OpenShift Container Platform is installed on a bare metal cluster. This feature is disabled within cloud or hypervisor environments. Your OpenShift Container Platform pods are not configured as host-networked. If a network interface is removed or if the IP address and subnet mask which allows the egress IP to be hosted on the interface is removed, then the egress IP is reconfigured. Consequently, it could be assigned to another node and interface. IP forwarding must be enabled for the network interface. To enable IP forwarding, you can use the oc edit network.operator command and edit the object like the following example: # ... spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: gatewayConfig: ipForwarding: Global # ... 24.18.1.4. Assignment of egress IPs to pods To assign one or more egress IPs to a namespace or specific pods in a namespace, the following conditions must be satisfied: At least one node in your cluster must have the k8s.ovn.org/egress-assignable: "" label. 
An EgressIP object exists that defines one or more egress IP addresses to use as the source IP address for traffic leaving the cluster from pods in a namespace. Important If you create EgressIP objects prior to labeling any nodes in your cluster for egress IP assignment, OpenShift Container Platform might assign every egress IP address to the first node with the k8s.ovn.org/egress-assignable: "" label. To ensure that egress IP addresses are widely distributed across nodes in the cluster, always apply the label to the nodes that you intend to host the egress IP addresses before creating any EgressIP objects. 24.18.1.5. Assignment of egress IPs to nodes When creating an EgressIP object, the following conditions apply to nodes that are labeled with the k8s.ovn.org/egress-assignable: "" label: An egress IP address is never assigned to more than one node at a time. An egress IP address is equally balanced between available nodes that can host the egress IP address. If the spec.EgressIPs array in an EgressIP object specifies more than one IP address, the following conditions apply: No node will ever host more than one of the specified IP addresses. Traffic is balanced roughly equally between the specified IP addresses for a given namespace. If a node becomes unavailable, any egress IP addresses assigned to it are automatically reassigned, subject to the previously described conditions. When a pod matches the selector for multiple EgressIP objects, there is no guarantee which of the egress IP addresses that are specified in the EgressIP objects is assigned as the egress IP address for the pod. Additionally, if an EgressIP object specifies multiple egress IP addresses, there is no guarantee which of the egress IP addresses might be used. For example, if a pod matches a selector for an EgressIP object with two egress IP addresses, 10.10.20.1 and 10.10.20.2 , either might be used for each TCP connection or UDP conversation. 24.18.1.6. Architectural diagram of an egress IP address configuration The following diagram depicts an egress IP address configuration. The diagram describes four pods in two different namespaces running on three nodes in a cluster. The nodes are assigned IP addresses from the 192.168.126.0/18 CIDR block on the host network. Both Node 1 and Node 3 are labeled with k8s.ovn.org/egress-assignable: "" and thus available for the assignment of egress IP addresses. The dashed lines in the diagram depict the traffic flow from pod1, pod2, and pod3 traveling through the pod network to egress the cluster from Node 1 and Node 3. When an external service receives traffic from any of the pods selected by the example EgressIP object, the source IP address is either 192.168.126.10 or 192.168.126.102 . The traffic is balanced roughly equally between these two nodes. The following resources from the diagram are illustrated in detail: Namespace objects The namespaces are defined in the following manifest: Namespace objects apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod EgressIP object The following EgressIP object describes a configuration that selects all pods in any namespace with the env label set to prod . The egress IP addresses for the selected pods are 192.168.126.10 and 192.168.126.102 .
EgressIP object apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: items: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102 For the configuration in the example, OpenShift Container Platform assigns both egress IP addresses to the available nodes. The status field reflects whether and where the egress IP addresses are assigned. 24.18.2. EgressIP object The following YAML describes the API for the EgressIP object. The scope of the object is cluster-wide; it is not created in a namespace. apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> 1 spec: egressIPs: 2 - <ip_address> namespaceSelector: 3 ... podSelector: 4 ... 1 The name for the EgressIPs object. 2 An array of one or more IP addresses. 3 One or more selectors for the namespaces to associate the egress IP addresses with. 4 Optional: One or more selectors for pods in the specified namespaces to associate egress IP addresses with. Applying these selectors allows for the selection of a subset of pods within a namespace. The following YAML describes the stanza for the namespace selector: Namespace selector stanza namespaceSelector: 1 matchLabels: <label_name>: <label_value> 1 One or more matching rules for namespaces. If more than one match rule is provided, all matching namespaces are selected. The following YAML describes the optional stanza for the pod selector: Pod selector stanza podSelector: 1 matchLabels: <label_name>: <label_value> 1 Optional: One or more matching rules for pods in the namespaces that match the specified namespaceSelector rules. If specified, only pods that match are selected. Other pods in the namespace are not selected. In the following example, the EgressIP object associates the 192.168.126.11 and 192.168.126.102 egress IP addresses with pods that have the app label set to web and are in the namespaces that have the env label set to prod : Example EgressIP object apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod In the following example, the EgressIP object associates the 192.168.127.30 and 192.168.127.40 egress IP addresses with any pods that do not have the environment label set to development : Example EgressIP object apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development 24.18.3. The egressIPConfig object As a feature of egress IP, the reachabilityTotalTimeoutSeconds parameter configures the EgressIP node reachability check total timeout in seconds. If the EgressIP node cannot be reached within this timeout, the node is declared down. You can set a value for the reachabilityTotalTimeoutSeconds in the configuration file for the egressIPConfig object. Setting a large value might cause the EgressIP implementation to react slowly to node changes, for example when an EgressIP node has an issue and is unreachable. If you omit the reachabilityTotalTimeoutSeconds parameter from the egressIPConfig object, the platform chooses a reasonable default value, which is subject to change over time. The current default is 1 second. A value of 0 disables the reachability check for the EgressIP node.
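If you prefer not to edit the full Network manifest, you can set only this field with a merge patch. The following command is a sketch that mirrors the patch syntax used for enabling IPsec earlier in this document; the value 5 is an example timeout, not a recommendation:

USD oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"egressIPConfig":{"reachabilityTotalTimeoutSeconds":5}}}}}'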
The following egressIPConfig object describes changing the reachabilityTotalTimeoutSeconds from the default of 1 second to 5 seconds: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: egressIPConfig: 1 reachabilityTotalTimeoutSeconds: 5 2 gatewayConfig: routingViaHost: false genevePort: 6081 1 The egressIPConfig holds the configurations for the options of the EgressIP object. By changing these configurations, you can extend the EgressIP object. 2 The value for reachabilityTotalTimeoutSeconds accepts integer values from 0 to 60 . A value of 0 disables the reachability check of the egressIP node. Setting a value from 1 to 60 corresponds to the timeout in seconds for a probe to send the reachability check to the node. 24.18.4. Labeling a node to host egress IP addresses You can apply the k8s.ovn.org/egress-assignable="" label to a node in your cluster so that OpenShift Container Platform can assign one or more egress IP addresses to the node. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a cluster administrator. Procedure To label a node so that it can host one or more egress IP addresses, enter the following command: USD oc label nodes <node_name> k8s.ovn.org/egress-assignable="" 1 1 The name of the node to label. Tip You can alternatively apply the following YAML to add the label to a node: apiVersion: v1 kind: Node metadata: labels: k8s.ovn.org/egress-assignable: "" name: <node_name> 24.18.5. Next steps Assigning egress IPs 24.18.6. Additional resources LabelSelector meta/v1 LabelSelectorRequirement meta/v1 24.19. Assigning an egress IP address As a cluster administrator, you can assign an egress IP address for traffic leaving the cluster from a namespace or from specific pods in a namespace. 24.19.1. Assigning an egress IP address to a namespace You can assign one or more egress IP addresses to a namespace or to specific pods in a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a cluster administrator. Configure at least one node to host an egress IP address. Procedure Create an EgressIP object: Create a <egressips_name>.yaml file where <egressips_name> is the name of the object. In the file that you created, define an EgressIP object, as in the following example: apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-project1 spec: egressIPs: - 192.168.127.10 - 192.168.127.11 namespaceSelector: matchLabels: env: qa To create the object, enter the following command. USD oc apply -f <egressips_name>.yaml 1 1 Replace <egressips_name> with the name of the object. Example output egressips.k8s.ovn.org/<egressips_name> created Optional: Store the <egressips_name>.yaml file so that you can make changes later. Add labels to the namespace that requires egress IP addresses. To add a label to the namespace of an EgressIP object defined in step 1, run the following command: USD oc label ns <namespace> env=qa 1 1 Replace <namespace> with the namespace that requires egress IP addresses. Verification To show all egress IPs that are in use in your cluster, enter the following command: USD oc get egressip -o yaml Note The command oc get egressip only returns one egress IP address regardless of how many are configured. This is not a bug and is a limitation of Kubernetes. As a workaround, you can pass in the -o yaml or -o json flags to return all egress IP addresses in use. Example output # ...
spec: egressIPs: - 192.168.127.10 - 192.168.127.11 # ... 24.19.2. Additional resources Configuring egress IP addresses 24.20. Configuring an egress service As a cluster administrator, you can configure egress traffic for pods behind a load balancer service by using an egress service. Important Egress service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use the EgressService custom resource (CR) to manage egress traffic in the following ways: Assign a load balancer service IP address as the source IP address for egress traffic for pods behind the load balancer service. Assigning the load balancer IP address as the source IP address in this context is useful to present a single point of egress and ingress. For example, in some scenarios, an external system communicating with an application behind a load balancer service can expect the source and destination IP address for the application to be the same. Note When you assign the load balancer service IP address to egress traffic for pods behind the service, OVN-Kubernetes restricts the ingress and egress point to a single node. This limits the load balancing of traffic that MetalLB typically provides. Assign the egress traffic for pods behind a load balancer to a different network than the default node network. This is useful to assign the egress traffic for applications behind a load balancer to a different network than the default network. Typically, the different network is implemented by using a VRF instance associated with a network interface. 24.20.1. Egress service custom resource Define the configuration for an egress service in an EgressService custom resource. The following YAML describes the fields for the configuration of an egress service: apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: <egress_service_name> 1 namespace: <namespace> 2 spec: sourceIPBy: <egress_traffic_ip> 3 nodeSelector: 4 matchLabels: node-role.kubernetes.io/<role>: "" network: <egress_traffic_network> 5 1 Specify the name for the egress service. The name of the EgressService resource must match the name of the load-balancer service that you want to modify. 2 Specify the namespace for the egress service. The namespace for the EgressService must match the namespace of the load-balancer service that you want to modify. The egress service is namespace-scoped. 3 Specify the source IP address of egress traffic for pods behind a service. Valid values are LoadBalancerIP or Network . Use the LoadBalancerIP value to assign the LoadBalancer service ingress IP address as the source IP address for egress traffic. Specify Network to assign the network interface IP address as the source IP address for egress traffic. 4 Optional: If you use the LoadBalancerIP value for the sourceIPBy specification, a single node handles the LoadBalancer service traffic. Use the nodeSelector field to limit which node can be assigned this task. 
When a node is selected to handle the service traffic, OVN-Kubernetes labels the node in the following format: egress-service.k8s.ovn.org/<svc-namespace>-<svc-name>: "" . When the nodeSelector field is not specified, any node can manage the LoadBalancer service traffic. 5 Optional: Specify the routing table for egress traffic. If you do not include the network specification, the egress service uses the default host network. Example egress service specification apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: test-egress-service namespace: test-namespace spec: sourceIPBy: "LoadBalancerIP" nodeSelector: matchLabels: vrf: "true" network: "2" 24.20.2. Deploying an egress service You can deploy an egress service to manage egress traffic for pods behind a LoadBalancer service. The following example configures the egress traffic to have the same source IP address as the ingress IP address of the LoadBalancer service. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. You configured MetalLB BGPPeer resources. Procedure Create an IPAddressPool CR with the desired IP for the service: Create a file, such as ip-addr-pool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: example-pool namespace: metallb-system spec: addresses: - 172.19.0.100/32 Apply the configuration for the IP address pool by running the following command: USD oc apply -f ip-addr-pool.yaml Create Service and EgressService CRs: Create a file, such as service-egress-service.yaml , with content like the following example: apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace annotations: metallb.universe.tf/address-pool: example-pool 1 spec: selector: app: example ports: - name: http protocol: TCP port: 8080 targetPort: 8080 type: LoadBalancer --- apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: example-service namespace: example-namespace spec: sourceIPBy: "LoadBalancerIP" 2 nodeSelector: 3 matchLabels: node-role.kubernetes.io/worker: "" 1 The LoadBalancer service uses the IP address assigned by MetalLB from the example-pool IP address pool. 2 This example uses the LoadBalancerIP value to assign the ingress IP address of the LoadBalancer service as the source IP address of egress traffic. 3 When you specify the LoadBalancerIP value, a single node handles the LoadBalancer service's traffic. In this example, only nodes with the worker label can be selected to handle the traffic. When a node is selected, OVN-Kubernetes labels the node in the following format egress-service.k8s.ovn.org/<svc-namespace>-<svc-name>: "" . Note If you use the sourceIPBy: "LoadBalancerIP" setting, you must specify the load-balancer node in the BGPAdvertisement custom resource (CR). Apply the configuration for the service and egress service by running the following command: USD oc apply -f service-egress-service.yaml Create a BGPAdvertisement CR to advertise the service: Create a file, such as service-bgp-advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example-bgp-adv namespace: metallb-system spec: ipAddressPools: - example-pool nodeSelector: - matchLabels: egress-service.k8s.ovn.org/example-namespace-example-service: "" 1 1 In this example, the EgressService CR configures the source IP address for egress traffic to use the load-balancer service IP address. 
Therefore, you must specify the load-balancer node for return traffic to use the same return path for the traffic originating from the pod. Verification Verify that you can access the application endpoint of the pods running behind the MetalLB service by running the following command: USD curl <external_ip_address>:<port_number> 1 1 Update the external IP address and port number to suit your application endpoint. If you assigned the LoadBalancer service's ingress IP address as the source IP address for egress traffic, verify this configuration by using tools such as tcpdump to analyze packets received at the external client. Additional resources Exposing a service through a network VRF Example: Network interface with a VRF instance node network configuration policy Managing symmetric routing with MetalLB About virtual routing and forwarding 24.21. Considerations for the use of an egress router pod 24.21.1. About an egress router pod The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. An egress router pod can send network traffic to servers that are set up to allow access only from specific IP addresses. Note The egress router pod is not intended for every outgoing connection. Creating large numbers of egress router pods can exceed the limits of your network hardware. For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software. Important The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic. 24.21.1.1. Egress router modes In redirect mode , an egress router pod configures iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> Note The egress router CNI plugin supports redirect mode only. This is a difference with the egress router implementation that you can deploy with OpenShift SDN. Unlike the egress router for OpenShift SDN, the egress router CNI plugin does not support HTTP proxy mode or DNS proxy mode. 24.21.1.2. Egress router pod implementation The egress router implementation uses the egress router Container Network Interface (CNI) plugin. The plugin adds a secondary network interface to a pod. An egress router is a pod that has two network interfaces. For example, the pod can have eth0 and net1 network interfaces. The eth0 interface is on the cluster network and the pod continues to use the interface for ordinary cluster-related network traffic. The net1 interface is on a secondary network and has an IP address and gateway for that network. Other pods in the OpenShift Container Platform cluster can access the egress router service and the service enables the pods to access external services. The egress router acts as a bridge between pods and an external system. Traffic that leaves the egress router exits through a node, but the packets have the MAC address of the net1 interface from the egress router pod. 
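As an illustration, you can list the two interfaces described above from a running egress router pod. This sketch assumes that the pod image provides the ip utility, uses the app=egress-router-cni pod label described in the failover section that follows, and must be run in the namespace that contains the egress router:

USD oc exec -it USD(oc get pod -l app=egress-router-cni -o jsonpath="{.items[0].metadata.name}") -- ip addr show

The output, if the utility is available, shows the eth0 interface with a cluster network address and the net1 interface with the reserved source IP address. If the image does not include a shell or the ip utility, use the node-level debug technique shown in the verification steps later in this document instead.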
When you add an egress router custom resource, the Cluster Network Operator creates the following objects: The network attachment definition for the net1 secondary network interface of the pod. A deployment for the egress router. If you delete an egress router custom resource, the Operator deletes the two objects in the preceding list that are associated with the egress router. 24.21.1.3. Deployment considerations An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address. Red Hat OpenStack Platform (RHOSP) If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment. If you do not allow the traffic, then communication will fail: USD openstack port set --allowed-address \ ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid> VMware vSphere If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches . View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client. Specifically, ensure that the following are enabled: MAC Address Changes Forged Transmits Promiscuous Mode Operation 24.21.1.4. Failover configuration To avoid downtime, the Cluster Network Operator deploys the egress router pod as a deployment resource. The deployment name is egress-router-cni-deployment . The pod that corresponds to the deployment has a label of app=egress-router-cni . To create a new service for the deployment, use the oc expose deployment/egress-router-cni-deployment --port <port_number> command or create a file like the following example: apiVersion: v1 kind: Service metadata: name: app-egress spec: ports: - name: tcp-8080 protocol: TCP port: 8080 - name: tcp-8443 protocol: TCP port: 8443 - name: udp-80 protocol: UDP port: 80 type: ClusterIP selector: app: egress-router-cni 24.21.2. Additional resources Deploying an egress router in redirection mode 24.22. Deploying an egress router pod in redirect mode As a cluster administrator, you can deploy an egress router pod to redirect traffic to specified destination IP addresses from a reserved source IP address. The egress router implementation uses the egress router Container Network Interface (CNI) plugin. 24.22.1. Egress router custom resource Define the configuration for an egress router pod in an egress router custom resource. The following YAML describes the fields for the configuration of an egress router in redirect mode: apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: <egress_router_name> namespace: <namespace> 1 spec: addresses: [ 2 { ip: "<egress_router>", 3 gateway: "<egress_gateway>" 4 } ] mode: Redirect redirect: { redirectRules: [ 5 { destinationIP: "<egress_destination>", port: <egress_router_port>, targetPort: <target_port>, 6 protocol: <network_protocol> 7 }, ... ], fallbackIP: "<egress_destination>" 8 } 1 Optional: The namespace field specifies the namespace to create the egress router in. If you do not specify a value in the file or on the command line, the default namespace is used. 2 The addresses field specifies the IP addresses to configure on the secondary network interface. 3 The ip field specifies the reserved source IP address and netmask from the physical network that the node is on to use with the egress router pod.
Use CIDR notation to specify the IP address and netmask. 4 The gateway field specifies the IP address of the network gateway. 5 Optional: The redirectRules field specifies a combination of egress destination IP address, egress router port, and protocol. Incoming connections to the egress router on the specified port and protocol are routed to the destination IP address. 6 Optional: The targetPort field specifies the network port on the destination IP address. If this field is not specified, traffic is routed to the same network port that it arrived on. 7 The protocol field supports TCP, UDP, or SCTP. 8 Optional: The fallbackIP field specifies a destination IP address. If you do not specify any redirect rules, the egress router sends all traffic to this fallback IP address. If you specify redirect rules, any connections to network ports that are not defined in the rules are sent by the egress router to this fallback IP address. If you do not specify this field, the egress router rejects connections to network ports that are not defined in the rules. Example egress router specification apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: egress-router-redirect spec: networkInterface: { macvlan: { mode: "Bridge" } } addresses: [ { ip: "192.168.12.99/24", gateway: "192.168.12.1" } ] mode: Redirect redirect: { redirectRules: [ { destinationIP: "10.0.0.99", port: 80, protocol: UDP }, { destinationIP: "203.0.113.26", port: 8080, targetPort: 80, protocol: TCP }, { destinationIP: "203.0.113.27", port: 8443, targetPort: 443, protocol: TCP } ] } 24.22.2. Deploying an egress router in redirect mode You can deploy an egress router to redirect traffic from its own reserved source IP address to one or more destination IP addresses. After you add an egress router, the client pods that need to use the reserved source IP address must be modified to connect to the egress router rather than connecting directly to the destination IP. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router definition. To ensure that other pods can find the IP address of the egress router pod, create a service that uses the egress router, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: web-app protocol: TCP port: 8080 type: ClusterIP selector: app: egress-router-cni 1 1 Specify the label for the egress router. The value shown is added by the Cluster Network Operator and is not configurable. After you create the service, your pods can connect to the service. The egress router pod redirects traffic to the corresponding port on the destination IP address. The connections originate from the reserved source IP address. Verification To verify that the Cluster Network Operator started the egress router, complete the following procedure: View the network attachment definition that the Operator created for the egress router: USD oc get network-attachment-definition egress-router-cni-nad The name of the network attachment definition is not configurable. Example output NAME AGE egress-router-cni-nad 18m View the deployment for the egress router pod: USD oc get deployment egress-router-cni-deployment The name of the deployment is not configurable. 
Example output NAME READY UP-TO-DATE AVAILABLE AGE egress-router-cni-deployment 1/1 1 1 18m View the status of the egress router pod: USD oc get pods -l app=egress-router-cni Example output NAME READY STATUS RESTARTS AGE egress-router-cni-deployment-575465c75c-qkq6m 1/1 Running 0 18m View the logs and the routing table for the egress router pod. Get the node name for the egress router pod: USD POD_NODENAME=USD(oc get pod -l app=egress-router-cni -o jsonpath="{.items[0].spec.nodeName}") Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/USDPOD_NODENAME Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the host in /host within the pod. By changing the root directory to /host , you can run binaries from the executable paths of the host: # chroot /host From within the chroot environment console, display the egress router logs: # cat /tmp/egress-router-log Example output 2021-04-26T12:27:20Z [debug] Called CNI ADD 2021-04-26T12:27:20Z [debug] Gateway: 192.168.12.1 2021-04-26T12:27:20Z [debug] IP Source Addresses: [192.168.12.99/24] 2021-04-26T12:27:20Z [debug] IP Destinations: [80 UDP 10.0.0.99/30 8080 TCP 203.0.113.26/30 80 8443 TCP 203.0.113.27/30 443] 2021-04-26T12:27:20Z [debug] Created macvlan interface 2021-04-26T12:27:20Z [debug] Renamed macvlan to "net1" 2021-04-26T12:27:20Z [debug] Adding route to gateway 192.168.12.1 on macvlan interface 2021-04-26T12:27:20Z [debug] deleted default route {Ifindex: 3 Dst: <nil> Src: <nil> Gw: 10.128.10.1 Flags: [] Table: 254} 2021-04-26T12:27:20Z [debug] Added new default route with gateway 192.168.12.1 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p UDP --dport 80 -j DNAT --to-destination 10.0.0.99 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8080 -j DNAT --to-destination 203.0.113.26:80 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8443 -j DNAT --to-destination 203.0.113.27:443 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat -o net1 -j SNAT --to-source 192.168.12.99 The logging file location and logging level are not configurable when you start the egress router by creating an EgressRouter object as described in this procedure. From within the chroot environment console, get the container ID: # crictl ps --name egress-router-cni-pod | awk '{print USD1}' Example output CONTAINER bac9fae69ddb6 Determine the process ID of the container. In this example, the container ID is bac9fae69ddb6 : # crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print USD2}' Example output 68857 Enter the network namespace of the container: # nsenter -n -t 68857 Display the routing table: # ip route In the following example output, the net1 network interface is the default route. Traffic for the cluster network uses the eth0 network interface. Traffic for the 192.168.12.0/24 network uses the net1 network interface and originates from the reserved source IP address 192.168.12.99 . The pod routes all other traffic to the gateway at IP address 192.168.12.1 . Routing for the service network is not shown. Example output default via 192.168.12.1 dev net1 10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18 192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99 192.168.12.1 dev net1 24.23. Enabling multicast for a project 24.23.1. 
About multicast With IP multicast, data is broadcast to many IP addresses simultaneously. Important At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution. By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications to the exemption of multicast from network policies before enabling it. Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OVN-Kubernetes network plugin, you can enable multicast on a per-project basis. 24.23.2. Enabling multicast between pods You can enable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for. USD oc annotate namespace <namespace> \ k8s.ovn.org/multicast-enabled=true Tip You can alternatively apply the following YAML to add the annotation: apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: "true" Verification To verify that multicast is enabled for a project, complete the following procedure: Change your current project to the project that you enabled multicast for. Replace <project> with the project name. USD oc project <project> Create a pod to act as a multicast receiver: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat hostname && sleep inf"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF Create a pod to act as a multicast sender: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat && sleep inf"] EOF In a new terminal window or tab, start the multicast listener. Get the IP address for the Pod: USD POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}') Start the multicast listener by entering the following command: USD oc exec mlistener -i -t -- \ socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname Start the multicast transmitter. Get the pod network IP address range: USD CIDR=USD(oc get Network.config.openshift.io cluster \ -o jsonpath='{.status.clusterNetwork[0].cidr}') To send a multicast message, enter the following command: USD oc exec msender -i -t -- \ /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64" If multicast is working, the command returns the following output: mlistener 24.24. Disabling multicast for a project 24.24.1. Disabling multicast between pods You can disable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. 
Procedure Disable multicast by running the following command: USD oc annotate namespace <namespace> \ 1 k8s.ovn.org/multicast-enabled- 1 The namespace for the project you want to disable multicast for. Tip You can alternatively apply the following YAML to delete the annotation: apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: null 24.25. Tracking network flows As a cluster administrator, you can collect information about pod network flows from your cluster to assist with the following areas: Monitor ingress and egress traffic on the pod network. Troubleshoot performance issues. Gather data for capacity planning and security audits. When you enable the collection of the network flows, only the metadata about the traffic is collected. For example, packet data is not collected, but the protocol, source address, destination address, port numbers, number of bytes, and other packet-level information is collected. The data is collected in one or more of the following record formats: NetFlow sFlow IPFIX When you configure the Cluster Network Operator (CNO) with one or more collector IP addresses and port numbers, the Operator configures Open vSwitch (OVS) on each node to send the network flows records to each collector. You can configure the Operator to send records to more than one type of network flow collector. For example, you can send records to NetFlow collectors and also send records to sFlow collectors. When OVS sends data to the collectors, each type of collector receives identical records. For example, if you configure two NetFlow collectors, OVS on a node sends identical records to the two collectors. If you also configure two sFlow collectors, the two sFlow collectors receive identical records. However, each collector type has a unique record format. Collecting the network flows data and sending the records to collectors affects performance. Nodes process packets at a slower rate. If the performance impact is too great, you can delete the destinations for collectors to disable collecting network flows data and restore performance. Note Enabling network flow collectors might have an impact on the overall performance of the cluster network. 24.25.1. Network object configuration for tracking network flows The fields for configuring network flows collectors in the Cluster Network Operator (CNO) are shown in the following table: Table 24.17. Network flows configuration Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.exportNetworkFlows object One or more of netFlow , sFlow , or ipfix . spec.exportNetworkFlows.netFlow.collectors array A list of IP address and network port pairs for up to 10 collectors. spec.exportNetworkFlows.sFlow.collectors array A list of IP address and network port pairs for up to 10 collectors. spec.exportNetworkFlows.ipfix.collectors array A list of IP address and network port pairs for up to 10 collectors. After applying the following manifest to the CNO, the Operator configures Open vSwitch (OVS) on each node in the cluster to send network flows records to the NetFlow collector that is listening at 192.168.1.99:2056 . Example configuration for tracking network flows apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056 24.25.2. 
Adding destinations for network flows collectors As a cluster administrator, you can configure the Cluster Network Operator (CNO) to send network flows metadata about the pod network to a network flows collector. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You have a network flows collector and know the IP address and port that it listens on. Procedure Create a patch file that specifies the network flows collector type and the IP address and port information of the collectors: spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056 Configure the CNO with the network flows collectors: USD oc patch network.operator cluster --type merge -p "USD(cat <file_name>.yaml)" Example output network.operator.openshift.io/cluster patched Verification Verification is not typically necessary. You can run the following command to confirm that Open vSwitch (OVS) on each node is configured to send network flows records to one or more collectors. View the Operator configuration to confirm that the exportNetworkFlows field is configured: USD oc get network.operator cluster -o jsonpath="{.spec.exportNetworkFlows}" Example output {"netFlow":{"collectors":["192.168.1.99:2056"]}} View the network flows configuration in OVS from each node: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'); do echo; echo USDpod; oc -n openshift-ovn-kubernetes exec -c ovnkube-controller USDpod \ -- bash -c 'for type in ipfix sflow netflow ; do ovs-vsctl find USDtype ; done'; done Example output ovnkube-node-xrn4p _uuid : a4d2aaca-5023-4f3d-9400-7275f92611f9 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : ["192.168.1.99:2056"] ovnkube-node-z4vq9 _uuid : 61d02fdb-9228-4993-8ff5-b27f01a29bd6 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : ["192.168.1.99:2056"] ... 24.25.3. Deleting all destinations for network flows collectors As a cluster administrator, you can configure the Cluster Network Operator (CNO) to stop sending network flows metadata to a network flows collector. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Procedure Remove all network flows collectors: USD oc patch network.operator cluster --type='json' \ -p='[{"op":"remove", "path":"/spec/exportNetworkFlows"}]' Example output network.operator.openshift.io/cluster patched 24.25.4. Additional resources Network [operator.openshift.io/v1] 24.26. Configuring hybrid networking As a cluster administrator, you can configure the Red Hat OpenShift Networking OVN-Kubernetes network plugin to allow Linux and Windows nodes to host Linux and Windows workloads, respectively. 24.26.1. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a user with cluster-admin privileges. Ensure that the cluster uses the OVN-Kubernetes network plugin.
Procedure To configure the OVN-Kubernetes hybrid network overlay, enter the following command: USD oc patch networks.operator.openshift.io cluster --type=merge \ -p '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "hybridOverlayConfig":{ "hybridClusterNetwork":[ { "cidr": "<cidr>", "hostPrefix": <prefix> } ], "hybridOverlayVXLANPort": <overlay_port> } } } } }' where: cidr Specify the CIDR configuration used for nodes on the additional overlay network. This CIDR must not overlap with the cluster network CIDR. hostPrefix Specifies the subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. hybridOverlayVXLANPort Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Example output network.operator.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply. USD oc get network.operator.openshift.io -o jsonpath="{.items[0].spec.defaultNetwork.ovnKubernetesConfig}" 24.26.2. Additional resources Understanding Windows container workloads Enabling Windows container workloads Installing a cluster on AWS with network customizations Installing a cluster on Azure with network customizations | [
"I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4",
"I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface",
"oc get all,ep,cm -n openshift-ovn-kubernetes",
"Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ NAME READY STATUS RESTARTS AGE pod/ovnkube-control-plane-65c6f55656-6d55h 2/2 Running 0 114m pod/ovnkube-control-plane-65c6f55656-fd7vw 2/2 Running 2 (104m ago) 114m pod/ovnkube-node-bcvts 8/8 Running 0 113m pod/ovnkube-node-drgvv 8/8 Running 0 113m pod/ovnkube-node-f2pxt 8/8 Running 0 113m pod/ovnkube-node-frqsb 8/8 Running 0 105m pod/ovnkube-node-lbxkk 8/8 Running 0 105m pod/ovnkube-node-tt7bx 8/8 Running 1 (102m ago) 105m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ovn-kubernetes-control-plane ClusterIP None <none> 9108/TCP 114m service/ovn-kubernetes-node ClusterIP None <none> 9103/TCP,9105/TCP 114m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/ovnkube-node 6 6 6 6 6 beta.kubernetes.io/os=linux 114m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/ovnkube-control-plane 3/3 3 3 114m NAME DESIRED CURRENT READY AGE replicaset.apps/ovnkube-control-plane-65c6f55656 3 3 3 114m NAME ENDPOINTS AGE endpoints/ovn-kubernetes-control-plane 10.0.0.3:9108,10.0.0.4:9108,10.0.0.5:9108 114m endpoints/ovn-kubernetes-node 10.0.0.3:9105,10.0.0.4:9105,10.0.0.5:9105 + 9 more... 114m NAME DATA AGE configmap/control-plane-status 1 113m configmap/kube-root-ca.crt 1 114m configmap/openshift-service-ca.crt 1 114m configmap/ovn-ca 1 114m configmap/ovnkube-config 1 114m configmap/signer-ca 1 114m",
"oc get pods ovnkube-node-bcvts -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes",
"ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller",
"oc get pods ovnkube-control-plane-65c6f55656-6d55h -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes",
"kube-rbac-proxy ovnkube-cluster-manager",
"oc get po -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m ovnkube-node-55xs2 8/8 Running 0 26m ovnkube-node-7r84r 8/8 Running 0 16m ovnkube-node-bqq8p 8/8 Running 0 17m ovnkube-node-mkj4f 8/8 Running 0 26m ovnkube-node-mlr8k 8/8 Running 0 26m ovnkube-node-wqn2m 8/8 Running 0 16m",
"oc get pods -n openshift-ovn-kubernetes -owide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-55xs2 8/8 Running 0 26m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-7r84r 8/8 Running 0 17m 10.0.128.3 ci-ln-t487nnb-72292-mdcnq-worker-b-wbz7z <none> <none> ovnkube-node-bqq8p 8/8 Running 0 17m 10.0.128.2 ci-ln-t487nnb-72292-mdcnq-worker-a-lh7ms <none> <none> ovnkube-node-mkj4f 8/8 Running 0 27m 10.0.0.5 ci-ln-t487nnb-72292-mdcnq-master-0 <none> <none> ovnkube-node-mlr8k 8/8 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-node-wqn2m 8/8 Running 0 17m 10.0.128.4 ci-ln-t487nnb-72292-mdcnq-worker-c-przlm <none> <none>",
"oc rsh -c nbdb -n openshift-ovn-kubernetes ovnkube-node-55xs2",
"ovn-nbctl show",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c northd -- ovn-nbctl lr-list",
"45339f4f-7d0b-41d0-b5f9-9fca9ce40ce6 (GR_ci-ln-t487nnb-72292-mdcnq-master-2) 96a0a0f0-e7ed-4fec-8393-3195563de1b8 (ovn_cluster_router)",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c nbdb -- ovn-nbctl ls-list",
"bdd7dc3d-d848-4a74-b293-cc15128ea614 (ci-ln-t487nnb-72292-mdcnq-master-2) b349292d-ee03-4914-935f-1940b6cb91e5 (ext_ci-ln-t487nnb-72292-mdcnq-master-2) 0aac0754-ea32-4e33-b086-35eeabf0a140 (join) 992509d7-2c3f-4432-88db-c179e43592e5 (transit_switch)",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c nbdb -- ovn-nbctl lb-list",
"UUID LB PROTO VIP IPs 7c84c673-ed2a-4436-9a1f-9bc5dd181eea Service_default/ tcp 172.30.0.1:443 10.0.0.3:6443,169.254.169.2:6443,10.0.0.5:6443 4d663fd9-ddc8-4271-b333-4c0e279e20bb Service_default/ tcp 172.30.0.1:443 10.0.0.3:6443,10.0.0.4:6443,10.0.0.5:6443 292eb07f-b82f-4962-868a-4f541d250bca Service_openshif tcp 172.30.105.247:443 10.129.0.12:8443 034b5a7f-bb6a-45e9-8e6d-573a82dc5ee3 Service_openshif tcp 172.30.192.38:443 10.0.0.3:10259,10.0.0.4:10259,10.0.0.5:10259 a68bb53e-be84-48df-bd38-bdd82fcd4026 Service_openshif tcp 172.30.161.125:8443 10.129.0.32:8443 6cc21b3d-2c54-4c94-8ff5-d8e017269c2e Service_openshif tcp 172.30.3.144:443 10.129.0.22:8443 37996ffd-7268-4862-a27f-61cd62e09c32 Service_openshif tcp 172.30.181.107:443 10.129.0.18:8443 81d4da3c-f811-411f-ae0c-bc6713d0861d Service_openshif tcp 172.30.228.23:443 10.129.0.29:8443 ac5a4f3b-b6ba-4ceb-82d0-d84f2c41306e Service_openshif tcp 172.30.14.240:9443 10.129.0.36:9443 c88979fb-1ef5-414b-90ac-43b579351ac9 Service_openshif tcp 172.30.231.192:9001 10.128.0.5:9001,10.128.2.5:9001,10.129.0.5:9001,10.129.2.4:9001,10.130.0.3:9001,10.131.0.3:9001 fcb0a3fb-4a77-4230-a84a-be45dce757e8 Service_openshif tcp 172.30.189.92:443 10.130.0.17:8440 67ef3e7b-ceb9-4bf0-8d96-b43bde4c9151 Service_openshif tcp 172.30.67.218:443 10.129.0.9:8443 d0032fba-7d5e-424a-af25-4ab9b5d46e81 Service_openshif tcp 172.30.102.137:2379 10.0.0.3:2379,10.0.0.4:2379,10.0.0.5:2379 tcp 172.30.102.137:9979 10.0.0.3:9979,10.0.0.4:9979,10.0.0.5:9979 7361c537-3eec-4e6c-bc0c-0522d182abd4 Service_openshif tcp 172.30.198.215:9001 10.0.0.3:9001,10.0.0.4:9001,10.0.0.5:9001,10.0.128.2:9001,10.0.128.3:9001,10.0.128.4:9001 0296c437-1259-410b-a6fd-81c310ad0af5 Service_openshif tcp 172.30.198.215:9001 10.0.0.3:9001,169.254.169.2:9001,10.0.0.5:9001,10.0.128.2:9001,10.0.128.3:9001,10.0.128.4:9001 5d5679f5-45b8-479d-9f7c-08b123c688b8 Service_openshif tcp 172.30.38.253:17698 10.128.0.52:17698,10.129.0.84:17698,10.130.0.60:17698 2adcbab4-d1c9-447d-9573-b5dc9f2efbfa Service_openshif tcp 172.30.148.52:443 10.0.0.4:9202,10.0.0.5:9202 tcp 172.30.148.52:444 10.0.0.4:9203,10.0.0.5:9203 tcp 172.30.148.52:445 10.0.0.4:9204,10.0.0.5:9204 tcp 172.30.148.52:446 10.0.0.4:9205,10.0.0.5:9205 2a33a6d7-af1b-4892-87cc-326a380b809b Service_openshif tcp 172.30.67.219:9091 10.129.2.16:9091,10.131.0.16:9091 tcp 172.30.67.219:9092 10.129.2.16:9092,10.131.0.16:9092 tcp 172.30.67.219:9093 10.129.2.16:9093,10.131.0.16:9093 tcp 172.30.67.219:9094 10.129.2.16:9094,10.131.0.16:9094 f56f59d7-231a-4974-99b3-792e2741ec8d Service_openshif tcp 172.30.89.212:443 10.128.0.41:8443,10.129.0.68:8443,10.130.0.44:8443 08c2c6d7-d217-4b96-b5d8-c80c4e258116 Service_openshif tcp 172.30.102.137:2379 10.0.0.3:2379,169.254.169.2:2379,10.0.0.5:2379 tcp 172.30.102.137:9979 10.0.0.3:9979,169.254.169.2:9979,10.0.0.5:9979 60a69c56-fc6a-4de6-bd88-3f2af5ba5665 Service_openshif tcp 172.30.10.193:443 10.129.0.25:8443 ab1ef694-0826-4671-a22c-565fc2d282ec Service_openshif tcp 172.30.196.123:443 10.128.0.33:8443,10.129.0.64:8443,10.130.0.37:8443 b1fb34d3-0944-4770-9ee3-2683e7a630e2 Service_openshif tcp 172.30.158.93:8443 10.129.0.13:8443 95811c11-56e2-4877-be1e-c78ccb3a82a9 Service_openshif tcp 172.30.46.85:9001 10.130.0.16:9001 4baba1d1-b873-4535-884c-3f6fc07a50fd Service_openshif tcp 172.30.28.87:443 10.129.0.26:8443 6c2e1c90-f0ca-484e-8a8e-40e71442110a Service_openshif udp 172.30.0.10:53 10.128.0.13:5353,10.128.2.6:5353,10.129.0.39:5353,10.129.2.6:5353,10.130.0.11:5353,10.131.0.9:5353",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c nbdb ovn-nbctl --help",
"oc get po -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m ovnkube-node-55xs2 8/8 Running 0 26m ovnkube-node-7r84r 8/8 Running 0 16m ovnkube-node-bqq8p 8/8 Running 0 17m ovnkube-node-mkj4f 8/8 Running 0 26m ovnkube-node-mlr8k 8/8 Running 0 26m ovnkube-node-wqn2m 8/8 Running 0 16m",
"oc get pods -n openshift-ovn-kubernetes -owide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-55xs2 8/8 Running 0 26m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-7r84r 8/8 Running 0 17m 10.0.128.3 ci-ln-t487nnb-72292-mdcnq-worker-b-wbz7z <none> <none> ovnkube-node-bqq8p 8/8 Running 0 17m 10.0.128.2 ci-ln-t487nnb-72292-mdcnq-worker-a-lh7ms <none> <none> ovnkube-node-mkj4f 8/8 Running 0 27m 10.0.0.5 ci-ln-t487nnb-72292-mdcnq-master-0 <none> <none> ovnkube-node-mlr8k 8/8 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-node-wqn2m 8/8 Running 0 17m 10.0.128.4 ci-ln-t487nnb-72292-mdcnq-worker-c-przlm <none> <none>",
"oc rsh -c sbdb -n openshift-ovn-kubernetes ovnkube-node-55xs2",
"ovn-sbctl show",
"Chassis \"5db31703-35e9-413b-8cdf-69e7eecb41f7\" hostname: ci-ln-9gp362t-72292-v2p94-worker-a-8bmwz Encap geneve ip: \"10.0.128.4\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-worker-a-8bmwz Chassis \"070debed-99b7-4bce-b17d-17e720b7f8bc\" hostname: ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Encap geneve ip: \"10.0.128.2\" options: {csum=\"true\"} Port_Binding k8s-ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding rtoe-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-monitoring_alertmanager-main-1 Port_Binding rtoj-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding etor-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding cr-rtos-ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-e2e-loki_loki-promtail-qcrcz Port_Binding jtor-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-multus_network-metrics-daemon-mkd4t Port_Binding openshift-ingress-canary_ingress-canary-xtvj4 Port_Binding openshift-ingress_router-default-6c76cbc498-pvlqk Port_Binding openshift-dns_dns-default-zz582 Port_Binding openshift-monitoring_thanos-querier-57585899f5-lbf4f Port_Binding openshift-network-diagnostics_network-check-target-tn228 Port_Binding openshift-monitoring_prometheus-k8s-0 Port_Binding openshift-image-registry_image-registry-68899bd877-xqxjj Chassis \"179ba069-0af1-401c-b044-e5ba90f60fea\" hostname: ci-ln-9gp362t-72292-v2p94-master-0 Encap geneve ip: \"10.0.0.5\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-0 Chassis \"68c954f2-5a76-47be-9e84-1cb13bd9dab9\" hostname: ci-ln-9gp362t-72292-v2p94-worker-c-mjf9w Encap geneve ip: \"10.0.128.3\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-worker-c-mjf9w Chassis \"2de65d9e-9abf-4b6e-a51d-a1e038b4d8af\" hostname: ci-ln-9gp362t-72292-v2p94-master-2 Encap geneve ip: \"10.0.0.4\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-2 Chassis \"1d371cb8-5e21-44fd-9025-c4b162cc4247\" hostname: ci-ln-9gp362t-72292-v2p94-master-1 Encap geneve ip: \"10.0.0.3\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-1",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c sbdb ovn-sbctl --help",
"git clone [email protected]:openshift/network-tools.git",
"cd network-tools",
"./debug-scripts/network-tools -h",
"./debug-scripts/network-tools ovn-db-run-command ovn-nbctl lr-list",
"944a7b53-7948-4ad2-a494-82b55eeccf87 (GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99) 84bd4a4c-4b0b-4a47-b0cf-a2c32709fc53 (ovn_cluster_router)",
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=localnet",
"_uuid : d05298f5-805b-4838-9224-1211afc2f199 additional_chassis : [] additional_encap : [] chassis : [] datapath : f3c2c959-743b-4037-854d-26627902597c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : br-ex_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [unknown] mirror_rules : [] nat_addresses : [] options : {network_name=physnet} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : localnet up : false virtual_parent : [] [...]",
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=l3gateway",
"_uuid : 5207a1f3-1cf3-42f1-83e9-387bbb06b03c additional_chassis : [] additional_encap : [] chassis : ca6eb600-3a10-4372-a83e-e0d957c4cd92 datapath : f3c2c959-743b-4037-854d-26627902597c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : etor-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [\"42:01:0a:00:80:04\"] mirror_rules : [] nat_addresses : [\"42:01:0a:00:80:04 10.0.128.4\"] options : {l3gateway-chassis=\"84737c36-b383-4c83-92c5-2bd5b3c7e772\", peer=rtoe-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : l3gateway up : true virtual_parent : [] _uuid : 6088d647-84f2-43f2-b53f-c9d379042679 additional_chassis : [] additional_encap : [] chassis : ca6eb600-3a10-4372-a83e-e0d957c4cd92 datapath : dc9cea00-d94a-41b8-bdb0-89d42d13aa2e encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : jtor-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [router] mirror_rules : [] nat_addresses : [] options : {l3gateway-chassis=\"84737c36-b383-4c83-92c5-2bd5b3c7e772\", peer=rtoj-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : l3gateway up : true virtual_parent : [] [...]",
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=patch",
"_uuid : 785fb8b6-ee5a-4792-a415-5b1cb855dac2 additional_chassis : [] additional_encap : [] chassis : [] datapath : f1ddd1cc-dc0d-43b4-90ca-12651305acec encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : stor-ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [router] mirror_rules : [] nat_addresses : [\"0a:58:0a:80:02:01 10.128.2.1 is_chassis_resident(\\\"cr-rtos-ci-ln-54932yb-72292-kd676-worker-c-rzj99\\\")\"] options : {peer=rtos-ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : patch up : false virtual_parent : [] _uuid : c01ff587-21a5-40b4-8244-4cd0425e5d9a additional_chassis : [] additional_encap : [] chassis : [] datapath : f6795586-bf92-4f84-9222-efe4ac6a7734 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : rtoj-ovn_cluster_router mac : [\"0a:58:64:40:00:01 100.64.0.1/16\"] mirror_rules : [] nat_addresses : [] options : {peer=jtor-ovn_cluster_router} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : patch up : false virtual_parent : [] [...]",
"oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'",
"oc get events -n openshift-ovn-kubernetes",
"oc describe pod ovnkube-node-9lqfk -n openshift-ovn-kubernetes",
"oc get co/network -o json | jq '.status.conditions[]'",
"for p in USD(oc get pods --selector app=ovnkube-node -n openshift-ovn-kubernetes -o jsonpath='{range.items[*]}{\" \"}{.metadata.name}'); do echo === USDp ===; get pods -n openshift-ovn-kubernetes USDp -o json | jq '.status.containerStatuses[] | .name, .ready'; done",
"ALERT_MANAGER=USD(oc get route alertmanager-main -n openshift-monitoring -o jsonpath='{@.spec.host}')",
"curl -s -k -H \"Authorization: Bearer USD(oc create token prometheus-k8s -n openshift-monitoring)\" https://USDALERT_MANAGER/api/v1/alerts | jq '.data[] | \"\\(.labels.severity) \\(.labels.alertname) \\(.labels.pod) \\(.labels.container) \\(.labels.endpoint) \\(.labels.instance)\"'",
"oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -s 'http://localhost:9090/api/v1/rules' | jq '.data.groups[].rules[] | select(((.name|contains(\"ovn\")) or (.name|contains(\"OVN\")) or (.name|contains(\"Ovn\")) or (.name|contains(\"North\")) or (.name|contains(\"South\"))) and .type==\"alerting\")'",
"oc logs -f <pod_name> -c <container_name> -n <namespace>",
"oc logs ovnkube-node-5dx44 -n openshift-ovn-kubernetes",
"oc logs -f ovnkube-node-5dx44 -c ovnkube-controller -n openshift-ovn-kubernetes",
"for p in USD(oc get pods --selector app=ovnkube-node -n openshift-ovn-kubernetes -o jsonpath='{range.items[*]}{\" \"}{.metadata.name}'); do echo === USDp ===; for container in USD(oc get pods -n openshift-ovn-kubernetes USDp -o json | jq -r '.status.containerStatuses[] | .name');do echo ---USDcontainer---; logs -c USDcontainer USDp -n openshift-ovn-kubernetes --tail=5; done; done",
"oc logs -l app=ovnkube-node -n openshift-ovn-kubernetes --all-containers --tail 5",
"oc get po -o wide -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-65497d4548-9ptdr 2/2 Running 2 (128m ago) 147m 10.0.0.3 ci-ln-3njdr9b-72292-5nwkp-master-0 <none> <none> ovnkube-control-plane-65497d4548-j6zfk 2/2 Running 0 147m 10.0.0.5 ci-ln-3njdr9b-72292-5nwkp-master-2 <none> <none> ovnkube-node-5dx44 8/8 Running 0 146m 10.0.0.3 ci-ln-3njdr9b-72292-5nwkp-master-0 <none> <none> ovnkube-node-dpfn4 8/8 Running 0 146m 10.0.0.4 ci-ln-3njdr9b-72292-5nwkp-master-1 <none> <none> ovnkube-node-kwc9l 8/8 Running 0 134m 10.0.128.2 ci-ln-3njdr9b-72292-5nwkp-worker-a-2fjcj <none> <none> ovnkube-node-mcrhl 8/8 Running 0 134m 10.0.128.4 ci-ln-3njdr9b-72292-5nwkp-worker-c-v9x5v <none> <none> ovnkube-node-nsct4 8/8 Running 0 146m 10.0.0.5 ci-ln-3njdr9b-72292-5nwkp-master-2 <none> <none> ovnkube-node-zrj9f 8/8 Running 0 134m 10.0.128.3 ci-ln-3njdr9b-72292-5nwkp-worker-b-v78h7 <none> <none>",
"kind: ConfigMap apiVersion: v1 metadata: name: env-overrides namespace: openshift-ovn-kubernetes data: ci-ln-3njdr9b-72292-5nwkp-master-0: | 1 # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg ci-ln-3njdr9b-72292-5nwkp-master-2: | # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg _master: | 2 # This sets the log level for the ovn-kubernetes master process as well as the ovn-dbchecker: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for northd, nbdb and sbdb on all masters: OVN_LOG_LEVEL=dbg",
"oc apply -n openshift-ovn-kubernetes -f env-overrides.yaml",
"configmap/env-overrides.yaml created",
"oc delete pod -n openshift-ovn-kubernetes --field-selector spec.nodeName=ci-ln-3njdr9b-72292-5nwkp-master-0 -l app=ovnkube-node",
"oc delete pod -n openshift-ovn-kubernetes --field-selector spec.nodeName=ci-ln-3njdr9b-72292-5nwkp-master-2 -l app=ovnkube-node",
"oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-node",
"oc logs -n openshift-ovn-kubernetes --all-containers --prefix ovnkube-node-<xxxx> | grep -E -m 10 '(Logging config:|vconsole|DBG)'",
"[pod/ovnkube-node-2cpjc/sbdb] + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor '--ovn-sb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_sb_ovsdb [pod/ovnkube-node-2cpjc/ovnkube-controller] I1012 14:39:59.984506 35767 config.go:2247] Logging config: {File: CNIFile:/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:5 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} [pod/ovnkube-node-2cpjc/northd] + exec ovn-northd --no-chdir -vconsole:info -vfile:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' --pidfile /var/run/ovn/ovn-northd.pid --n-threads=1 [pod/ovnkube-node-2cpjc/nbdb] + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor '--ovn-nb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_nb_ovsdb [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.552Z|00002|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 6 nodes (32 nodes total across 32 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00003|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 6 nodes (64 nodes total across 64 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00004|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 7 nodes (32 nodes total across 32 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00005|reconnect|DBG|unix:/var/run/openvswitch/db.sock: entering BACKOFF [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00007|reconnect|DBG|unix:/var/run/openvswitch/db.sock: entering CONNECTING [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00008|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: SERVER_SCHEMA_REQUESTED -> SERVER_SCHEMA_REQUESTED at lib/ovsdb-cs.c:423",
"for f in USD(oc -n openshift-ovn-kubernetes get po -l 'app=ovnkube-node' --no-headers -o custom-columns=N:.metadata.name) ; do echo \"---- USDf ----\" ; oc -n openshift-ovn-kubernetes exec -c ovnkube-controller USDf -- pgrep -a -f init-ovnkube-controller | grep -P -o '^.*loglevel\\s+\\d' ; done",
"---- ovnkube-node-2dt57 ---- 60981 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-c-vmh5n.c.openshift-qe.internal --init-node xpst8-worker-c-vmh5n.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-4zznh ---- 178034 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-2.c.openshift-qe.internal --init-node xpst8-master-2.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-548sx ---- 77499 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-a-fjtnb.c.openshift-qe.internal --init-node xpst8-worker-a-fjtnb.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-6btrf ---- 73781 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-b-p8rww.c.openshift-qe.internal --init-node xpst8-worker-b-p8rww.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-fkc9r ---- 130707 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-0.c.openshift-qe.internal --init-node xpst8-master-0.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 5 ---- ovnkube-node-tk9l4 ---- 181328 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-1.c.openshift-qe.internal --init-node xpst8-master-1.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.successes[0]'",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.failures[0]'",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.outages[0]'",
"oc exec prometheus-k8s-0 -n openshift-monitoring -- promtool query instant http://localhost:9090 '{component=\"openshift-network-diagnostics\"}'",
"oc exec prometheus-k8s-0 -n openshift-monitoring -- promtool query instant http://localhost:9090 '{component=\"openshift-network-diagnostics\"}'",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: sample-anp-deny-pass-rules 1 spec: priority: 50 2 subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 3 ingress: 4 - name: \"deny-all-ingress-tenant-1\" 5 action: \"Deny\" from: - pods: namespaces: 6 namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 7 egress: 8 - name: \"pass-all-egress-to-tenant-1\" action: \"Pass\" to: - pods: namespaces: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: allow-monitoring spec: priority: 9 subject: namespaces: {} ingress: - name: \"allow-ingress-from-monitoring\" action: \"Allow\" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: block-monitoring spec: priority: 5 subject: namespaces: matchLabels: security: restricted ingress: - name: \"deny-ingress-from-monitoring\" action: \"Deny\" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: pass-monitoring spec: priority: 7 subject: namespaces: matchLabels: security: internal ingress: - name: \"pass-ingress-from-monitoring\" action: \"Pass\" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default 1 spec: subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 2 ingress: 3 - name: \"deny-all-ingress-from-tenant-1\" 4 action: \"Deny\" from: - pods: namespaces: namespaceSelector: matchLabels: custom-banp: tenant-1 5 podSelector: matchLabels: custom-banp: tenant-1 6 egress: - name: \"allow-all-egress-to-tenant-1\" action: \"Allow\" to: - pods: namespaces: namespaceSelector: matchLabels: custom-banp: tenant-1 podSelector: matchLabels: custom-banp: tenant-1",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: matchLabels: security: internal ingress: - name: \"deny-ingress-from-monitoring\" action: \"Deny\" from: - namespaces: namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-monitoring namespace: tenant 1 spec: podSelector: policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring",
"POD=USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-control-plane -o name | head -1 | awk -F '/' '{print USDNF}')",
"oc cp -n openshift-ovn-kubernetes USDPOD:/usr/bin/ovnkube-trace -c ovnkube-cluster-manager ovnkube-trace",
"chmod +x ovnkube-trace",
"./ovnkube-trace -help",
"Usage of ./ovnkube-trace: -addr-family string Address family (ip4 or ip6) to be used for tracing (default \"ip4\") -dst string dest: destination pod name -dst-ip string destination IP address (meant for tests to external targets) -dst-namespace string k8s namespace of dest pod (default \"default\") -dst-port string dst-port: destination port (default \"80\") -kubeconfig string absolute path to the kubeconfig file -loglevel string loglevel: klog level (default \"0\") -ovn-config-namespace string namespace used by ovn-config itself -service string service: destination service name -skip-detrace skip ovn-detrace command -src string src: source pod name -src-namespace string k8s namespace of source pod (default \"default\") -tcp use tcp transport protocol -udp use udp transport protocol",
"oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels=\"app=web\" --expose --port=80",
"get pods -n openshift-dns",
"NAME READY STATUS RESTARTS AGE dns-default-8s42x 2/2 Running 0 5h8m dns-default-mdw6r 2/2 Running 0 4h58m dns-default-p8t5h 2/2 Running 0 4h58m dns-default-rl6nk 2/2 Running 0 5h8m dns-default-xbgqx 2/2 Running 0 5h8m dns-default-zv8f6 2/2 Running 0 4h58m node-resolver-62jjb 1/1 Running 0 5h8m node-resolver-8z4cj 1/1 Running 0 4h59m node-resolver-bq244 1/1 Running 0 5h8m node-resolver-hc58n 1/1 Running 0 4h59m node-resolver-lm6z4 1/1 Running 0 5h8m node-resolver-zfx5k 1/1 Running 0 5h",
"./ovnkube-trace -src-namespace default \\ 1 -src web \\ 2 -dst-namespace openshift-dns \\ 3 -dst dns-default-p8t5h \\ 4 -udp -dst-port 53 \\ 5 -loglevel 0 6",
"ovn-trace source pod to destination pod indicates success from web to dns-default-p8t5h ovn-trace destination pod to source pod indicates success from dns-default-p8t5h to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-p8t5h ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-p8t5h to web ovn-detrace source pod to destination pod indicates success from web to dns-default-p8t5h ovn-detrace destination pod to source pod indicates success from dns-default-p8t5h to web",
"ovn-trace source pod to destination pod indicates success from web to dns-default-8s42x ovn-trace (remote) source pod to destination pod indicates success from web to dns-default-8s42x ovn-trace destination pod to source pod indicates success from dns-default-8s42x to web ovn-trace (remote) destination pod to source pod indicates success from dns-default-8s42x to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-8s42x ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-8s42x to web ovn-detrace source pod to destination pod indicates success from web to dns-default-8s42x ovn-detrace destination pod to source pod indicates success from dns-default-8s42x to web",
"./ovnkube-trace -src-namespace default -src web -dst-namespace openshift-dns -dst dns-default-467qw -udp -dst-port 53 -loglevel 2",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default spec: podSelector: {} ingress: []",
"oc apply -f deny-by-default.yaml",
"networkpolicy.networking.k8s.io/deny-by-default created",
"oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels=\"app=web\" --expose --port=80",
"oc create namespace prod",
"oc label namespace/prod purpose=production",
"oc run test-6459 --namespace=prod --rm -i -t --image=alpine -- sh",
"./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 0",
"ovn-trace source pod to destination pod indicates failure from test-6459 to web",
"./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 2",
"------------------------------------------------ 3. ls_out_acl_hint (northd.c:7454): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid 12efc456 reg0[8] = 1; reg0[10] = 1; next; 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 0, priority 500, uuid 69372c5d reg8[30..31] = 1; next(4); 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 1, priority 500, uuid 2fa0af89 reg8[30..31] = 2; next(4); 4. ls_out_acl_eval (northd.c:7691): reg8[30..31] == 2 && reg0[10] == 1 && (outport == @a16982411286042166782_ingressDefaultDeny), priority 2000, uuid 447d0dab reg8[17] = 1; ct_commit { ct_mark.blocked = 1; }; 1 next;",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production",
"oc apply -f web-allow-prod.yaml",
"./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 0",
"ovn-trace source pod to destination pod indicates success from test-6459 to web ovn-trace destination pod to source pod indicates success from web to test-6459 ovs-appctl ofproto/trace source pod to destination pod indicates success from test-6459 to web ovs-appctl ofproto/trace destination pod to source pod indicates success from web to test-6459 ovn-detrace source pod to destination pod indicates success from test-6459 to web ovn-detrace destination pod to source pod indicates success from web to test-6459",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml",
"#!/bin/bash if [ -n \"USDOVN_SDN_MIGRATION_TIMEOUT\" ] && [ \"USDOVN_SDN_MIGRATION_TIMEOUT\" = \"0s\" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout \"USDco_timeout\" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo \"Some ClusterOperators Degraded=False,Progressing=True,or Available=False\"; done EOT",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{\"spec\":{\"migration\":null}}'",
"oc get nncp",
"NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured",
"oc delete nncp <nncp_manifest_filename>",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\" } } }'",
"oc get mcp",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":<mtu>, \"genevePort\":<port>, \"v4InternalSubnet\":\"<ipv4_subnet>\" }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":1200 }}}}'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OVNKubernetes\" } }'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"clusterNetwork\": [ { \"cidr\": \"<cidr>\", \"hostPrefix\": <prefix> } ], \"networkType\": \"OVNKubernetes\" } }'",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"openshiftSDNConfig\": null } } }'",
"oc delete namespace openshift-sdn",
"oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": true } }'",
"oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\":{ \"paused\": true } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc get Network.config cluster -o jsonpath='{.status.migration}'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\" } } }'",
"oc get Network.config cluster -o jsonpath='{.status.migration.networkType}'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OpenShiftSDN\" } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":<mtu>, \"vxlanPort\":<port> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":1200 }}}}'",
"#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": false } }'",
"oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\": { \"paused\": false } }'",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml",
"oc get Network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"ovnKubernetesConfig\":null } } }'",
"oc delete namespace openshift-ovn-kubernetes",
"oc get Network.config.openshift.io cluster -o yaml > cluster-kuryr.yaml",
"CLUSTERID=USD(oc get infrastructure.config.openshift.io cluster -o=jsonpath='{.status.infrastructureName}')",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": {\"migration\": {\"networkType\": \"OVNKubernetes\"}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":<mtu>, \"genevePort\":<port>, \"v4InternalSubnet\":\"<ipv4_subnet>\", \"v6InternalSubnet\":\"<ipv6_subnet>\" }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":1200 }}}}'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b 1 machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b 2 machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc patch Network.config.openshift.io cluster --type=merge --patch '{\"spec\": {\"networkType\": \"OVNKubernetes\"}}'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"clusterNetwork\": [ { \"cidr\": \"<cidr>\", \"hostPrefix\": \"<prefix>\" } ] \"networkType\": \"OVNKubernetes\" } }'",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"for name in USD(openstack server list --name \"USD{CLUSTERID}*\" -f value -c Name); do openstack server reboot \"USD{name}\"; done",
"oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": {\"migration\": null}}'",
"python3 -m venv /tmp/venv",
"source /tmp/venv/bin/activate",
"(venv) USD pip install --upgrade pip",
"(venv) USD pip install openstacksdk==0.54.0 python-openstackclient==5.5.0 python-octaviaclient==2.3.0 'python-neutronclient<9.0.0'",
"(venv) USD CLUSTERID=USD(oc get infrastructure.config.openshift.io cluster -o=jsonpath='{.status.infrastructureName}')",
"(venv) USD CLUSTERTAG=\"openshiftClusterID=USD{CLUSTERID}\"",
"(venv) USD ROUTERID=USD(oc get kuryrnetwork -A --no-headers -o custom-columns=\":status.routerId\"|uniq)",
"(venv) USD function REMFIN { local resource=USD1 local finalizer=USD2 for res in USD(oc get \"USD{resource}\" -A --template='{{range USDi,USDp := .items}}{{ USDp.metadata.name }}|{{ USDp.metadata.namespace }}{{\"\\n\"}}{{end}}'); do name=USD{res%%|*} ns=USD{res##*|} yaml=USD(oc get -n \"USD{ns}\" \"USD{resource}\" \"USD{name}\" -o yaml) if echo \"USD{yaml}\" | grep -q \"USD{finalizer}\"; then echo \"USD{yaml}\" | grep -v \"USD{finalizer}\" | oc replace -n \"USD{ns}\" \"USD{resource}\" \"USD{name}\" -f - fi done }",
"(venv) USD REMFIN services kuryr.openstack.org/service-finalizer",
"(venv) USD if oc get -n openshift-kuryr service service-subnet-gateway-ip &>/dev/null; then oc -n openshift-kuryr delete service service-subnet-gateway-ip fi",
"(venv) USD for lb in USD(openstack loadbalancer list --tags \"USD{CLUSTERTAG}\" -f value -c id); do openstack loadbalancer delete --cascade \"USD{lb}\" done",
"(venv) USD REMFIN kuryrloadbalancers.openstack.org kuryr.openstack.org/kuryrloadbalancer-finalizers",
"(venv) USD oc delete namespace openshift-kuryr",
"(venv) USD openstack router remove subnet \"USD{ROUTERID}\" \"USD{CLUSTERID}-kuryr-service-subnet\"",
"(venv) USD openstack network delete \"USD{CLUSTERID}-kuryr-service-network\"",
"(venv) USD REMFIN pods kuryr.openstack.org/pod-finalizer",
"(venv) USD REMFIN kuryrports.openstack.org kuryr.openstack.org/kuryrport-finalizer",
"(venv) USD REMFIN networkpolicy kuryr.openstack.org/networkpolicy-finalizer",
"(venv) USD REMFIN kuryrnetworkpolicies.openstack.org kuryr.openstack.org/networkpolicy-finalizer",
"(venv) USD mapfile trunks < <(python -c \"import openstack; n = openstack.connect().network; print('\\n'.join([x.id for x in n.trunks(any_tags='USDCLUSTERTAG')]))\") && i=0 && for trunk in \"USD{trunks[@]}\"; do trunk=USD(echo \"USDtrunk\"|tr -d '\\n') i=USD((i+1)) echo \"Processing trunk USDtrunk, USD{i}/USD{#trunks[@]}.\" subports=() for subport in USD(python -c \"import openstack; n = openstack.connect().network; print(' '.join([x['port_id'] for x in n.get_trunk('USDtrunk').sub_ports if 'USDCLUSTERTAG' in n.get_port(x['port_id']).tags]))\"); do subports+=(\"USDsubport\"); done args=() for sub in \"USD{subports[@]}\" ; do args+=(\"--subport USDsub\") done if [ USD{#args[@]} -gt 0 ]; then openstack network trunk unset USD{args[*]} \"USD{trunk}\" fi done",
"(venv) USD mapfile -t kuryrnetworks < <(oc get kuryrnetwork -A --template='{{range USDi,USDp := .items}}{{ USDp.status.netId }}|{{ USDp.status.subnetId }}{{\"\\n\"}}{{end}}') && i=0 && for kn in \"USD{kuryrnetworks[@]}\"; do i=USD((i+1)) netID=USD{kn%%|*} subnetID=USD{kn##*|} echo \"Processing network USDnetID, USD{i}/USD{#kuryrnetworks[@]}\" # Remove all ports from the network. for port in USD(python -c \"import openstack; n = openstack.connect().network; print(' '.join([x.id for x in n.ports(network_id='USDnetID') if x.device_owner != 'network:router_interface']))\"); do ( openstack port delete \"USD{port}\" ) & # Only allow 20 jobs in parallel. if [[ USD(jobs -r -p | wc -l) -ge 20 ]]; then wait -n fi done wait # Remove the subnet from the router. openstack router remove subnet \"USD{ROUTERID}\" \"USD{subnetID}\" # Remove the network. openstack network delete \"USD{netID}\" done",
"(venv) USD openstack security group delete \"USD{CLUSTERID}-kuryr-pods-security-group\"",
"(venv) USD for subnetpool in USD(openstack subnet pool list --tags \"USD{CLUSTERTAG}\" -f value -c ID); do openstack subnet pool delete \"USD{subnetpool}\" done",
"(venv) USD networks=USD(oc get kuryrnetwork -A --no-headers -o custom-columns=\":status.netId\") && for existingNet in USD(openstack network list --tags \"USD{CLUSTERTAG}\" -f value -c ID); do if [[ USDnetworks =~ USDexistingNet ]]; then echo \"Network still exists: USDexistingNet\" fi done",
"(venv) USD for sgid in USD(openstack security group list -f value -c ID -c Description | grep 'Kuryr-Kubernetes Network Policy' | cut -f 1 -d ' '); do openstack security group delete \"USD{sgid}\" done",
"(venv) USD REMFIN kuryrnetworks.openstack.org kuryrnetwork.finalizers.kuryr.openstack.org",
"(venv) USD if python3 -c \"import sys; import openstack; n = openstack.connect().network; r = n.get_router('USDROUTERID'); sys.exit(0) if r.description != 'Created By OpenShift Installer' else sys.exit(1)\"; then openstack router delete \"USD{ROUTERID}\" fi",
"- op: add path: /spec/clusterNetwork/- value: 1 cidr: fd01::/48 hostPrefix: 64 - op: add path: /spec/serviceNetwork/- value: fd02::/112 2",
"oc patch network.config.openshift.io cluster --type='json' --patch-file <file>.yaml",
"network.config.openshift.io/cluster patched",
"oc describe network",
"Status: Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cidr: fd01::/48 Host Prefix: 64 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 172.30.0.0/16 fd02::/112",
"oc edit networks.config.openshift.io",
"oc patch network.operator.openshift.io cluster --type='merge' -p='{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\": {\"ipv4\":{\"internalJoinSubnet\": \"<join_subnet>\"}, \"ipv6\":{\"internalJoinSubnet\": \"<join_subnet>\"}}}}}'",
"network.operator.openshift.io/cluster patched",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.defaultNetwork}\"",
"{ \"ovnKubernetesConfig\": { \"ipv4\": { \"internalJoinSubnet\": \"100.64.1.0/16\" }, }, \"type\": \"OVNKubernetes\" }",
"oc patch network.operator.openshift.io cluster --type='merge' -p='{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\": {\"ipv4\":{\"internalTransitSwitchSubnet\": \"<transit_subnet>\"}, \"ipv6\":{\"internalTransitSwitchSubnet\": \"<transit_subnet>\"}}}}}'",
"network.operator.openshift.io/cluster patched",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.defaultNetwork}\"",
"{ \"ovnKubernetesConfig\": { \"ipv4\": { \"internalTransitSwitchSubnet\": \"100.88.1.0/16\" }, }, \"type\": \"OVNKubernetes\" }",
"kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"info\", \"allow\": \"info\" }",
"<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name=\"<acl_name>\", verdict=\"<verdict>\", severity=\"<severity>\", direction=\"<direction>\": <flow>",
"<proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags>",
"2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0",
"oc edit network.operator.openshift.io/cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0",
"cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\" }' EOF",
"namespace/verify-audit-logging created",
"cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace namespace: verify-audit-logging spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: verify-audit-logging EOF",
"networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created",
"cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF",
"for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF done",
"pod/client created pod/server created",
"POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')",
"oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP",
"PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms",
"oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP",
"PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done",
"2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0",
"oc annotate namespace <namespace> k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"notice\" }'",
"kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"alert\", \"allow\": \"notice\" }",
"namespace/verify-audit-logging annotated",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done",
"2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0",
"oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-",
"kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null",
"namespace/verify-audit-logging annotated",
"oc patch networks.operator.openshift.io cluster --type=merge -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"ipsecConfig\":{ }}}}}'",
"oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node",
"ovnkube-node-5xqbf 8/8 Running 0 28m ovnkube-node-6mwcx 8/8 Running 0 29m ovnkube-node-ck5fr 8/8 Running 0 31m ovnkube-node-fr4ld 8/8 Running 0 26m ovnkube-node-wgs4l 8/8 Running 0 33m ovnkube-node-zfvcl 8/8 Running 0 34m",
"oc -n openshift-ovn-kubernetes rsh ovnkube-node-<pod_number_sequence> ovn-nbctl --no-leader-only get nb_global . ipsec 1",
"oc patch networks.operator.openshift.io/cluster --type=json -p='[{\"op\":\"remove\", \"path\":\"/spec/defaultNetwork/ovnKubernetesConfig/ipsecConfig\"}]'",
"oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node",
"ovnkube-node-5xqbf 8/8 Running 0 28m ovnkube-node-6mwcx 8/8 Running 0 29m ovnkube-node-ck5fr 8/8 Running 0 31m",
"oc -n openshift-ovn-kubernetes rsh ovnkube-node-<pod_number_sequence> ovn-nbctl --no-leader-only get nb_global . ipsec 1",
"oc delete daemonset ovn-ipsec-host -n openshift-ovn-kubernetes 1",
"oc delete daemonset ovn-ipsec-containerized -n openshift-ovn-kubernetes 1",
"oc get pods -n openshift-ovn-kubernetes -l=app=ovn-ipsec",
"for role in master worker; do cat >> \"99-ipsec-USD{role}-endpoint-config.bu\" <<-EOF variant: openshift version: 4.14.0 metadata: name: 99-USD{role}-import-certs-enable-svc-os-ext labels: machineconfiguration.openshift.io/role: USDrole openshift: extensions: - ipsec systemd: units: - name: ipsec-import.service enabled: true contents: | [Unit] Description=Import external certs into ipsec NSS Before=ipsec.service [Service] Type=oneshot ExecStart=/usr/local/bin/ipsec-addcert.sh RemainAfterExit=false StandardOutput=journal [Install] WantedBy=multi-user.target - name: ipsecenabler.service enabled: true contents: | [Service] Type=oneshot ExecStart=systemctl enable --now ipsec.service [Install] WantedBy=multi-user.target storage: files: - path: /etc/ipsec.d/ipsec-endpoint-config.conf mode: 0400 overwrite: true contents: local: ipsec-endpoint-config.conf - path: /etc/pki/certs/ca.pem mode: 0400 overwrite: true contents: local: ca.pem - path: /etc/pki/certs/left_server.p12 mode: 0400 overwrite: true contents: local: left_server.p12 - path: /usr/local/bin/ipsec-addcert.sh mode: 0740 overwrite: true contents: inline: | #!/bin/bash -e echo \"importing cert to NSS\" certutil -A -n \"CA\" -t \"CT,C,C\" -d /var/lib/ipsec/nss/ -i /etc/pki/certs/ca.pem pk12util -W \"\" -i /etc/pki/certs/left_server.p12 -d /var/lib/ipsec/nss/ certutil -M -n \"left_server\" -t \"u,u,u\" -d /var/lib/ipsec/nss/ EOF done",
"for role in master worker; do butane -d . 99-ipsec-USD{role}-endpoint-config.bu -o ./99-ipsec-USDrole-endpoint-config.yaml done",
"for role in master worker; do oc apply -f 99-ipsec-USD{role}-endpoint-config.yaml done",
"oc get mcp",
"from: namespaceSelector: matchLabels: kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059",
"apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: default-route-policy spec: from: namespaceSelector: matchLabels: kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059 nextHops: static: - ip: \"172.18.0.8\" - ip: \"172.18.0.9\"",
"apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: shadow-traffic-policy spec: from: namespaceSelector: matchLabels: externalTraffic: \"\" nextHops: dynamic: - podSelector: matchLabels: gatewayPod: \"\" namespaceSelector: matchLabels: shadowTraffic: \"\" networkAttachmentName: shadow-gateway - podSelector: matchLabels: gigabyteGW: \"\" namespaceSelector: matchLabels: gatewayNamespace: \"\" networkAttachmentName: gateway",
"apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: multi-hop-policy spec: from: namespaceSelector: matchLabels: trafficType: \"egress\" nextHops: static: - ip: \"172.18.0.8\" - ip: \"172.18.0.9\" dynamic: - podSelector: matchLabels: gatewayPod: \"\" namespaceSelector: matchLabels: egressTraffic: \"\" networkAttachmentName: gigabyte",
"oc create -f <file>.yaml",
"adminpolicybasedexternalroute.k8s.ovn.org/default-route-policy created",
"oc describe apbexternalroute <name> | tail -n 6",
"Status: Last Transition Time: 2023-04-24T15:09:01Z Messages: Configured external gateway IPs: 172.18.0.8 Status: Success Events: <none>",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <name> 1 spec: egress: 2",
"egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 nodeSelector: <label_name>: <label_value> 5 ports: 6",
"ports: - port: <port> 1 protocol: <protocol> 2",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Deny to: cidrSelector: 0.0.0.0/0",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - type: Deny to: cidrSelector: 172.16.1.1/32 ports: - port: 80 protocol: TCP - port: 443",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - to: nodeSelector: matchLabels: region: east type: Allow",
"oc create -f <policy_name>.yaml -n <project>",
"oc create -f default.yaml -n project1",
"egressfirewall.k8s.ovn.org/v1 created",
"oc get egressfirewall --all-namespaces",
"oc describe egressfirewall <policy_name>",
"Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0",
"oc get -n <project> egressfirewall",
"oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml",
"oc replace -f <filename>.yaml",
"oc get -n <project> egressfirewall",
"oc delete -n <project> egressfirewall <name>",
"IP capacity = public cloud default capacity - sum(current IP assignments)",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"eni-078d267045138e436\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ipv4\":14,\"ipv6\":15} } ]",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"nic0\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ip\":14} } ]",
"ip -details link show",
"spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: gatewayConfig: ipForwarding: Global",
"apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: items: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> 1 spec: egressIPs: 2 - <ip_address> namespaceSelector: 3 podSelector: 4",
"namespaceSelector: 1 matchLabels: <label_name>: <label_value>",
"podSelector: 1 matchLabels: <label_name>: <label_value>",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: egressIPConfig: 1 reachabilityTotalTimeoutSeconds: 5 2 gatewayConfig: routingViaHost: false genevePort: 6081",
"oc label nodes <node_name> k8s.ovn.org/egress-assignable=\"\" 1",
"apiVersion: v1 kind: Node metadata: labels: k8s.ovn.org/egress-assignable: \"\" name: <node_name>",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-project1 spec: egressIPs: - 192.168.127.10 - 192.168.127.11 namespaceSelector: matchLabels: env: qa",
"oc apply -f <egressips_name>.yaml 1",
"egressips.k8s.ovn.org/<egressips_name> created",
"oc label ns <namespace> env=qa 1",
"oc get egressip -o yaml",
"spec: egressIPs: - 192.168.127.10 - 192.168.127.11",
"apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: <egress_service_name> 1 namespace: <namespace> 2 spec: sourceIPBy: <egress_traffic_ip> 3 nodeSelector: 4 matchLabels: node-role.kubernetes.io/<role>: \"\" network: <egress_traffic_network> 5",
"apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: test-egress-service namespace: test-namespace spec: sourceIPBy: \"LoadBalancerIP\" nodeSelector: matchLabels: vrf: \"true\" network: \"2\"",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: example-pool namespace: metallb-system spec: addresses: - 172.19.0.100/32",
"oc apply -f ip-addr-pool.yaml",
"apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace annotations: metallb.universe.tf/address-pool: example-pool 1 spec: selector: app: example ports: - name: http protocol: TCP port: 8080 targetPort: 8080 type: LoadBalancer --- apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: example-service namespace: example-namespace spec: sourceIPBy: \"LoadBalancerIP\" 2 nodeSelector: 3 matchLabels: node-role.kubernetes.io/worker: \"\"",
"oc apply -f service-egress-service.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example-bgp-adv namespace: metallb-system spec: ipAddressPools: - example-pool nodeSelector: - matchLabels: egress-service.k8s.ovn.org/example-namespace-example-service: \"\" 1",
"curl <external_ip_address>:<port_number> 1",
"curl <router_service_IP> <port>",
"openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>",
"apiVersion: v1 kind: Service metadata: name: app-egress spec: ports: - name: tcp-8080 protocol: TCP port: 8080 - name: tcp-8443 protocol: TCP port: 8443 - name: udp-80 protocol: UDP port: 80 type: ClusterIP selector: app: egress-router-cni",
"apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: <egress_router_name> namespace: <namespace> 1 spec: addresses: [ 2 { ip: \"<egress_router>\", 3 gateway: \"<egress_gateway>\" 4 } ] mode: Redirect redirect: { redirectRules: [ 5 { destinationIP: \"<egress_destination>\", port: <egress_router_port>, targetPort: <target_port>, 6 protocol: <network_protocol> 7 }, ], fallbackIP: \"<egress_destination>\" 8 }",
"apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: egress-router-redirect spec: networkInterface: { macvlan: { mode: \"Bridge\" } } addresses: [ { ip: \"192.168.12.99/24\", gateway: \"192.168.12.1\" } ] mode: Redirect redirect: { redirectRules: [ { destinationIP: \"10.0.0.99\", port: 80, protocol: UDP }, { destinationIP: \"203.0.113.26\", port: 8080, targetPort: 80, protocol: TCP }, { destinationIP: \"203.0.113.27\", port: 8443, targetPort: 443, protocol: TCP } ] }",
"apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: web-app protocol: TCP port: 8080 type: ClusterIP selector: app: egress-router-cni 1",
"oc get network-attachment-definition egress-router-cni-nad",
"NAME AGE egress-router-cni-nad 18m",
"oc get deployment egress-router-cni-deployment",
"NAME READY UP-TO-DATE AVAILABLE AGE egress-router-cni-deployment 1/1 1 1 18m",
"oc get pods -l app=egress-router-cni",
"NAME READY STATUS RESTARTS AGE egress-router-cni-deployment-575465c75c-qkq6m 1/1 Running 0 18m",
"POD_NODENAME=USD(oc get pod -l app=egress-router-cni -o jsonpath=\"{.items[0].spec.nodeName}\")",
"oc debug node/USDPOD_NODENAME",
"chroot /host",
"cat /tmp/egress-router-log",
"2021-04-26T12:27:20Z [debug] Called CNI ADD 2021-04-26T12:27:20Z [debug] Gateway: 192.168.12.1 2021-04-26T12:27:20Z [debug] IP Source Addresses: [192.168.12.99/24] 2021-04-26T12:27:20Z [debug] IP Destinations: [80 UDP 10.0.0.99/30 8080 TCP 203.0.113.26/30 80 8443 TCP 203.0.113.27/30 443] 2021-04-26T12:27:20Z [debug] Created macvlan interface 2021-04-26T12:27:20Z [debug] Renamed macvlan to \"net1\" 2021-04-26T12:27:20Z [debug] Adding route to gateway 192.168.12.1 on macvlan interface 2021-04-26T12:27:20Z [debug] deleted default route {Ifindex: 3 Dst: <nil> Src: <nil> Gw: 10.128.10.1 Flags: [] Table: 254} 2021-04-26T12:27:20Z [debug] Added new default route with gateway 192.168.12.1 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p UDP --dport 80 -j DNAT --to-destination 10.0.0.99 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8080 -j DNAT --to-destination 203.0.113.26:80 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8443 -j DNAT --to-destination 203.0.113.27:443 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat -o net1 -j SNAT --to-source 192.168.12.99",
"crictl ps --name egress-router-cni-pod | awk '{print USD1}'",
"CONTAINER bac9fae69ddb6",
"crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print USD2}'",
"68857",
"nsenter -n -t 68857",
"ip route",
"default via 192.168.12.1 dev net1 10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18 192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99 192.168.12.1 dev net1",
"oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled=true",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: \"true\"",
"oc project <project>",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF",
"POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')",
"oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname",
"CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')",
"oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"",
"mlistener",
"oc annotate namespace <namespace> \\ 1 k8s.ovn.org/multicast-enabled-",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: null",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056",
"spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056",
"oc patch network.operator cluster --type merge -p \"USD(cat <file_name>.yaml)\"",
"network.operator.openshift.io/cluster patched",
"oc get network.operator cluster -o jsonpath=\"{.spec.exportNetworkFlows}\"",
"{\"netFlow\":{\"collectors\":[\"192.168.1.99:2056\"]}}",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o jsonpath='{[email protected][*]}{.metadata.name}{\"\\n\"}{end}'); do ; echo; echo USDpod; oc -n openshift-ovn-kubernetes exec -c ovnkube-controller USDpod -- bash -c 'for type in ipfix sflow netflow ; do ovs-vsctl find USDtype ; done'; done",
"ovnkube-node-xrn4p _uuid : a4d2aaca-5023-4f3d-9400-7275f92611f9 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : [\"192.168.1.99:2056\"] ovnkube-node-z4vq9 _uuid : 61d02fdb-9228-4993-8ff5-b27f01a29bd6 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : [\"192.168.1.99:2056\"]-",
"oc patch network.operator cluster --type='json' -p='[{\"op\":\"remove\", \"path\":\"/spec/exportNetworkFlows\"}]'",
"network.operator.openshift.io/cluster patched",
"oc patch networks.operator.openshift.io cluster --type=merge -p '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"hybridOverlayConfig\":{ \"hybridClusterNetwork\":[ { \"cidr\": \"<cidr>\", \"hostPrefix\": <prefix> } ], \"hybridOverlayVXLANPort\": <overlay_port> } } } } }'",
"network.operator.openshift.io/cluster patched",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.defaultNetwork.ovnKubernetesConfig}\""
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/ovn-kubernetes-network-plugin |
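The migration procedures above verify progress by re-running oc get mcp and the network-type query by hand. A minimal watch loop can perform the same checks unattended — a sketch only, assuming a logged-in oc session with cluster-admin rights; the 30-second poll interval and the OVNKubernetes target value are illustrative assumptions, not from the original document:

#!/bin/bash
# Wait until the cluster reports the expected network type, then until
# every MachineConfigPool has finished rolling out its configuration.
expected="OVNKubernetes"

until [ "$(oc get network.config/cluster -o jsonpath='{.status.networkType}')" = "$expected" ]; do
    echo "waiting for network type to become $expected"
    sleep 30
done

# A pool is done when its Updated condition is True; keep polling while
# any pool still reports False.
while oc get mcp -o jsonpath='{range .items[*]}{.metadata.name}={.status.conditions[?(@.type=="Updated")].status}{"\n"}{end}' | grep -q '=False'; do
    echo "machine config pools still updating"
    sleep 30
done

echo "migration settled: network type is $expected and all machine config pools are updated"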
5.5.3. Directory Structure | 5.5.3. Directory Structure Many system administrators give little thought to how the storage they make available to users today is actually going to be used tomorrow. However, a bit of thought spent on this matter before handing over the storage to users can save a great deal of unnecessary effort later on. The main thing that system administrators can do is to use directories and subdirectories to structure the storage available in an understandable way. There are several benefits to this approach: More easily understood More flexibility in the future By enforcing some level of structure on your storage, it can be more easily understood. For example, consider a large multi-user system. Instead of placing all user directories in one large directory, it might make sense to use subdirectories that mirror your organization's structure. In this way, people that work in accounting have their directories under a directory named accounting , people that work in engineering would have their directories under engineering , and so on. The benefits of such an approach are that it would be easier on a day-to-day basis to keep track of the storage needs (and usage) for each part of your organization. Obtaining a listing of the files used by everyone in human resources is straightforward. Backing up all the files used by the legal department is easy. With the appropriate structure, flexibility is increased. To continue using the example, assume for a moment that the engineering department is due to take on several large new projects. Because of this, many new engineers are to be hired in the near future. However, there is currently not enough free storage available to support the expected additions to engineering. However, since every person in engineering has their files stored under the engineering directory, it would be a straightforward process to: Procure the additional storage necessary to support engineering Back up everything under the engineering directory Restore the backup onto the new storage Rename the engineering directory on the original storage to something like engineering-archive (before deleting it entirely after running smoothly with the new configuration for a month) Make the necessary changes so that all engineering personnel can access their files on the new storage Of course, such an approach does have its shortcomings. For example, if people frequently move between departments, you must have a way of being informed of such transfers, and you must modify the directory structure appropriately. Otherwise, the structure no longer reflects reality, which makes more work -- not less -- for you in the long run. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-storage-usable-dirs
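The reorganization this record describes only in prose is easy to express as a shell sketch. The paths (/home, /mnt/new-storage), the rsync copy, and the compatibility symlink are illustrative assumptions, not part of the original text:

#!/bin/bash
# Mirror the organization's structure with per-department subdirectories.
for dept in accounting engineering legal; do
    mkdir -p "/home/${dept}"
done

# Migrate engineering onto newly procured storage, keeping the old copy
# around for a month as an archive before deleting it entirely.
rsync -a /home/engineering/ /mnt/new-storage/engineering/
mv /home/engineering /home/engineering-archive
# One way to keep everyone's existing paths working on the new storage:
ln -s /mnt/new-storage/engineering /home/engineering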
Chapter 4. Installing RHEL AI on IBM Cloud | Chapter 4. Installing RHEL AI on IBM Cloud For installing and deploying Red Hat Enterprise Linux AI on IBM Cloud, you must first convert the RHEL AI image into an IBM Cloud image. You can then launch an instance using the IBM Cloud image and deploy RHEL AI on an IBM Cloud machine. Important Red Hat Enterprise Linux AI version 1.1 currently supports only inference serving on IBM Cloud. 4.1. Converting the RHEL AI image into an IBM Cloud image. To create a bootable image in IBM Cloud you must configure your IBM Cloud accounts, set up a Cloud Object Storage (COS) bucket, and create an IBM Cloud image using the RHEL AI image. Prerequisites You installed the IBM Cloud CLI on your specific machine, see Installing the stand-alone IBM Cloud CLI . Procedure Log in to IBM Cloud with the following command: $ ibmcloud login When prompted, select your desired account to log in to. Example output of the login $ ibmcloud login API endpoint: https://cloud.ibm.com Region: us-east Get a one-time code from https://identity-1.eu-central.iam.cloud.ibm.com/identity/passcode to proceed. Open the URL in the default browser? [Y/n] > One-time code > Authenticating... OK Select an account: 1. <account-name> 2. <account-name-2> API endpoint: https://cloud.ibm.com Region: us-east User: <user-name> Account: <selected-account> Resource group: No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP' You need to set up various IBM Cloud configurations and create your COS bucket before creating an IBM Cloud image from the QCOW2 image. You can install the necessary IBM Cloud plugins by running the following command: $ ibmcloud plugin install cloud-object-storage infrastructure-service Set your preferred resource group; the following example command sets the resource group named Default . $ ibmcloud target -g Default Set your preferred region; the following example command sets the us-east region. $ ibmcloud target -r us-east You need to select a deployment plan for your service instance. Ensure you check the properties and pricing on the IBM Cloud website. 
You can list the available deployment plans by running the following command: $ ibmcloud catalog service cloud-object-storage --output json | jq -r '.[].children[] | select(.children != null) | .children[].name' The following example command uses the premium-global-deployment plan and puts it in the environment variable cos_deploy_plan : $ cos_deploy_plan=premium-global-deployment Create a Cloud Object Storage (COS) service instance: save its name in an environment variable named cos_si_name , then create the cloud-object-storage service instance by running the following commands: $ cos_si_name=THE_NAME_OF_YOUR_SERVICE_INSTANCE $ ibmcloud resource service-instance-create ${cos_si_name} cloud-object-storage standard global -d ${cos_deploy_plan} Get the Cloud Resource Name (CRN) for your Cloud Object Storage (COS) bucket in a variable named cos_crn by running the following commands: $ cos_crn=$(ibmcloud resource service-instance ${cos_si_name} --output json| jq -r '.[] | select(.crn | contains("cloud-object-storage")) | .crn') $ ibmcloud cos config crn --crn ${cos_crn} --force Create your Cloud Object Storage (COS) bucket, with its name held in the environment variable bucket_name , by running the following commands: $ bucket_name=NAME_OF_MY_BUCKET $ ibmcloud cos bucket-create --bucket ${bucket_name} Allow the infrastructure service to read the buckets that are in the service instance referenced by the ${cos_si_guid} variable by running the following commands: $ cos_si_guid=$(ibmcloud resource service-instance ${cos_si_name} --output json| jq -r '.[] | select(.crn | contains("cloud-object-storage")) | .guid') $ ibmcloud iam authorization-policy-create is cloud-object-storage Reader --source-resource-type image --target-service-instance-id ${cos_si_guid} Now that your COS bucket is set up, you need to download the QCOW2 image from the Red Hat Enterprise Linux AI download page. Copy the QCOW2 image link and add it to the following command: $ curl -Lo disk.qcow2 "PASTE_HERE_THE_LINK_OF_THE_QCOW2_FILE" Set the name you want to use as the RHEL AI IBM Cloud image: $ image_name=rhel-ai-20240703v0 Upload the QCOW2 image to the Cloud Object Storage (COS) bucket by running the following command: $ ibmcloud cos upload --bucket ${bucket_name} --key ${image_name}.qcow2 --file disk.qcow2 --region <region> Convert the QCOW2 image you just uploaded to an IBM Cloud image with the following command: $ ibmcloud is image-create ${image_name} --file cos://<region>/${bucket_name}/${image_name}.qcow2 --os-name red-ai-9-amd64-nvidia-byol Once the job launches, store the ID of the new IBM Cloud image in a variable called image_id by running the following command: $ image_id=$(ibmcloud is images --visibility private --output json | jq -r '.[] | select(.name=="'$image_name'") | .id') You can view the progress of the job with the following command: $ while ibmcloud is image --output json ${image_id} | jq -r .status | grep -xq pending; do sleep 1; done You can view the information of the newly created image with the following command: $ ibmcloud is image ${image_id} 4.2. Deploying your instance on IBM Cloud using the CLI You can launch an instance with your new RHEL AI IBM Cloud image from the IBM Cloud web console or the CLI. You can use whichever method of deployment you want to launch your instance. 
The following procedure displays how you can use the CLI to launch an IBM Cloud instance with the custom IBM Cloud image. If you choose to use the CLI as a deployment option, there are several configurations you have to create, as shown in "Prerequisites". Prerequisites You created your RHEL AI IBM Cloud image. For more information, see "Converting the RHEL AI image into an IBM Cloud image". You installed the IBM Cloud CLI on your specific machine, see Installing the stand-alone IBM Cloud CLI . You configured your Virtual private cloud (VPC). You created a subnet for your instance. Procedure Log in to your IBM Cloud account and select the Account, Region and Resource Group by running the following command: $ ibmcloud login -c <ACCOUNT_ID> -r <REGION> -g <RESOURCE_GROUP> Before launching your IBM Cloud instance on the CLI, you need to create several configuration variables for your instance. Install the infrastructure-service plugin for IBM Cloud by running the following command: $ ibmcloud plugin install infrastructure-service You need to create an SSH public key for your IBM Cloud account. IBM Cloud supports RSA and ed25519 keys. The following example command uses the ed25519 key type and names the key file ibmcloud . $ ssh-keygen -f ibmcloud -t ed25519 You can now upload the public key to your IBM Cloud account by running the following example command. $ ibmcloud is key-create my-ssh-key @ibmcloud.pub --key-type ed25519 You need to create a Floating IP for your IBM Cloud instance by running the following example command. Ensure you replace <region> with your preferred zone. $ ibmcloud is floating-ip-reserve my-public-ip --zone <region> You need to select the instance profile that you want to use for the deployment. List all the profiles by running the following command: $ ibmcloud is instance-profiles Make a note of your preferred instance profile; you will need it for your instance deployment. You can now start creating your IBM Cloud instance. Populate environment variables for when you create the instance. name=my-rhelai-instance vpc=my-vpc-in-us-east zone=us-east-1 subnet=my-subnet-in-us-east-1 instance_profile=gx3-64x320x4l4 image=my-custom-rhelai-image sshkey=my-ssh-key floating_ip=my-public-ip disk_size=250 You can now launch your instance by running the following command: $ ibmcloud is instance-create \ $name \ $vpc \ $zone \ $instance_profile \ $subnet \ --image $image \ --keys $sshkey \ --boot-volume '{"name": "'${name}'-boot", "volume": {"name": "'${name}'-boot", "capacity": '${disk_size}', "profile": {"name": "general-purpose"}}}' \ --allow-ip-spoofing false Link the Floating IP to the instance by running the following command: $ ibmcloud is floating-ip-update $floating_ip --nic primary --in $name User account The default user account in the RHEL AI image is cloud-user . It has all permissions via sudo without a password. Verification To verify that your Red Hat Enterprise Linux AI tools are installed correctly, run the ilab command: $ ilab Example output $ ilab Usage: ilab [OPTIONS] COMMAND [ARGS]... CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. 
data Command Group for Interacting with the Data generated by... model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model model_list serve model serve sysinfo system info test model test train model train 4.3. Adding more storage to your IBM Cloud instance In IBM Cloud, there is a size restriction of 250 GB of storage on the main IBM Cloud disk. RHEL AI might require more storage for models and generation data. You can add more storage by attaching an extra disk to your instance and using it to hold data for RHEL AI. Prerequisites You have an IBM Cloud RHEL AI instance. Procedure Create an environment variable called name that has the name of your instance by running the following command: $ name=my-rhelai-instance Set the size of the new volume by running the following command: $ data_volume_size=1000 Create and attach the instance volume by running the following command: $ ibmcloud is instance-volume-attachment-add data ${name} \ --new-volume-name ${name}-data \ --profile general-purpose \ --capacity ${data_volume_size} You can list all the disks with the following command: $ lsblk Create a disk variable with the disk path you are using. The following example command uses the /dev/vdb path. $ disk=/dev/vdb Create a partition on your disk by running the following command: $ sgdisk -n 1:0:0 $disk Format and label the partition by running the following command: $ mkfs.xfs -L ilab-data ${disk}1 You can configure your system to automatically mount the disk to your preferred directory. The following example command uses the /mnt directory. $ echo LABEL=ilab-data /mnt xfs defaults 0 0 >> /etc/fstab Reload systemd so that it acknowledges the new mount configuration by running the following command: $ systemctl daemon-reload Mount the disk with the following command: $ mount -a Grant write permissions to all users in the new file system by running the following command: $ chmod 1777 /mnt/ 4.4. Adding a data storage directory to your instance By default RHEL AI holds configuration data in the $HOME directory. You can change this default to a different directory for holding InstructLab data. Prerequisites You have a Red Hat Enterprise Linux AI instance. You added an extra storage disk to your instance. Procedure You can configure the ILAB_HOME environment variable by writing it to the $HOME/.bash_profile file by running the following command: $ echo 'export ILAB_HOME=/mnt' >> $HOME/.bash_profile You can make that change effective by reloading the $HOME/.bash_profile file with the following command: $ source $HOME/.bash_profile | [
"ibmcloud login",
"ibmcloud login API endpoint: https://cloud.ibm.com Region: us-east Get a one-time code from https://identity-1.eu-central.iam.cloud.ibm.com/identity/passcode to proceed. Open the URL in the default browser? [Y/n] > One-time code > Authenticating OK Select an account: 1. <account-name> 2. <account-name-2> API endpoint: https://cloud.ibm.com Region: us-east User: <user-name> Account: <selected-account> Resource group: No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP'",
"ibmcloud plugin install cloud-object-storage infrastructure-service",
"ibmcloud target -g Default",
"ibmcloud target -r us-east",
"ibmcloud catalog service cloud-object-storage --output json | jq -r '.[].children[] | select(.children != null) | .children[].name'",
"cos_deploy_plan=premium-global-deployment",
"cos_si_name=THE_NAME_OF_YOUR_SERVICE_INSTANCE",
"ibmcloud resource service-instance-create USD{cos_si_name} cloud-object-storage standard global -d USD{cos_deploy_plan}",
"cos_crn=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .crn')",
"ibmcloud cos config crn --crn USD{cos_crn} --force",
"bucket_name=NAME_OF_MY_BUCKET",
"ibmcloud cos bucket-create --bucket USD{bucket_name}",
"cos_si_guid=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .guid')",
"ibmcloud iam authorization-policy-create is cloud-object-storage Reader --source-resource-type image --target-service-instance-id USD{cos_si_guid}",
"curl -Lo disk.qcow2 \"PASTE_HERE_THE_LINK_OF_THE_QCOW2_FILE\"",
"image_name=rhel-ai-20240703v0",
"ibmcloud cos upload --bucket USD{bucket_name} --key USD{image_name}.qcow2 --file disk.qcow2 --region <region>",
"ibmcloud is image-create USD{image_name} --file cos://<region>/USD{bucket_name}/USD{image_name}.qcow2 --os-name red-ai-9-amd64-nvidia-byol",
"image_id=USD(ibmcloud is images --visibility private --output json | jq -r '.[] | select(.name==\"'USDimage_name'\") | .id')",
"while ibmcloud is image --output json USD{image_id} | jq -r .status | grep -xq pending; do sleep 1; done",
"ibmcloud is image USD{image_id}",
"ibmcloud login -c <ACCOUNT_ID> -r <REGION> -g <RESOURCE_GROUP>",
"ibmcloud plugin install infrastructure-service",
"ssh-keygen -f ibmcloud -t ed25519",
"ibmcloud is key-create my-ssh-key @ibmcloud.pub --key-type ed25519",
"ibmcloud is floating-ip-reserve my-public-ip --zone <region>",
"ibmcloud is instance-profiles",
"name=my-rhelai-instance vpc=my-vpc-in-us-east zone=us-east-1 subnet=my-subnet-in-us-east-1 instance_profile=gx3-64x320x4l4 image=my-custom-rhelai-image sshkey=my-ssh-key floating_ip=my-public-ip disk_size=250",
"ibmcloud is instance-create USDname USDvpc USDzone USDinstance_profile USDsubnet --image USDimage --keys USDsshkey --boot-volume '{\"name\": \"'USD{name}'-boot\", \"volume\": {\"name\": \"'USD{name}'-boot\", \"capacity\": 'USD{disk_size}', \"profile\": {\"name\": \"general-purpose\"}}}' --allow-ip-spoofing false",
"ibmcloud is floating-ip-update USDfloating_ip --nic primary --in USDname",
"ilab",
"ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model model_list serve model serve sysinfo system info test model test train model train",
"name=my-rhelai-instance",
"data_volume_size=1000",
"ibmcloud is instance-volume-attachment-add data USD{name} --new-volume-name USD{name}-data --profile general-purpose --capacity USD{data_volume_size}",
"lsblk",
"disk=/dev/vdb",
"sgdisk -n 1:0:0 USDdisk",
"mkfs.xfs -L ilab-data USD{disk}1",
"echo LABEL=ilab-data /mnt xfs defaults 0 0 >> /etc/fstab",
"systemctl daemon-reload",
"mount -a",
"chmod 1777 /mnt/",
"echo 'export ILAB_HOME=/mnt' >> USDHOME/.bash_profile",
"source USDHOME/.bash_profile"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html/installing/installing_ibm_cloud |
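A quick sanity check after the storage procedure above is to confirm that the disk is mounted and that InstructLab will write to it. This is a minimal sketch; the /dev/vdb device and the /mnt mount point are the example values used in the procedure, so substitute your own if they differ:

$ findmnt /mnt
$ df -h /mnt
$ echo $ILAB_HOME
$ touch /mnt/.write-test && rm /mnt/.write-test

If findmnt shows the ilab-data label on an xfs file system and the write test succeeds without errors, the instance is ready to hold RHEL AI data.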
Chapter 10. Configuring your Logging deployment | Chapter 10. Configuring your Logging deployment 10.1. Configuring CPU and memory limits for logging components You can configure both the CPU and memory limits for each of the logging components as needed. 10.1.1. Configuring CPU and memory limits The logging components allow for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: $ oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed. 10.2. Configuring systemd-journald and Fluentd Because Fluentd reads from the journal, and the journal default settings are very low, journal entries can be lost when the journal cannot keep up with the logging rate from system services. We recommend setting RateLimitIntervalSec=30s and RateLimitBurst=10000 (or even higher if necessary) to prevent the journal from losing entries. 10.2.1. Configuring systemd-journald for OpenShift Logging As you scale up your project, the default logging environment might need some adjustments. For example, if you are missing logs, you might have to increase the rate limits for journald. You can adjust the number of messages to retain for a specified period of time to ensure that OpenShift Logging does not use excessive resources without dropping logs. You can also determine if you want the logs compressed, how long to retain logs, how or if the logs are stored, and other settings. Procedure Create a Butane config file, 40-worker-custom-journald.bu, that includes an /etc/systemd/journald.conf file with the required settings. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.15.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: "worker" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10 1 Set the permissions for the journald.conf file. It is recommended to set 0644 permissions. 2 Specify whether you want logs compressed before they are written to the file system. Specify yes to compress the message or no to not compress. The default is yes. 3 Configure whether to forward log messages. Defaults to no for each. Specify: ForwardToConsole to forward logs to the system console. ForwardToKMsg to forward logs to the kernel log buffer. ForwardToSyslog to forward to a syslog daemon. ForwardToWall to forward messages as wall messages to all logged-in users. 4 Specify the maximum time to store journal entries. Enter a number to specify seconds. Or include a unit: "year", "month", "week", "day", "h" or "m". Enter 0 to disable. The default is 1month. 5 Configure rate limiting. If more logs are received than what is specified in RateLimitBurst during the time interval defined by RateLimitIntervalSec, all further messages within the interval are dropped until the interval is over. It is recommended to set RateLimitIntervalSec=30s and RateLimitBurst=10000, which are the defaults. 6 Specify how logs are stored. The default is persistent: volatile to store logs in memory in /run/log/journal/. These logs are lost after rebooting. persistent to store logs to disk in /var/log/journal/. systemd creates the directory if it does not exist. auto to store logs in /var/log/journal/ if the directory exists. If it does not exist, systemd temporarily stores logs in /run/systemd/journal. none to not store logs. systemd drops all logs. 7 Specify the timeout before synchronizing journal files to disk for ERR, WARNING, NOTICE, INFO, and DEBUG logs. systemd immediately syncs after receiving a CRIT, ALERT, or EMERG log. The default is 1s. 8 Specify the maximum size the journal can use. The default is 8G. 9 Specify how much disk space systemd must leave free. The default is 20%. 10 Specify the maximum size for individual journal files stored persistently in /var/log/journal. The default is 10M. Note If you are removing the rate limit, you might see increased CPU utilization on the system logging daemons as they process any messages that would have previously been throttled. For more information on systemd settings, see https://www.freedesktop.org/software/systemd/man/journald.conf.html. The default settings listed on that page might not apply to OpenShift Container Platform. Use Butane to generate a MachineConfig object file, 40-worker-custom-journald.yaml, containing the configuration to be delivered to the nodes: $ butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml Apply the machine config. For example: $ oc apply -f 40-worker-custom-journald.yaml The controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Monitor the status of the rollout of the new rendered configuration to each node: $ oc describe machineconfigpool/worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool ... Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e | [
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"variant: openshift version: 4.15.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: \"worker\" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/logging/configuring-your-logging-deployment |
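To confirm that the journald settings actually reached a node after the machine config rollout, you can read the rendered file from the host. This is a sketch, not part of the official procedure; <worker_node_name> is a placeholder for one of your worker nodes:

$ oc get nodes -l node-role.kubernetes.io/worker=
$ oc debug node/<worker_node_name> -- chroot /host cat /etc/systemd/journald.conf

The output should show the values from the Butane config, such as RateLimitBurst=10000 and Storage=persistent.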
8.27. cvs | 8.27. cvs 8.27.1. RHBA-2013:1555 - cvs bug fix and enhancement update Updated cvs packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. The Concurrent Versions System (CVS) is a version control system that can record the history of your files. CVS only stores the differences between versions, instead of every version of every file you have ever created. CVS also keeps a log of who, when, and why changes occurred. Bug Fix BZ# 671460 When a CVS client tried to establish a GSSAPI-authenticated connection to a DNS load-balanced cluster node, the authentication failed because each node had a unique host name. With this update, the GSSAPI CVS server has been modified to search for any Kerberos key that matches the "cvs" service and any host name. As a result, the CVS server can now authenticate clients using GSSAPI even if the server's host name does not match the domain name, and thus Kerberos principal host name part, common for all cluster nodes. CVS server administrators are advised to deploy two Kerberos principals to each node: a principal matching the node's host name and a principal matching the cluster's domain name. Enhancement BZ# 684789 Previously, the CVS server did not pass the client address to the Pluggable Authentication Modules (PAM) system. As a consequence, it was not possible to distinguish clients by the network address with the PAM system and the system was not able to utilize the client address for authentication or authorization purposes. With this update, the client network address is passed to the PAM subsystem as a remote host item (PAM_RHOST). Also, the terminal item (PAM_TTY) is set to a dummy value "cvs" because some PAM modules cannot work with an unset value. Users of cvs are advised to upgrade to these updated packages, which fix this bug and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/cvs |
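The advice about deploying two Kerberos principals per cluster node can be illustrated with kadmin. The host names below (node1.example.com for the node, cvs-cluster.example.com for the cluster's domain name) are hypothetical placeholders, not values from the erratum:

kadmin: addprinc -randkey cvs/node1.example.com
kadmin: addprinc -randkey cvs/cvs-cluster.example.com
kadmin: ktadd -k /etc/krb5.keytab cvs/node1.example.com cvs/cvs-cluster.example.com

With both keys in the node's keytab, the updated GSSAPI server code can accept tickets issued against either name.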
9.5. Pacemaker Support for Docker Containers (Technology Preview) | 9.5. Pacemaker Support for Docker Containers (Technology Preview) Important Pacemaker support for Docker containers is provided for technology preview only. For details on what "technology preview" means, see Technology Preview Features Support Scope . There is one exception to this feature being Technology Preview: As of Red Hat Enterprise Linux 7.4, Red Hat fully supports the usage of Pacemaker bundles for Red Hat Openstack Platform (RHOSP) deployments. Pacemaker supports a special syntax for launching a Docker container with any infrastructure it requires: the bundle . After you have created a Pacemaker bundle, you can create a Pacemaker resource that the bundle encapsulates. Section 9.5.1, "Configuring a Pacemaker Bundle Resource" describes the syntax for the command to create a Pacemaker bundle and provides tables summarizing the parameters you can define for each bundle parameter. Section 9.5.2, "Configuring a Pacemaker Resource in a Bundle" provides information on configuring a resource contained in a Pacemaker bundle. Section 9.5.3, "Limitations of Pacemaker Bundles" notes the limitations of Pacemaker bundles. Section 9.5.4, "Pacemaker Bundle Configuration Example" provides a Pacemaker bundle configuration example. 9.5.1. Configuring a Pacemaker Bundle Resource The syntax for the command to create a Pacemaker bundle for a Docker container is as follows. This command creates a bundle that encapsulates no other resources. For information on creating a cluster resource in a bundle see Section 9.5.2, "Configuring a Pacemaker Resource in a Bundle" . The required bundle_id parameter must be a unique name for the bundle. If the --disabled option is specified, the bundle is not started automatically. If the --wait option is specified, Pacemaker will wait up to n seconds for the bundle to start and then return 0 on success or 1 on error. If n is not specified it defaults to 60 minutes. The following sections describe the parameters you can configure for each element of a Pacemaker bundle. 9.5.1.1. Docker Parameters Table 9.6, "Docker Container Parameters" describes the docker container options you can set for a bundle. Note Before configuring a docker bundle in Pacemaker, you must install Docker and supply a fully configured Docker image on every node allowed to run the bundle. Table 9.6. Docker Container Parameters Field Default Description image Docker image tag (required) replicas Value of promoted-max if that is positive, otherwise 1. A positive integer specifying the number of container instances to launch replicas-per-host 1 A positive integer specifying the number of container instances allowed to run on a single node promoted-max 0 A non-negative integer that, if positive, indicates that the containerized service should be treated as a multistate service, with this many replicas allowed to run the service in the master role network If specified, this will be passed to the docker run command as the network setting for the Docker container. run-command /usr/sbin/pacemaker_remoted if the bundle contains a resource, otherwise none This command will be run inside the container when launching it ("PID 1"). If the bundle contains a resource, this command must start the pacemaker_remoted daemon (but it could, for example, be a script that performs others tasks as well). options Extra command-line options to pass to the docker run command 9.5.1.2. 
Bundle Network Parameters Table 9.7, "Bundle Resource Network Parameters" describes the network options you can set for a bundle. Table 9.7. Bundle Resource Network Parameters Field Default Description add-host TRUE If TRUE, and ip-range-start is used, Pacemaker will automatically ensure that the /etc/hosts file inside the containers has entries for each replica name and its assigned IP. ip-range-start If specified, Pacemaker will create an implicit ocf:heartbeat:IPaddr2 resource for each container instance, starting with this IP address, using as many sequential addresses as were specified as the replicas parameter for the Docker element. These addresses can be used from the host's network to reach the service inside the container, although it is not visible within the container itself. Only IPv4 addresses are currently supported. host-netmask 32 If ip-range-start is specified, the IP addresses are created with this CIDR netmask (as a number of bits). host-interface If ip-range-start is specified, the IP addresses are created on this host interface (by default, it will be determined from the IP address). control-port 3121 If the bundle contains a Pacemaker resource, the cluster will use this integer TCP port for communication with Pacemaker Remote inside the container. Changing this is useful when the container is unable to listen on the default port, which could happen when the container uses the host's network rather than ip-range-start (in which case replicas-per-host must be 1), or when the bundle may run on a Pacemaker Remote node that is already listening on the default port. Any PCMK_remote_port environment variable set on the host or in the container is ignored for bundle connections. When a Pacemaker bundle configuration uses the control-port parameter, then if the bundle has its own IP address the port needs to be open on that IP address on and from all full cluster nodes running corosync. If, instead, the bundle has set the network="host" container parameter, the port needs to be open on each cluster node's IP address from all cluster nodes. Note Replicas are named by the bundle ID plus a dash and an integer counter starting with zero. For example, if a bundle named httpd-bundle has configured replicas=2 , its containers will be named httpd-bundle-0 and httpd-bundle-1 . In addition to the network parameters, you can optionally specify port-map parameters for a bundle. Table 9.8, "Bundle Resource port-map Parameters" describes these port-map parameters. Table 9.8. Bundle Resource port-map Parameters Field Default Description id A unique name for the port mapping (required) port If this is specified, connections to this TCP port number on the host network (on the container's assigned IP address, if ip-range-start is specified) will be forwarded to the container network. Exactly one of port or range must be specified in a port-mapping. internal-port Value of port If port and internal-port are specified, connections to port on the host's network will be forwarded to this port on the container network. range If range is specified, connections to these TCP port numbers (expressed as first_port-last_port ) on the host network (on the container's assigned IP address, if ip-range-start is specified) will be forwarded to the same ports in the container network. Exactly one of port or range must be specified in a port mapping. Note If the bundle contains a resource, Pacemaker will automatically map the control-port , so it is not necessary to specify that port in a port mapping. 9.5.1.3. 
Bundle Storage Parameters You can optionally configure storage-map parameters for a bundle. Table 9.9, "Bundle Resource Storage Mapping Parameters" describes these parameters. Table 9.9. Bundle Resource Storage Mapping Parameters Field Default Description id A unique name for the storage mapping (required) source-dir The absolute path on the host's filesystem that will be mapped into the container. Exactly one of the source-dir and source-dir-root parameters must be specified when configuring a storage-map parameter. source-dir-root The start of a path on the host's filesystem that will be mapped into the container, using a different subdirectory on the host for each container instance. The subdirectory will be named with the same name as the bundle name, plus a dash and an integer counter starting with 0. Exactly one of the source-dir and source-dir-root parameters must be specified when configuring a storage-map parameter. target-dir The path name within the container where the host storage will be mapped (required) options File system mount options to use when mapping the storage As an example of how subdirectories on a host are named using the source-dir-root parameter, if source-dir-root=/path/to/my/directory, target-dir=/srv/appdata, and the bundle is named mybundle with replicas=2, then the cluster will create two container instances with host names mybundle-0 and mybundle-1 and create two directories on the host running the containers: /path/to/my/directory/mybundle-0 and /path/to/my/directory/mybundle-1. Each container will be given one of those directories, and any application running inside the container will see the directory as /srv/appdata. Note Pacemaker does not define the behavior if the source directory does not already exist on the host. However, it is expected that the container technology or its resource agent will create the source directory in that case. Note If the bundle contains a Pacemaker resource, Pacemaker will automatically map the equivalent of source-dir=/etc/pacemaker/authkey target-dir=/etc/pacemaker/authkey and source-dir-root=/var/log/pacemaker/bundles target-dir=/var/log into the container, so it is not necessary to specify those paths when configuring storage-map parameters. Important The PCMK_authkey_location environment variable must not be set to anything other than the default of /etc/pacemaker/authkey on any node in the cluster. 9.5.2. Configuring a Pacemaker Resource in a Bundle A bundle may optionally contain one Pacemaker cluster resource. As with a resource that is not contained in a bundle, the cluster resource may have operations, instance attributes, and metadata attributes defined. If a bundle contains a resource, the container image must include the Pacemaker Remote daemon, and ip-range-start or control-port must be configured in the bundle. Pacemaker will create an implicit ocf:pacemaker:remote resource for the connection, launch Pacemaker Remote within the container, and monitor and manage the resource by means of Pacemaker Remote. If the bundle has more than one container instance (replica), the Pacemaker resource will function as an implicit clone, which will be a multistate clone if the bundle has configured the promoted-max option as greater than zero. You create a resource in a Pacemaker bundle with the pcs resource create command by specifying the bundle parameter for the command and the bundle ID in which to include the resource.
For an example of creating a Pacemaker bundle that contains a resource, see Section 9.5.4, "Pacemaker Bundle Configuration Example" . Important Containers in bundles that contain a resource must have an accessible networking environment, so that Pacemaker on the cluster nodes can contact Pacemaker Remote inside the container. For example, the docker option --net=none should not be used with a resource. The default (using a distinct network space inside the container) works in combination with the ip-range-start parameter. If the docker option --net=host is used (making the container share the host's network space), a unique control-port parameter should be specified for each bundle. Any firewall must allow access to the control-port . 9.5.2.1. Node Attributes and Bundle Resources If the bundle contains a cluster resource, the resource agent may want to set node attributes such as master scores. However, with containers, it is not apparent which node should get the attribute. If the container uses shared storage that is the same no matter which node the container is hosted on, then it is appropriate to use the master score on the bundle node itself. On the other hand, if the container uses storage exported from the underlying host, then it may be more appropriate to use the master score on the underlying host. Since this depends on the particular situation, the container-attribute-target resource metadata attribute allows the user to specify which approach to use. If it is set to host , then user-defined node attributes will be checked on the underlying host. If it is anything else, the local node (in this case the bundle node) is used. This behavior applies only to user-defined attributes; the cluster will always check the local node for cluster-defined attributes such as #uname . If container-attribute-target is set to host , the cluster will pass additional environment variables to the resource agent that allow it to set node attributes appropriately. 9.5.2.2. Metadata Attributes and Bundle Resources Any metadata attribute set on a bundle will be inherited by the resource contained in a bundle and any resources implicitly created by Pacemaker for the bundle. This includes options such as priority , target-role , and is-managed . 9.5.3. Limitations of Pacemaker Bundles Pacemaker bundles operate with the following limitations: Bundles may not be included in groups or explicitly cloned with a pcs command. This includes a resource that the bundle contains, and any resources implicitly created by Pacemaker for the bundle. Note, however, that if a bundle is configured with a value of replicas greater than one, the bundle behaves as if it were a clone. Restarting Pacemaker while a bundle is unmanaged or the cluster is in maintenance mode may cause the bundle to fail. Bundles do not have instance attributes, utilization attributes, or operations, although a resource contained in a bundle may have them. A bundle that contains a resource can run on a Pacemaker Remote node only if the bundle uses a distinct control-port . 9.5.4. Pacemaker Bundle Configuration Example The following example creates a Pacemaker bundle resource with a bundle ID of httpd-bundle that contains an ocf:heartbeat:apache resource with a resource ID of httpd . This procedure requires the following prerequisite configuration: Docker has been installed and enabled on every node in the cluster. There is an existing Docker image, named pcmktest:http The container image includes the Pacemaker Remote daemon. 
The container image includes a configured Apache web server. Every node in the cluster has directories /var/local/containers/httpd-bundle-0, /var/local/containers/httpd-bundle-1, and /var/local/containers/httpd-bundle-2, containing an index.html file for the web server root. In production, a single, shared document root would be more likely, but for the example this configuration allows you to make the index.html file on each host different so that you can connect to the web server and verify which index.html file is being served. This procedure configures the following parameters for the Pacemaker bundle: The bundle ID is httpd-bundle. The previously-configured Docker container image is pcmktest:http. This example will launch three container instances. This example will pass the command-line option --log-driver=journald to the docker run command. This parameter is not required, but is included to show how to pass an extra option to the docker command. A value of --log-driver=journald means that the system logs inside the container will be logged in the underlying host's systemd journal. Pacemaker will create three sequential implicit ocf:heartbeat:IPaddr2 resources, one for each container instance, starting with the IP address 192.168.122.131. The IP addresses are created on the host interface eth0. The IP addresses are created with a CIDR netmask of 24. This example creates a port map ID of http-port; connections to port 80 on the container's assigned IP address will be forwarded to the container network. This example creates a storage map ID of httpd-root. For this storage mapping: The value of source-dir-root is /var/local/containers, which specifies the start of the path on the host's file system that will be mapped into the container, using a different subdirectory on the host for each container instance. The value of target-dir is /var/www/html, which specifies the path name within the container where the host storage will be mapped. The file system rw mount option will be used when mapping the storage. Since this example container includes a resource, Pacemaker will automatically map the equivalent of source-dir=/etc/pacemaker/authkey in the container, so you do not need to specify that path in the storage mapping. In this example, the existing cluster configuration is put into a temporary file named tmp-cib.xml, which is then copied to a file named tmp-cib.xml.deltasrc. All modifications to the cluster configuration are made to the tmp-cib.xml file. When the updates are complete, this procedure uses the diff-against option of the pcs cluster cib-push command so that only the updates to the configuration file are pushed to the active configuration file. | [
"pcs resource bundle create bundle_id container docker [ container_options ] [network network_options ] [port-map port_options ]... [storage-map storage_options ]... [meta meta_options ] [--disabled] [--wait[=n]]",
"pcs cluster cib tmp-cib.xml cp tmp-cib.xml tmp-cib.xml.deltasrc pcs -f tmp.cib.xml resource bundle create httpd-bundle container docker image=pcmktest:http replicas=3 options=--log-driver=journald network ip-range-start=192.168.122.131 host-interface=eth0 host-netmask=24 port-map id=httpd-port port=80 storage-map id=httpd-root source-dir-root=/var/local/containers target-dir=/var/www/html options=rw pcs -f tmp-cib.xml resource create httpd ocf:heartbeat:apache statusurl=http://localhost/server-status bundle httpd-bundle pcs cluster cib-push tmp-cib.xml diff-against=tmp-cib.xml.deltasrc"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-containers-HAAR |
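Once the bundle from the example is running, a simple way to verify it is to check cluster status and fetch the test pages. This is a hedged sketch: the IP addresses follow from ip-range-start=192.168.122.131 with replicas=3, and the differing index.html contents mentioned above tell you which replica answered:

$ pcs status
$ curl http://192.168.122.131/
$ curl http://192.168.122.132/
$ curl http://192.168.122.133/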
Chapter 11. SecretList [image.openshift.io/v1] | Chapter 11. SecretList [image.openshift.io/v1] Description SecretList is a list of Secret. Type object Required items 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Secret) Items is a list of secret objects. More info: https://kubernetes.io/docs/concepts/configuration/secret kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 11.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/secrets GET : read secrets of the specified ImageStream 11.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/secrets Table 11.1. Global path parameters Parameter Type Description name string name of the SecretList namespace string object name and auth scope, such as for teams and projects Table 11.2. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call.
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantic of the watch request is as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications.
Specify resourceVersion. HTTP method GET Description read secrets of the specified ImageStream Table 11.3. HTTP responses HTTP code Response body 200 - OK SecretList schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/image_apis/secretlist-image-openshift-io-v1 |
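For illustration, the endpoint can be exercised directly with the oc client; <namespace> and <name> are placeholders for a real project and image stream, and limit is one of the query parameters described above:

$ oc get --raw "/apis/image.openshift.io/v1/namespaces/<namespace>/imagestreams/<name>/secrets?limit=500"

The response body is a JSON SecretList matching the schema in the Specification section.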
Chapter 9. File Systems | Chapter 9. File Systems SELinux security labels are now supported on the OverlayFS file system With this update, the OverlayFS file system now supports SELinux security labels. When using Docker containers with the OverlayFS storage driver, you no longer have to configure Docker to disable SELinux support for the containers. (BZ# 1297929 ) NFSoRDMA server is now fully supported NFS over RDMA (NFSoRDMA) server, previously provided as a Technology Preview, is now fully supported when accessed by Red Hat Enterprise Linux clients. For more information on NFSoRDMA see the following section in the Red Hat Enterprise Linux 7 Storage Administration Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/Storage_Administration_Guide/index.html#nfs-rdma (BZ#1400501) autofs now supports the browse options of amd format maps The browse functionality of sun format maps makes available automount points visible in directory listings of mounted automount-managed mounts and is now also available for autofs amd format maps. You can now add mount point sections to the autofs configuration for amd format mounts, in the same way automount points are configured in amd , without the need to also add a corresponding entry to the master map. As a result, you can avoid having incompatible master map entries in the autofs master map within shared multi-vendor environments. The browsable_dirs option can be used in either the autofs [ amd ] configuration section, or following amd mount point sections. The browsable and utimeout map options of amd type auto map entries can also be used. Note that the browsable_dirs option can be set only to yes or no . (BZ# 1367576 ) To make searching logs easier, autofs now provides identifiers of mount request log entries For busy sites, it can be difficult to identify log entries for specific mount attempts when examining mount problems. The entries are often mixed with other concurrent mount requests and activities if the log recorded a lot of activity. Now, you can quickly filter entries for specific mount requests if you enable adding a mount request log identifier to mount request log entries in the autofs configuration. The new logging is turned off by default and is controlled by the use_mount_request_log_id option, as described in the autofs.conf file. (BZ#1382093) GFS2 on IBM z Systems is now supported in SSI environments Starting with Red Hat Enterprise Linux 7.4, GFS2 on IBM z Systems (Resilient Storage on the s390x add-on) is supported in z/VM Single System Image (SSI) environments, with multiple central electronics complexes (CECs). This allows the cluster to stay up even when logical partitions (LPARs) or CECs are restarted. Live migration is not supported due to the real-time requirements of High Availability (HA) clustering. The maximum node limit of 4 nodes on IBM z Systems still applies. For information on configuring high availability and resilient storage for IBM z systems, see https://access.redhat.com/articles/1543363 . (BZ#1273401) gfs2-utils rebased to version 3.1.10 The gfs2-utils packages have been upgraded to upstream version 3.1.10, which provides a number of bug fixes and enhancements over the version. Notably, this update provides: various checking and performance improvements of the fsck.gfs2 command better handling of odd block device geometry in the mkfs.gfs2 command. gfs2_edit savemeta leaf chain block handling bug fixes. handling UUIDs by the libuuid library instead of custom functions. 
new --enable-gprof configuration option for profiling. documentation improvements. (BZ#1413684) FUSE now supports SEEK_HOLE and SEEK_DATA in lseek calls This update provides the SEEK_HOLE and SEEK_DATA features for the Filesystem in Userspace (FUSE) lseek system call. Now, you can use FUSE lseek to adjust the offset of the file to the location in the file that contains data, with SEEK_DATA , or a hole, with SEEK_HOLE . (BZ#1306396) NFS server now supports limited copy-offload The NFS server-side copy feature now allows the NFS client to copy file data between two files that reside on the same file system on the same NFS server without the need to transmit data back and forth over the network through the NFS client. Note that the NFS protocol also allows copies between different file systems or servers, but the Red Hat Enterprise Linux implementation currently does not support such operations. (BZ#1356122) SELinux is supported for use with GFS2 file systems Security Enhanced Linux (SELinux) is now supported for use with GFS2 file systems. Since use of SELinux with GFS2 incurs a small performance penalty, you may choose not to use SELinux with GFS2 even on a system with SELinux in enforcing mode. For information on how to configure this, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Global_File_System_2/index.html . (BZ#437984) NFSoRDMA client and server now support Kerberos authentication This update adds Kerberos authentication support for NFS over RDMA (NFSoRDMA) client and server to allow you to use krb5, krb5i, and krb5p authentication with NFSoRDMA features. You can now use Kerberos with NFSoRDMA for secure authentication of each Remote Procedure Call (RPC) transaction. Note that you need version 1.3.0-0.36 or higher of the nfs-utils package to be installed to use Kerberos with NFSoRDMA. (BZ#1401797) rpc.idmapd now supports obtaining NFSv4 ID Domains from DNS The NFS domain name that is used in the ID mapping can now be retrieved from DNS. If the Domain variable is not set in the /etc/idmapd.conf file, DNS is queried to search for the _nfsv4idmapdomain Text record. If a value is found, it is used as the NFS domain. (BZ#980925) NFSv4.1 is now the default NFS mount protocol Prior to this update, NFSv4.0 was the default NFS mount protocol. NFSv4.1 provides significant feature improvements over NFSv4.0, such as sessions, pNFS, parallel OPENs, and session trunking. With this update, NFSv4.1 is the default NFS mount protocol. If you have already specified the mount protocol minor version, this update causes no change in behavior. This update causes a change in behavior if you have specified NFSv4 without a specific minor version, provided the server supports NFSv4.1. If the server only supports NFSv4.0, the mount remains a NFSv4.0 mount. You can retain the original behavior by specifying 0 as the minor version: on the mount command line, in the /etc/fstab file, or in the /etc/nfsmount.conf file. (BZ# 1375259 ) Setting nfs-utils configuration options has been centralized in nfs.conf With this update, nfs-utils uses configuration centralized in the nfs.conf file, which is structured into stanzas for each nfs-utils program. Each nfs-utils program can read the configuration directly from the file, so you no longer need to use the systemctl restart nfs-config.service command, but restart only the specific program. For more information, see the nfs.conf(5) manual page. 
For compatibility with earlier releases, the older /etc/sysconfig/nfs configuration method is still available. However, it is recommended to avoid specifying configuration settings in both the /etc/sysconfig/nfs and /etc/nfs.conf file. (BZ# 1418041 ) Locking performance for NFSv4.1 mounts has been improved for certain workloads NFSv4 clients poll the server at an interval to obtain a lock under contention. As a result, the locking performance for contented locks for NFSv4 is slower than the performance of NFSv3. The CB_NOTIFY_LOCK operation has been added to the NFS client and server, so NFSv4.1 and later allow servers to call back to clients waiting on a lock. This update improves the locking performance for contented locks on NFSv4.1 mounts for certain workloads. Note that the performance might not improve for longer lock contention times. (BZ#1377710) The CephFS kernel client is fully supported with Red Hat Ceph Storage 3 The Ceph File System (CephFS) kernel module enables Red Hat Enterprise Linux nodes to mount Ceph File Systems from Red Hat Ceph Storage clusters. The kernel client in Red Hat Enterprise Linux is a more efficient alternative to the Filesystem in Userspace (FUSE) client included with Red Hat Ceph Storage. Note that the kernel client currently lacks support for CephFS quotas. The CephFS kernel client was introduced in Red Hat Enterprise Linux 7.3 as a Technology Preview, and since the release of Red Hat Ceph Storage 3, CephFS is fully supported. For more information, see the Ceph File System Guide for Red Hat Ceph Storage 3: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_file_system_guide/ . (BZ#1626527) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/new_features_file_systems |
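As a sketch of the NFSv4.1-default change described above: if a mount must stay on NFSv4.0, the minor version can be pinned explicitly. The server and export names below are placeholders:

$ mount -t nfs -o vers=4.0 server.example.com:/export /mnt/export

or, in /etc/fstab:

server.example.com:/export /mnt/export nfs vers=4.0 0 0

Without a pinned minor version, the same mount negotiates NFSv4.1 when the server supports it.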
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/understanding_rhel_for_real_time/proc_providing-feedback-on-red-hat-documentation_understanding-rhel-for-real-time-core-concepts |
Chapter 3. Troubleshooting logging | Chapter 3. Troubleshooting logging 3.1. Viewing Logging status You can view the status of the Red Hat OpenShift Logging Operator and other logging components. 3.1.1. Viewing the status of the Red Hat OpenShift Logging Operator You can view the status of the Red Hat OpenShift Logging Operator. Prerequisites The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed. Procedure Change to the openshift-logging project by running the following command: USD oc project openshift-logging Get the ClusterLogging instance status by running the following command: USD oc get clusterlogging instance -o yaml Example output apiVersion: logging.openshift.io/v1 kind: ClusterLogging # ... status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: collector-2rhqp: ip-10-0-169-13.ec2.internal collector-6fgjh: ip-10-0-165-244.ec2.internal collector-6l2ff: ip-10-0-128-218.ec2.internal collector-54nx5: ip-10-0-139-30.ec2.internal collector-flpnn: ip-10-0-147-228.ec2.internal collector-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - collector-2rhqp - collector-54nx5 - collector-6fgjh - collector-6l2ff - collector-flpnn - collector-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1 1 In the output, the cluster status fields appear in the status stanza. 2 Information on the Fluentd pods. 3 Information on the Elasticsearch pods, including Elasticsearch cluster health, green , yellow , or red . 4 Information on the Kibana pods. 3.1.1.1. Example condition messages The following are examples of some condition messages from the Status.Nodes section of the ClusterLogging instance. A status message similar to the following indicates a node has exceeded the configured low watermark and no shard will be allocated to this node: Example output nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: "True" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {} A status message similar to the following indicates a node has exceeded the configured high watermark and shards will be relocated to other nodes: Example output nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. 
reason: Disk Watermark High status: "True" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {} A status message similar to the following indicates the Elasticsearch node selector in the CR does not match any nodes in the cluster: Example output Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: A status message similar to the following indicates that the requested PVC could not bind to PV: Example output Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable A status message similar to the following indicates that the Fluentd pods cannot be scheduled because the node selector did not match any nodes: Example output Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready: 3.1.2. Viewing the status of logging components You can view the status for a number of logging components. Prerequisites The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed. Procedure Change to the openshift-logging project. USD oc project openshift-logging View the status of logging environment: USD oc describe deployment cluster-logging-operator Example output Name: cluster-logging-operator .... Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable .... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1---- View the status of the logging replica set: Get the name of a replica set: Example output USD oc get replicaset Example output NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m Get the status of the replica set: USD oc describe replicaset cluster-logging-operator-574b8987df Example output Name: cluster-logging-operator-574b8987df .... Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed .... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv---- 3.2. Troubleshooting log forwarding 3.2.1. 
Redeploying Fluentd pods When you create a ClusterLogForwarder custom resource (CR), if the Red Hat OpenShift Logging Operator does not redeploy the Fluentd pods automatically, you can delete the Fluentd pods to force them to redeploy. Prerequisites You have created a ClusterLogForwarder custom resource (CR) object. Procedure Delete the Fluentd pods to force them to redeploy by running the following command: USD oc delete pod --selector logging-infra=collector 3.2.2. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true Example Fluentd error message 2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" The error is also visible on the receiving end. 
For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 3.3. Troubleshooting logging alerts You can use the following procedures to troubleshoot logging alerts on your cluster. 3.3.1. Elasticsearch cluster health status is red At least one primary shard and its replicas are not allocated to a node. Use the following procedure to troubleshoot this alert. Tip Some commands in this documentation reference an Elasticsearch pod by using a USDES_POD_NAME shell variable. If you want to copy and paste the commands directly from this documentation, you must set this variable to a value that is valid for your Elasticsearch cluster. You can list the available Elasticsearch pods by running the following command: USD oc -n openshift-logging get pods -l component=elasticsearch Choose one of the pods listed and set the USDES_POD_NAME variable, by running the following command: USD export ES_POD_NAME=<elasticsearch_pod_name> You can now use the USDES_POD_NAME variable in commands. Procedure Check the Elasticsearch cluster health and verify that the cluster status is red by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health List the nodes that have joined the cluster by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cat/nodes?v List the Elasticsearch pods and compare them with the nodes in the command output from the step, by running the following command: USD oc -n openshift-logging get pods -l component=elasticsearch If some of the Elasticsearch nodes have not joined the cluster, perform the following steps. 
Confirm that Elasticsearch has an elected master node by running the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cat/master?v Review the pod logs of the elected master node for issues by running the following command and observing the output: USD oc logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging Review the logs of nodes that have not joined the cluster for issues by running the following command and observing the output: USD oc logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging If all the nodes have joined the cluster, check if the cluster is in the process of recovering by running the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cat/recovery?active_only=true If there is no command output, the recovery process might be delayed or stalled by pending tasks. Check if there are pending tasks by running the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- health | grep number_of_pending_tasks If there are pending tasks, monitor their status. If their status changes and indicates that the cluster is recovering, continue waiting. The recovery time varies according to the size of the cluster and other factors. Otherwise, if the status of the pending tasks does not change, this indicates that the recovery has stalled. If it seems like the recovery has stalled, check if the cluster.routing.allocation.enable value is set to none , by running the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cluster/settings?pretty If the cluster.routing.allocation.enable value is set to none , set it to all , by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cluster/settings?pretty \ -X PUT -d '{"persistent": {"cluster.routing.allocation.enable":"all"}}' Check if any indices are still red by running the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cat/indices?v If any indices are still red, try to clear them by performing the following steps. Clear the cache by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty Increase the max allocation retries by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name>/_settings?pretty \ -X PUT -d '{"index.allocation.max_retries":10}' Delete all the scroll items by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_search/scroll/_all -X DELETE Increase the timeout by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name>/_settings?pretty \ -X PUT -d '{"index.unassigned.node_left.delayed_timeout":"10m"}' If the preceding steps do not clear the red indices, delete the indices individually. 
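To narrow the output to only the problem indices, you can add a health filter to the cat indices query. The health parameter is a standard Elasticsearch cat API option, so it should work with the same es_util wrapper, although this filtered form is not part of the documented procedure: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query='_cat/indices?v&health=red'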
Identify the red index name by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cat/indices?v Delete the red index by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_red_index_name> -X DELETE If there are no red indices and the cluster status is red, check for a continuous heavy processing load on a data node. Check if the Elasticsearch JVM Heap usage is high by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_nodes/stats?pretty In the command output, review the node_name.jvm.mem.heap_used_percent field to determine the JVM Heap usage. Check for high CPU utilization. For more information about CPU utilization, see the OpenShift Container Platform "Reviewing monitoring dashboards" documentation. Additional resources Reviewing monitoring dashboards Fix a red or yellow cluster status 3.3.2. Elasticsearch cluster health status is yellow Replica shards for at least one primary shard are not allocated to nodes. Increase the node count by adjusting the nodeCount value in the ClusterLogging custom resource (CR). Additional resources Fix a red or yellow cluster status 3.3.3. Elasticsearch node disk low watermark reached Elasticsearch does not allocate shards to nodes that reach the low watermark. Tip Some commands in this documentation reference an Elasticsearch pod by using a USDES_POD_NAME shell variable. If you want to copy and paste the commands directly from this documentation, you must set this variable to a value that is valid for your Elasticsearch cluster. You can list the available Elasticsearch pods by running the following command: USD oc -n openshift-logging get pods -l component=elasticsearch Choose one of the pods listed and set the USDES_POD_NAME variable, by running the following command: USD export ES_POD_NAME=<elasticsearch_pod_name> You can now use the USDES_POD_NAME variable in commands. Procedure Identify the node on which Elasticsearch is deployed by running the following command: USD oc -n openshift-logging get po -o wide Check if there are unassigned shards by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cluster/health?pretty | grep unassigned_shards If there are unassigned shards, check the disk space on each node, by running the following command: USD for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \ do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod \ -- df -h /elasticsearch/persistent; done In the command output, check the Use column to determine the used disk percentage on that node. Example output elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent If the used disk percentage is above 85%, the node has exceeded the low watermark, and shards can no longer be allocated to this node.
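The 85% figure corresponds to the default value of the Elasticsearch cluster.routing.allocation.disk.watermark.low setting. If you want to confirm the watermark values in effect on your cluster, you can query the cluster settings with defaults included; include_defaults is a standard Elasticsearch query parameter, and this is a read-only check: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query='_cluster/settings?include_defaults=true&pretty' | grep -i watermark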
To check the current redundancyPolicy , run the following command: USD oc -n openshift-logging get es elasticsearch \ -o jsonpath='{.spec.redundancyPolicy}' If you are using a ClusterLogging resource on your cluster, run the following command: USD oc -n openshift-logging get cl \ -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy value is higher than the SingleRedundancy value, set it to the SingleRedundancy value and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices Identify an old index that can be deleted. Delete the index by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name> -X DELETE 3.3.4. Elasticsearch node disk high watermark reached Elasticsearch attempts to relocate shards away from a node that has reached the high watermark to a node with low disk usage that has not crossed any watermark threshold limits. To allocate shards to a particular node, you must free up some space on that node. If increasing the disk space is not possible, try adding a new data node to the cluster, or decrease the total cluster redundancy policy. Tip Some commands in this documentation reference an Elasticsearch pod by using a USDES_POD_NAME shell variable. If you want to copy and paste the commands directly from this documentation, you must set this variable to a value that is valid for your Elasticsearch cluster. You can list the available Elasticsearch pods by running the following command: USD oc -n openshift-logging get pods -l component=elasticsearch Choose one of the pods listed and set the USDES_POD_NAME variable, by running the following command: USD export ES_POD_NAME=<elasticsearch_pod_name> You can now use the USDES_POD_NAME variable in commands. Procedure Identify the node on which Elasticsearch is deployed by running the following command: USD oc -n openshift-logging get po -o wide Check the disk space on each node: USD for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \ do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod \ -- df -h /elasticsearch/persistent; done Check if the cluster is rebalancing: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_cluster/health?pretty | grep relocating_shards If the command output shows relocating shards, the high watermark has been exceeded. The default value of the high watermark is 90%. Increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster, or decrease the total cluster redundancy policy. To check the current redundancyPolicy , run the following command: USD oc -n openshift-logging get es elasticsearch \ -o jsonpath='{.spec.redundancyPolicy}' If you are using a ClusterLogging resource on your cluster, run the following command: USD oc -n openshift-logging get cl \ -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy value is higher than the SingleRedundancy value, set it to the SingleRedundancy value and save this change. If the preceding steps do not fix the issue, delete the old indices. 
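As a note on the redundancy change above: rather than editing the resource interactively, you can apply the SingleRedundancy value with a merge patch. This is a sketch that assumes the default ClusterLogging resource name instance: USD oc -n openshift-logging patch clusterlogging instance --type=merge -p '{"spec":{"logStore":{"elasticsearch":{"redundancyPolicy":"SingleRedundancy"}}}}'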
Check the status of all indices on Elasticsearch by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices Identify an old index that can be deleted. Delete the index by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name> -X DELETE 3.3.5. Elasticsearch node disk flood watermark reached Elasticsearch enforces a read-only index block on every index that has both of these conditions: One or more shards are allocated to the node. One or more disks exceed the flood stage . Use the following procedure to troubleshoot this alert. Tip Some commands in this documentation reference an Elasticsearch pod by using a USDES_POD_NAME shell variable. If you want to copy and paste the commands directly from this documentation, you must set this variable to a value that is valid for your Elasticsearch cluster. You can list the available Elasticsearch pods by running the following command: USD oc -n openshift-logging get pods -l component=elasticsearch Choose one of the pods listed and set the USDES_POD_NAME variable, by running the following command: USD export ES_POD_NAME=<elasticsearch_pod_name> You can now use the USDES_POD_NAME variable in commands. Procedure Get the disk space of the Elasticsearch node: USD for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \ do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod \ -- df -h /elasticsearch/persistent; done In the command output, check the Avail column to determine the free disk space on that node. Example output elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent Increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster, or decrease the total cluster redundancy policy. To check the current redundancyPolicy , run the following command: USD oc -n openshift-logging get es elasticsearch \ -o jsonpath='{.spec.redundancyPolicy}' If you are using a ClusterLogging resource on your cluster, run the following command: USD oc -n openshift-logging get cl \ -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy value is higher than the SingleRedundancy value, set it to the SingleRedundancy value and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices Identify an old index that can be deleted. Delete the index by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name> -X DELETE Continue freeing up and monitoring the disk space. 
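One convenient way to monitor progress is to re-run the disk space check on an interval with the watch utility. This sketch checks a single pod every 60 seconds, assuming the USDES_POD_NAME variable is still set: USD watch -n 60 "oc -n openshift-logging exec -c elasticsearch USDES_POD_NAME -- df -h /elasticsearch/persistent"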
After the used disk space drops below 90%, unblock writing to this node by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=_all/_settings?pretty \ -X PUT -d '{"index.blocks.read_only_allow_delete": null}' 3.3.6. Elasticsearch JVM heap usage is high The Elasticsearch node Java virtual machine (JVM) heap memory used is above 75%. Consider increasing the heap size . 3.3.7. Aggregated logging system CPU is high System CPU usage on the node is high. Check the CPU of the cluster node. Consider allocating more CPU resources to the node. 3.3.8. Elasticsearch process CPU is high Elasticsearch process CPU usage on the node is high. Check the CPU of the cluster node. Consider allocating more CPU resources to the node. 3.3.9. Elasticsearch disk space is running low Elasticsearch is predicted to run out of disk space within the next 6 hours based on current disk usage. Use the following procedure to troubleshoot this alert. Procedure Get the disk space of the Elasticsearch node: USD for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \ do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod \ -- df -h /elasticsearch/persistent; done In the command output, check the Avail column to determine the free disk space on that node. Example output elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent Increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster, or decrease the total cluster redundancy policy. To check the current redundancyPolicy , run the following command: USD oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}' If you are using a ClusterLogging resource on your cluster, run the following command: USD oc -n openshift-logging get cl \ -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy value is higher than the SingleRedundancy value, set it to the SingleRedundancy value and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices Identify an old index that can be deleted. Delete the index by running the following command: USD oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME \ -- es_util --query=<elasticsearch_index_name> -X DELETE Additional resources Fix a red or yellow cluster status 3.3.10. Elasticsearch FileDescriptor usage is high Based on current usage trends, the predicted number of file descriptors on the node is insufficient. Check the value of max_file_descriptors for each node as described in the Elasticsearch File Descriptors documentation. 3.4. Viewing the status of the Elasticsearch log store You can view the status of the OpenShift Elasticsearch Operator and the status of a number of Elasticsearch components. 3.4.1. Viewing the status of the Elasticsearch log store You can view the status of the Elasticsearch log store.
Prerequisites The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed. Procedure Change to the openshift-logging project by running the following command: USD oc project openshift-logging To view the status: Get the name of the Elasticsearch log store instance by running the following command: USD oc get Elasticsearch Example output NAME AGE elasticsearch 5h9m Get the Elasticsearch log store status by running the following command: USD oc get Elasticsearch <Elasticsearch-instance> -o yaml For example: USD oc get Elasticsearch elasticsearch -n openshift-logging -o yaml The output includes information similar to the following: Example output status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: "" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all 1 In the output, the cluster status fields appear in the status stanza. 2 The status of the Elasticsearch log store: The number of active primary shards. The number of active shards. The number of shards that are initializing. The number of Elasticsearch log store data nodes. The total number of Elasticsearch log store nodes. The number of pending tasks. The Elasticsearch log store status: green , red , yellow . The number of unassigned shards. 3 Any status conditions, if present. The Elasticsearch log store status indicates the reasons from the scheduler if a pod could not be placed. Any events related to the following conditions are shown: Container Waiting for both the Elasticsearch log store and proxy containers. Container Terminated for both the Elasticsearch log store and proxy containers. Pod unschedulable. Also, a condition is shown for a number of issues; see Example condition messages . 4 The Elasticsearch log store nodes in the cluster, with upgradeStatus . 5 The Elasticsearch log store client, data, and master pods in the cluster, listed under failed , notReady , or ready state. 3.4.1.1. Example condition messages The following are examples of some condition messages from the Status section of the Elasticsearch instance. The following status message indicates that a node has exceeded the configured low watermark, and no shard will be allocated to this node. status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: "True" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {} The following status message indicates that a node has exceeded the configured high watermark, and shards will be relocated to other nodes. 
status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: "True" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {} The following status message indicates that the Elasticsearch log store node selector in the custom resource (CR) does not match any nodes in the cluster: status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: "True" type: Unschedulable The following status message indicates that the Elasticsearch log store CR uses a non-existent persistent volume claim (PVC). status: nodes: - conditions: - lastTransitionTime: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable The following status message indicates that your Elasticsearch log store cluster does not have enough nodes to support the redundancy policy. status: clusterHealth: "" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: "True" type: InvalidRedundancy This status message indicates your cluster has too many control plane nodes: status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters The following status message indicates that Elasticsearch storage does not support the change you tried to make. For example: status: clusterHealth: green conditions: - lastTransitionTime: "2021-05-07T01:05:13Z" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored The reason and type fields specify the type of unsupported change: StorageClassNameChangeIgnored Unsupported change to the storage class name. StorageSizeChangeIgnored Unsupported change to the storage size. StorageStructureChangeIgnored Unsupported change between ephemeral and persistent storage structures. Important If you try to configure the ClusterLogging CR to switch from ephemeral to persistent storage, the OpenShift Elasticsearch Operator creates a persistent volume claim (PVC) but does not create a persistent volume (PV). To clear the StorageStructureChangeIgnored status, you must revert the change to the ClusterLogging CR and delete the PVC. 3.4.2. Viewing the status of the log store components You can view the status for a number of the log store components. Elasticsearch indices You can view the status of the Elasticsearch indices. Get the name of an Elasticsearch pod: USD oc get pods --selector component=elasticsearch -o name Example output pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7 Get the status of the indices: USD oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices Example output Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod.
green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0 Log store pods You can view the status of the pods that host the log store. Get the name of a pod: USD oc get pods --selector component=elasticsearch -o name Example output pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7 Get the status of a pod: USD oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw The output includes the following status information: Example output .... Status: Running .... Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 .... proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 .... Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True .... Events: <none> Log storage pod deployment configuration You can view the status of the log store deployment configuration. Get the name of a deployment configuration: USD oc get deployment --selector component=elasticsearch -o name Example output deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3 Get the deployment configuration status: USD oc describe deployment elasticsearch-cdm-1gon-1 The output includes the following status information: Example output .... Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 .... Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable .... Events: <none> Log store replica set You can view the status of the log store replica set. Get the name of a replica set: USD oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d Get the status of the replica set: USD oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495 The output includes the following status information: Example output .... Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 .... Events: <none> 3.4.3. Elasticsearch cluster status A dashboard in the Observe section of the OpenShift Container Platform web console displays the status of the Elasticsearch cluster. 
To get the status of the OpenShift Elasticsearch cluster, visit the dashboard in the Observe section of the OpenShift Container Platform web console at <cluster_url>/monitoring/dashboards/grafana-dashboard-cluster-logging . Elasticsearch status fields eo_elasticsearch_cr_cluster_management_state Shows whether the Elasticsearch cluster is in a managed or unmanaged state. For example: eo_elasticsearch_cr_cluster_management_state{state="managed"} 1 eo_elasticsearch_cr_cluster_management_state{state="unmanaged"} 0 eo_elasticsearch_cr_restart_total Shows the number of times the Elasticsearch nodes have restarted for certificate restarts, rolling restarts, or scheduled restarts. For example: eo_elasticsearch_cr_restart_total{reason="cert_restart"} 1 eo_elasticsearch_cr_restart_total{reason="rolling_restart"} 1 eo_elasticsearch_cr_restart_total{reason="scheduled_restart"} 3 es_index_namespaces_total Shows the total number of Elasticsearch index namespaces. For example: Total number of Namespaces. es_index_namespaces_total 5 es_index_document_count Shows the number of records for each namespace. For example: es_index_document_count{namespace="namespace_1"} 25 es_index_document_count{namespace="namespace_2"} 10 es_index_document_count{namespace="namespace_3"} 5 The "Secret Elasticsearch fields are either missing or empty" message If Elasticsearch is missing the admin-cert , admin-key , logging-es.crt , or logging-es.key files, the dashboard shows a status message similar to the following example: message": "Secret \"elasticsearch\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]", "reason": "Missing Required Secrets", | [
"oc project openshift-logging",
"oc get clusterlogging instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: collector-2rhqp: ip-10-0-169-13.ec2.internal collector-6fgjh: ip-10-0-165-244.ec2.internal collector-6l2ff: ip-10-0-128-218.ec2.internal collector-54nx5: ip-10-0-139-30.ec2.internal collector-flpnn: ip-10-0-147-228.ec2.internal collector-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - collector-2rhqp - collector-54nx5 - collector-6fgjh - collector-6l2ff - collector-flpnn - collector-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {}",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {}",
"Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready:",
"Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable",
"Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready:",
"oc project openshift-logging",
"oc describe deployment cluster-logging-operator",
"Name: cluster-logging-operator . Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1----",
"oc get replicaset",
"NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m",
"oc describe replicaset cluster-logging-operator-574b8987df",
"Name: cluster-logging-operator-574b8987df . Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv----",
"oc delete pod --selector logging-infra=collector",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/nodes?v",
"oc -n openshift-logging get pods -l component=elasticsearch",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/master?v",
"oc logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging",
"oc logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/recovery?active_only=true",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health | grep number_of_pending_tasks",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/settings?pretty",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/settings?pretty -X PUT -d '{\"persistent\": {\"cluster.routing.allocation.enable\":\"all\"}}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/indices?v",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.allocation.max_retries\":10}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_search/scroll/_all -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.unassigned.node_left.delayed_timeout\":\"10m\"}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/indices?v",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_red_index_name> -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_nodes/stats?pretty",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc -n openshift-logging get po -o wide",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/health?pretty | grep unassigned_shards",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc -n openshift-logging get po -o wide",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/health?pretty | grep relocating_shards",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_all/_settings?pretty -X PUT -d '{\"index.blocks.read_only_allow_delete\": null}'",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc project openshift-logging",
"oc get Elasticsearch",
"NAME AGE elasticsearch 5h9m",
"oc get Elasticsearch <Elasticsearch-instance> -o yaml",
"oc get Elasticsearch elasticsearch -n openshift-logging -o yaml",
"status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: \"\" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: \"True\" type: Unschedulable",
"status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable",
"status: clusterHealth: \"\" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: \"True\" type: InvalidRedundancy",
"status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters",
"status: clusterHealth: green conditions: - lastTransitionTime: \"2021-05-07T01:05:13Z\" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices",
"Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw",
". Status: Running . Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 . Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True . Events: <none>",
"oc get deployment --selector component=elasticsearch -o name",
"deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3",
"oc describe deployment elasticsearch-cdm-1gon-1",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable . Events: <none>",
"oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d",
"oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Events: <none>",
"eo_elasticsearch_cr_cluster_management_state{state=\"managed\"} 1 eo_elasticsearch_cr_cluster_management_state{state=\"unmanaged\"} 0",
"eo_elasticsearch_cr_restart_total{reason=\"cert_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"rolling_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"scheduled_restart\"} 3",
"Total number of Namespaces. es_index_namespaces_total 5",
"es_index_document_count{namespace=\"namespace_1\"} 25 es_index_document_count{namespace=\"namespace_2\"} 10 es_index_document_count{namespace=\"namespace_3\"} 5",
"message\": \"Secret \\\"elasticsearch\\\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]\", \"reason\": \"Missing Required Secrets\","
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/logging/troubleshooting-logging |
Chapter 2. Configuring the Overcloud before Creation | Chapter 2. Configuring the Overcloud before Creation The following chapter provides the configuration required before running the openstack overcloud deploy command. This includes preparing nodes for provisioning, configuring an IPv6 address on the Undercloud, and creating a network environment file that defines the IPv6 parameters for the Overcloud. 2.1. Initializing the Stack User Log into the director host as the stack user and run the following command to initialize your director configuration: This sets up environment variables containing authentication details to access the director's CLI tools. 2.2. Configuring an IPv6 Address on the Undercloud The Undercloud requires access to the Overcloud's Public API, which is on the External network. To accomplish this, the Undercloud host requires an IPv6 address on the interface accessing the External network. Note The Provisioning network still requires IPv4 connectivity for every node. The Undercloud and the Overcloud nodes use this network for PXE boot, introspection, and deployment. In addition, the nodes use this network to access DNS and NTP services over IPv4. Native VLAN or Dedicated Interface If the Undercloud uses a native VLAN or a dedicated interface attached to the External network, use the ip command to add an IPv6 address to the interface. In this example, the dedicated interface is eth0 : Trunked VLAN Interface If the Undercloud uses a trunked VLAN on the same interface as the control plane bridge ( br-ctlplane ) to access the External network, create a new VLAN interface, attach it to the control plane, and add an IPv6 address to the VLAN. For example, our scenario uses 100 for the External network's VLAN ID: Confirming the IPv6 Address Confirm the addition of the IPv6 address with the ip command: The IPv6 address appears on the chosen interface. Setting a Persistent IPv6 Address In addition to the above, you might want to make the IPv6 address permanent. In this case, modify or create the appropriate interface file in /etc/sysconfig/network-scripts/ (in our example, either ifcfg-eth0 or ifcfg-vlan100 ). Include the following lines: For more information, see How do I configure a network interface for IPv6? on the Red Hat Customer Portal. 2.3. Setting up your Environment This section uses a cut-down version of the process from Configuring Basic Overcloud Requirements with the CLI Tools in the Director Installation and Usage . Use the following workflow to set up your environment: Create a node definition template and register blank nodes in the director. Inspect hardware of all nodes. Manually tag nodes into roles. Create flavors and tag them into roles. 2.3.1. Registering Nodes A node definition template ( instackenv.json ) is a JSON-format file that contains the hardware and power management details for registering nodes. For example: Note The Provisioning network uses IPv4 addresses. The IPMI addresses must also be IPv4 addresses, and they must either be directly attached or reachable through routing over the Provisioning network. After creating the template, save the file to the stack user's home directory ( /home/stack/instackenv.json ), then import it into the director. Use the following command to accomplish this: This imports the template and registers each node from the template into the director. Assign the kernel and ramdisk images to all nodes: The nodes are now registered and configured in the director.
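Before moving on to introspection, you can verify that the import succeeded by listing the registered nodes; this uses the same baremetal command that the later tagging steps rely on: USD openstack baremetal node list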
2.3.2. Inspecting the Hardware of Nodes After registering the nodes, inspect the hardware attributes of each node. Run the following command to inspect the hardware attributes of each node: Important The nodes must be in the manageable state. Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes. 2.3.3. Manually Tagging the Nodes After registering and inspecting the hardware of each node, tag them into specific profiles. These profile tags match your nodes to flavors, and in turn the flavors are assigned to a deployment role. Retrieve a list of your nodes to identify their UUIDs: To manually tag a node to a specific profile, add a profile option to the properties/capabilities parameter for each node. For example, to tag three nodes to use a controller profile and three nodes to use a compute profile, use the following commands: The addition of the profile:compute and profile:control options tags the nodes into their respective profiles. Note As an alternative to manual tagging, use automatic profile tagging to tag larger numbers of nodes based on benchmarking data. 2.4. Configuring the Network This section examines the network configuration for the Overcloud. This includes isolating services onto specific networks and configuring the Overcloud with our IPv6 options. 2.4.1. Configuring Composable Network Details Copy the default network_data file: Edit the local copy of the network_data.yaml file and modify the parameters to suit your IPv6 networking requirements. For example, the External network contains the following default network details: name is the only mandatory value; however, you can also use name_lower to normalize names for readability. For example, changing InternalApi to internal_api . vip: true creates a virtual IP address (VIP) on the new network, and the remaining parameters set the defaults for the new network. ipv6 defines whether to enable IPv6. ipv6_subnet , ipv6_allocation_pools , and gateway_ipv6 set the default IPv6 subnet, IP range, and gateway for the network. Include the custom network_data file with your deployment using the -n option. Without the -n option, the deployment command uses the default network details. 2.4.2. Network Isolation The overcloud assigns services to the provisioning network by default. However, Red Hat OpenStack Platform director can divide overcloud network traffic into isolated networks. These networks are defined in a file that you include in the deployment command line, by default named network_data.yaml . When services are listening on networks using IPv6 addresses, you must provide parameter defaults to indicate the service is running on an IPv6 network. The network each service runs on is defined by the file network/service_net_map.yaml , and may be overridden by declaring parameter defaults for individual ServiceNetMap entries. These services require the parameter default to be set in an environment file: The environments/network-isolation.j2.yaml file in the director's core Heat templates is a Jinja2 file that defines all ports and VIPs for each IPv6 network in your composable network file. When rendered, it results in a network-isolation.yaml file in the same location with the full resource registry. 2.4.3. Configuring Interfaces The Overcloud requires a set of network interface templates.
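You can confirm which IPv6 network environment files are available on the undercloud by listing the core Heat template collection referenced earlier; this assumes the default installation path: USD ls /usr/share/openstack-tripleo-heat-templates/environments/net-*v6*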
The director contains a set of Jinja2-based Heat templates, which render based on your network_data file: NIC directory Description Environment file single-nic-vlans Single NIC ( nic1 ) with control plane and VLANs attached to default Open vSwitch bridge. environments/net-single-nic-with-vlans-v6.j2.yaml single-nic-linux-bridge-vlans Single NIC ( nic1 ) with control plane and VLANs attached to default Linux bridge. environments/net-single-nic-linux-bridge-with-vlans-v6.yaml bond-with-vlans Control plane attached to nic1 . Default Open vSwitch bridge with bonded NIC configuration ( nic2 and nic3 ) and VLANs attached. environments/net-bond-with-vlans-v6.yaml multiple-nics Control plane attached to nic1 . Assigns each sequential NIC to each network defined in the network_data file. By default, this is Storage to nic2 , Storage Management to nic3 , Internal API to nic4 , Tenant to nic5 on the br-tenant bridge, and External to nic6 on the default Open vSwitch bridge. environments/net-multiple-nics-v6.yaml For this example, we use the single-nic-vlans template collection. 2.4.4. Configuring the IPv6 Isolated Network The default Heat template collection contains a Jinja2-based environment file for the default networking configuration. This file is environments/network-environment.j2.yaml . When rendered with our network_data file, it results in a standard YAML file called network-environment.yaml . Some parts of this file might require overrides, which is why you should create your own custom network-environment.yaml file. For this scenario, create a custom environment file ( /home/stack/network-environment.yaml ) with the following details: The parameter_defaults section contains the customization for certain services that remain on IPv4. 2.5. Completing Overcloud Configuration This completes the necessary steps to configure an IPv6-based Overcloud. The next chapter uses the openstack overcloud deploy command to create the Overcloud using the configuration from this chapter. | [
"source ~/stackrc",
"sudo ip link set dev eth0 up; sudo ip addr add 2001:db8::1/64 dev eth0",
"sudo ovs-vsctl add-port br-ctlplane vlan100 tag=100 -- set interface vlan100 type=internal sudo ip l set dev vlan100 up; sudo ip addr add 2001:db8::1/64 dev vlan100",
"ip addr",
"IPV6INIT=yes IPV6ADDR=2001:db8::1/64",
"{ \"nodes\":[ { \"mac\":[ \"bb:bb:bb:bb:bb:bb\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"pxe_ipmitool\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.205\" }, { \"mac\":[ \"cc:cc:cc:cc:cc:cc\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"pxe_ipmitool\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.206\" }, { \"mac\":[ \"dd:dd:dd:dd:dd:dd\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"pxe_ipmitool\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.207\" }, { \"mac\":[ \"ee:ee:ee:ee:ee:ee\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"pxe_ipmitool\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.208\" } { \"mac\":[ \"ff:ff:ff:ff:ff:ff\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"pxe_ipmitool\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.209\" } { \"mac\":[ \"gg:gg:gg:gg:gg:gg\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"pxe_ipmitool\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.210\" } ] }",
"openstack overcloud node import ~/instackenv.json",
"openstack overcloud node configure",
"openstack overcloud node introspect --all-manageable",
"openstack baremetal node list",
"openstack baremetal node set 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 --property capabilities=\"profile:control,boot_option:local\" openstack baremetal node set 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a --property capabilities=\"profile:control,boot_option:local\" openstack baremetal node set 5e3b2f50-fcd9-4404-b0a2-59d79924b38e --property capabilities=\"profile:control,boot_option:local\" openstack baremetal node set 484587b2-b3b3-40d5-925b-a26a2fa3036f --property capabilities=\"profile:compute,boot_option:local\" openstack baremetal node set d010460b-38f2-4800-9cc4-d69f0d067efe --property capabilities=\"profile:compute,boot_option:local\" openstack baremetal node set d930e613-3e14-44b9-8240-4f3559801ea6 --property capabilities=\"profile:compute,boot_option:local\"",
"cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml /home/stack/.",
"- name: External vip: true name_lower: external vlan: 10 ipv6: true ipv6_subnet: '2001:db8:fd00:1000::/64' ipv6_allocation_pools: [{'start': '2001:db8:fd00:1000::10', 'end': '2001:db8:fd00:1000:ffff:ffff:ffff:fffe'}] gateway_ipv6: '2001:db8:fd00:1000::1'",
"parameter_defaults: # Enable IPv6 for Ceph. CephIPv6: True # Enable IPv6 for Corosync. This is required when Corosync is using an IPv6 IP in the cluster. CorosyncIPv6: True # Enable IPv6 for MongoDB. This is required when MongoDB is using an IPv6 IP. MongoDbIPv6: True # Enable various IPv6 features in Nova. NovaIPv6: True # Enable IPv6 environment for RabbitMQ. RabbitIPv6: True # Enable IPv6 environment for Memcached. MemcachedIPv6: True # Enable IPv6 environment for MySQL. MysqlIPv6: True # Enable IPv6 environment for Manila ManilaIPv6: True # Enable IPv6 environment for Redis. RedisIPv6: True",
"parameter_defaults: DnsServers: [\"8.8.8.8\",\"8.8.4.4\"] ControlPlaneDefaultRoute: 192.0.2.1 ControlPlaneSubnetCidr: \"24\" EC2MetadataIp: 192.0.2.1"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/ipv6_networking_for_the_overcloud/configuring_the_overcloud_before_creation |
Chapter 11. Network configuration | Chapter 11. Network configuration This section describes the basics of network configuration using the Assisted Installer. 11.1. Cluster networking There are various network types and addresses used by OpenShift and listed in the table below. Type DNS Description clusterNetwork The IP address pools from which Pod IP addresses are allocated. serviceNetwork The IP address pool for services. machineNetwork The IP address blocks for machines forming the cluster. apiVIP api.<clustername.clusterdomain> The VIP to use for API communication. This setting must either be provided or pre-configured in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address. apiVIPs api.<clustername.clusterdomain> The VIPs to use for API communication. This setting must either be provided or pre-configured in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting. ingressVIP *.apps.<clustername.clusterdomain> The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address. ingressVIPs *.apps.<clustername.clusterdomain> The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting. Note OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings to accept multiple IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings will replace apiVIP and ingressVIP , but you must set both the new and old settings when modifying the configuration using the API. Depending on the desired network stack, you can choose different network controllers. Currently, the Assisted Service can deploy OpenShift Container Platform clusters using one of the following configurations: IPv4 IPv6 Dual-stack (IPv4 + IPv6) Supported network controllers depend on the selected stack and are summarized in the table below. For a detailed Container Network Interface (CNI) network provider feature comparison, refer to the OCP Networking documentation . Stack SDN OVN IPv4 Yes Yes IPv6 No Yes Dual-stack No Yes Note OVN is the default Container Network Interface (CNI) in OpenShift Container Platform 4.12 and later releases. 11.1.1. Limitations 11.1.1.1. SDN With Single Node OpenShift (SNO), the SDN controller is not supported. The SDN controller does not support IPv6. 11.1.1.2. OVN-Kubernetes Please see the OVN-Kubernetes limitations section in the OCP documentation . 11.1.2. Cluster network The cluster network is a network from which every Pod deployed in the cluster gets its IP address. Given that the workload may live across many nodes forming the cluster, it's important for the network provider to be able to easily find an individual node based on the Pod's IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix . 
An example of how a cluster may assign addresses for the multi-node cluster: --- clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 --- Creating a 3-node cluster using the snippet above may create the following network topology: Pods scheduled in node #1 get IPs from 10.128.0.0/23 Pods scheduled in node #2 get IPs from 10.128.2.0/23 Pods scheduled in node #3 get IPs from 10.128.4.0/23 Explaining OVN-K8s internals is out of scope for this document, but the pattern described above provides a way to route Pod-to-Pod traffic between different nodes without keeping a big list of mappings between Pods and their corresponding nodes. 11.1.3. Machine network The machine network is a network used by all the hosts forming the cluster to communicate with each other. This is also the subnet that must include the API and Ingress VIPs. 11.1.4. SNO compared to multi-node cluster Depending on whether you are deploying a Single Node OpenShift or a multi-node cluster, different values are mandatory. The table below explains this in more detail. Parameter SNO Multi-Node Cluster with DHCP mode Multi-Node Cluster without DHCP mode clusterNetwork Required Required Required serviceNetwork Required Required Required machineNetwork Auto-assign possible (*) Auto-assign possible (*) Auto-assign possible (*) apiVIP Forbidden Forbidden Required apiVIPs Forbidden Forbidden Required in 4.12 and later releases ingressVIP Forbidden Forbidden Required ingressVIPs Forbidden Forbidden Required in 4.12 and later releases (*) Auto assignment of the machine network CIDR happens if there is only a single host network. Otherwise, you need to specify it explicitly. 11.1.5. Air-gapped environments The workflow for deploying a cluster without Internet access has some prerequisites which are out of scope of this document. You may consult the Zero Touch Provisioning the hard way Git repository for some insights. 11.2. DHCP VIP allocation The VIP DHCP allocation is a feature allowing users to skip the requirement of manually providing virtual IPs for API and Ingress by leveraging the ability of a service to automatically assign those IP addresses from the DHCP server. If you enable the feature, instead of using api_vips and ingress_vips from the cluster configuration, the service sends a lease allocation request and, based on the reply, it uses the resulting VIPs. The service will allocate the IP addresses from the Machine Network. Please note this is not an OpenShift Container Platform feature and it has been implemented in the Assisted Service to make the configuration easier. 11.2.1. Example payload to enable autoallocation --- { "vip_dhcp_allocation": true, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ], "machine_networks": [ { "cidr": "192.168.127.0/24" } ] } --- 11.2.2. Example payload to disable autoallocation --- { "api_vips": [ { "ip": "192.168.127.100" } ], "ingress_vips": [ { "ip": "192.168.127.101" } ], "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ] } --- 11.3. Additional resources Bare metal IPI documentation provides additional explanation of the syntax for the VIP addresses. 11.4. 
Understanding differences between User Managed Networking and Cluster Managed Networking User managed networking is a feature in the Assisted Installer that allows customers with non-standard network topologies to deploy OpenShift Container Platform clusters. Examples include: Customers with an external load balancer who do not want to use keepalived and VRRP for handling VIP addresses. Deployments with cluster nodes distributed across many distinct L2 network segments. 11.4.1. Validations There are various network validations happening in the Assisted Installer before it allows the installation to start. When you enable User Managed Networking, the following validations change: L3 connectivity check (ICMP) is performed instead of L2 check (ARP) 11.5. Static network configuration You may use static network configurations when generating or updating the discovery ISO. 11.5.1. Prerequisites You are familiar with NMState . 11.5.2. NMState configuration The NMState file in YAML format specifies the desired network configuration for the host. It has the logical names of the interfaces that will be replaced with the actual name of the interface at discovery time. 11.5.2.1. Example of NMState configuration --- dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254 --- 11.5.3. MAC interface mapping MAC interface map is an attribute that maps logical interfaces defined in the NMState configuration with the actual interfaces present on the host. The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should only contain an entry for parent interfaces. 11.5.3.1. Example of MAC interface mapping --- mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ] --- 11.5.4. Additional NMState configuration examples The examples below are only meant to show a partial configuration. They are not meant to be used as-is, and you should always adjust to the environment where they will be used. If used incorrectly, they may leave your machines with no network connectivity. 11.5.4.1. Tagged VLAN --- interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 --- 11.5.4.2. Network bond --- interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: all_slaves_active: delivered miimon: "140" slaves: - eth0 - eth1 name: bond0 state: up type: bond --- 11.6. Applying a static network configuration with the API You can apply a static network configuration using the Assisted Installer API. Prerequisites You have created an infrastructure environment using the API or have created a cluster using the UI. You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID . You have credentials to use when accessing the API and have exported a token as $API_TOKEN in your shell. 
You have YAML files with a static network configuration available as server-a.yaml and server-b.yaml . Procedure Create a temporary file /tmp/request-body.txt with the API request: --- jq -n --arg NMSTATE_YAML1 "$(cat server-a.yaml)" --arg NMSTATE_YAML2 "$(cat server-b.yaml)" \ '{ "static_network_config": [ { "network_yaml": $NMSTATE_YAML1, "mac_interface_map": [{"mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0"}, {"mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth1"}] }, { "network_yaml": $NMSTATE_YAML2, "mac_interface_map": [{"mac_address": "02:00:00:9f:85:eb", "logical_nic_name": "eth1"}, {"mac_address": "02:00:00:c8:be:9b", "logical_nic_name": "eth0"}] } ] }' >> /tmp/request-body.txt --- Refresh the API token: $ source refresh-token Send the request to the Assisted Service API endpoint: --- curl -H "Content-Type: application/json" \ -X PATCH -d @/tmp/request-body.txt \ -H "Authorization: Bearer ${API_TOKEN}" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID --- 11.7. Additional resources Applying a static network configuration with the UI 11.8. Converting to dual-stack networking Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets. 11.8.1. Prerequisites You are familiar with OVN-K8s documentation 11.8.2. Example payload for Single Node OpenShift --- { "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] } --- 11.8.3. Example payload for an OpenShift Container Platform cluster consisting of many nodes --- { "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "api_vips": [ { "ip": "192.168.127.100" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7334" } ], "ingress_vips": [ { "ip": "192.168.127.101" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7335" } ], "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] } --- 11.8.4. Limitations The api_vips IP address and ingress_vips IP address settings must be of the primary IP address family when using dual-stack networking, which must be IPv4 addresses. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. Red Hat supports dual-stack networking with IPv4 as the primary IP address family and IPv6 as the secondary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values. 11.9. Additional resources Understanding OpenShift networking OpenShift SDN - CNI network provider OVN-Kubernetes - CNI network provider Dual-stack Service configuration scenarios Installing on bare metal OCP . Cluster Network Operator configuration . 
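To confirm that the static network configuration was stored, you can read the infrastructure environment back from the same endpoint. This verification step is a suggestion rather than part of the documented procedure, and it assumes the GET variant of the endpoint and the jq tool are available:

```
$ source refresh-token
$ curl -s -H "Authorization: Bearer ${API_TOKEN}" \
  https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID \
  | jq '.static_network_config'
```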
"--- clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 ---",
"--- { \"vip_dhcp_allocation\": true, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ], \"machine_networks\": [ { \"cidr\": \"192.168.127.0/24\" } ] } ---",
"--- { \"api_vips\": [ { \"ip\": \"192.168.127.100\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" } ], \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ] } ---",
"--- dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254 ---",
"--- mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ] ---",
"--- interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 ---",
"--- interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: all_slaves_active: delivered miimon: \"140\" slaves: - eth0 - eth1 name: bond0 state: up type: bond ---",
"--- jq -n --arg NMSTATE_YAML1 \"USD(cat server-a.yaml)\" --arg NMSTATE_YAML2 \"USD(cat server-b.yaml)\" '{ \"static_network_config\": [ { \"network_yaml\": USDNMSTATE_YAML1, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:2c:23:a5\", \"logical_nic_name\": \"eth0\"}, {\"mac_address\": \"02:00:00:68:73:dc\", \"logical_nic_name\": \"eth1\"}] }, { \"network_yaml\": USDNMSTATE_YAML2, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:9f:85:eb\", \"logical_nic_name\": \"eth1\"}, {\"mac_address\": \"02:00:00:c8:be:9b\", \"logical_nic_name\": \"eth0\"}] } ] }' >> /tmp/request-body.txt ---",
"source refresh-token",
"--- curl -H \"Content-Type: application/json\" -X PATCH -d @/tmp/request-body.txt -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID ---",
"--- { \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] } ---",
"--- { \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"api_vips\": [ { \"ip\": \"192.168.127.100\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7334\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7335\" } ], \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] } ---"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/assisted_installer_for_openshift_container_platform/assembly_network-configuration |
Chapter 1. Overview of Insights for Red Hat Enterprise Linux advisor service assessment and monitoring | Chapter 1. Overview of Insights for Red Hat Enterprise Linux advisor service assessment and monitoring Use the advisor service to assess and monitor the health of your Red Hat Enterprise Linux (RHEL) infrastructure. Whether you are concerned with individual or groups of systems, or with your whole infrastructure, be aware of the exposure of your systems to configuration issues that can affect availability, stability, performance, and security. After installing and registering the Insights for Red Hat Enterprise Linux client, the client runs daily to check systems against a database of Recommendations , which are sets of conditions that can leave your RHEL systems at risk. Your data is then uploaded to the Operations > Advisor > Recommendations page where you can perform the following actions: See all of the recommendations for your entire RHEL infrastructure. Use robust filtering capabilities to refine your results to those recommendations, systems, groups, or workloads that are of greatest concern to you, including SAP workloads, Satellite host collections, and custom tags. Learn more about individual recommendations, details about the risks they present, and get resolutions tailored to your individual systems. Share results with other stakeholders. For more information, see Generating Advisor Service Reports . Create and manage remediation playbooks to fix issues right from the Insights for Red Hat Enterprise Linux application. For more information, see Red Hat Insights Remediations Guide . 1.1. User Access settings in the Red Hat Hybrid Cloud Console User Access is the Red Hat implementation of role-based access control (RBAC). Your Organization Administrator uses User Access to configure what users can see and do on the Red Hat Hybrid Cloud Console (the console): Control user access by organizing roles instead of assigning permissions individually to users. Create groups that include roles and their corresponding permissions. Assign users to these groups, allowing them to inherit the permissions associated with their group's roles. 1.1.1. Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 1.1.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Groups to see the current groups in your account. This view is limited to the Organization Administrator. 1.1.1.2. Predefined roles assigned to groups The Default access group contains many of the predefined roles. Because all users in your organization are members of the Default access group, they inherit all permissions assigned to that group. The Default admin access group includes many (but not all) predefined roles that provide update and delete permissions. 
The roles in this group usually include administrator in their name. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Roles to see the current roles in your account. You can see how many groups each role is assigned to. This view is limited to the Organization Administrator. 1.1.2. Access permissions The Prerequisites for each procedure list which predefined role provides the permissions you must have. As a user, you can navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > My User Access to view the roles and application permissions currently inherited by you. If you try to access Insights for Red Hat Enterprise Linux features and see a message that you do not have permission to perform this action, you must obtain additional permissions. The Organization Administrator or the User Access administrator for your organization configures those permissions. Use the Red Hat Hybrid Cloud Console Virtual Assistant to ask "Contact my Organization Administrator". The assistant sends an email to the Organization Administrator on your behalf. Additional resources For more information about user access and permissions, see User Access Configuration Guide for Role-based Access Control (RBAC) . 1.1.3. User Access roles for advisor service users The following roles enable standard or enhanced access to remediations features in Insights for Red Hat Enterprise Linux: RHEL Advisor administrator. Perform any available operation against any Insights for Red Hat Enterprise Linux advisor-service resource. RHEL Advisor viewer. Be able to read advisor data. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_rhel_configuration_issues_using_the_red_hat_insights_advisor_service/assembly-adv-assess-overview |
Chapter 10. Provisioning Virtual Machines on Red Hat Virtualization | Chapter 10. Provisioning Virtual Machines on Red Hat Virtualization Red Hat Virtualization is an enterprise-grade server and desktop virtualization platform. In Red Hat Satellite, you can manage virtualization functions through Red Hat Virtualization's REST API. This includes creating virtual machines and controlling their power states. You can use Red Hat Virtualization provisioning to create virtual machines over a network connection or from an existing image. You can use cloud-init to configure the virtual machines that you provision. Using cloud-init avoids any special configuration on the network, such as a managed DHCP and TFTP, to finish the installation of the virtual machine. This method does not require Satellite to connect to the provisioned virtual machine using SSH to run the finish script. Prerequisites You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in the Content Management Guide . Provide an activation key for host registration. For more information, see Creating An Activation Key in the Content Management guide. A Capsule Server managing a logical network on the Red Hat Virtualization environment. Ensure no other DHCP services run on this network to avoid conflicts with Capsule Server. For more information, see Configuring Networking in the Provisioning guide. An existing template, other than the blank template, if you want to use image-based provisioning. For more information about creating templates for virtual machines, see Templates in the Red Hat Virtualization Virtual Machine Management Guide . An administration-like user on Red Hat Virtualization for communication with Satellite Server. Do not use the admin@internal user for this communication. Instead, create a new Red Hat Virtualization user with the following permissions: System > Configure System > Login Permissions Network > Configure vNIC Profile > Create Network > Configure vNIC Profile > Edit Properties Network > Configure vNIC Profile > Delete Network > Configure vNIC Profile > Assign vNIC Profile to VM Network > Configure vNIC Profile > Assign vNIC Profile to Template Template > Provisioning Operations > Import/Export VM > Provisioning Operations > Create VM > Provisioning Operations > Delete VM > Provisioning Operations > Import/Export VM > Provisioning Operations > Edit Storage Disk > Provisioning Operations > Create Disk > Disk Profile > Attach Disk Profile For more information about how to create a user and add permissions in Red Hat Virtualization, see Administering User Tasks From the Administration Portal in the Red Hat Virtualization Administration Guide . 10.1. Adding the Red Hat Virtualization Connection to Satellite Server Use this procedure to add Red Hat Virtualization as a compute resource in Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource . In the Name field, enter a name for the new compute resource. From the Provider list, select RHV . In the Description field, enter a description for the compute resource. In the URL field, enter the connection URL for the Red Hat Virtualization Manager's API in the following form: https://rhv.example.com/ovirt-engine/api/v4 . In the User field, enter the name of a user with permissions to access Red Hat Virtualization Manager's resources. 
In the Password field, enter the password of the user. Click Load Datacenters to populate the Datacenter list with data centers from your Red Hat Virtualization environment. From the Datacenter list, select a data center. From the Quota ID list, select a quota to limit resources available to Satellite. In the X509 Certification Authorities field, enter the certificate authority for SSL/TLS access. Alternatively, if you leave the field blank, a self-signed certificate is generated on the first API request by the server. Click the Locations tab and select the location you want to use. Click the Organizations tab and select the organization you want to use. Click Submit to save the compute resource. CLI procedure Enter the hammer compute-resource create command with Ovirt for --provider and the name of the data center you want to use for --datacenter . 10.2. Preparing Cloud-init Images in Red Hat Virtualization To use cloud-init during provisioning, you must prepare an image with cloud-init installed in Red Hat Virtualization, and then import the image to Satellite to use for provisioning. Procedure In Red Hat Virtualization, create a virtual machine to use for image-based provisioning in Satellite. On the virtual machine, install cloud-init : To the /etc/cloud/cloud.cfg file, add the following information: In Red Hat Virtualization, create an image from this virtual machine. When you add this image to Satellite, ensure that you select the User Data checkbox. 10.3. Adding Red Hat Virtualization Images to Satellite Server To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your Satellite Server. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and click the name of the Red Hat Virtualization connection. Click Create Image . In the Name field, enter a name for the image. From the Operating System list, select the base operating system of the image. From the Architecture list, select the operating system architecture. In the Username field, enter the SSH user name for image access. This is normally the root user. In the Password field, enter the SSH password for image access. From the Image list, select an image from the Red Hat Virtualization compute resource. Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data. Click Submit to save the image details. CLI procedure Create the image with the hammer compute-resource image create command. Use the --uuid option to store the template UUID on the Red Hat Virtualization server. 10.4. Preparing a Cloud-init Template Procedure In the Satellite web UI, navigate to Hosts > Provisioning Templates , and click Create Template . In the Name field, enter a name for the template. In the Editor field, enter the following template details: Click the Type tab and from the Type list, select User data template . Click the Association tab, and from the Applicable Operating Systems list, select the operating system that you want associate with the template. Click the Locations tab, and from the Locations list, select the location that you want to associate with the template. Click the Organizations tab, and from the Organization list, select the organization that you want to associate with the template. Click Submit . 
In the Satellite web UI, navigate to Hosts > Operating Systems , and select the operating system you want to associate with the template. Click the Templates tab, and from the User data template list, select the name of the new template. Click Submit . 10.5. Adding Red Hat Virtualization Details to a Compute Profile Use this procedure to add Red Hat Virtualization hardware settings to a compute profile. When you create a host on Red Hat Virtualization using this compute profile, these settings are automatically populated. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles . In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile , enter a Name , and click Submit . Click the name of the Red Hat Virtualization compute resource. From the Cluster list, select the target host cluster in the Red Hat Virtualization environment. From the Template list, select the RHV template to use for the Cores and Memory settings. In the Cores field, enter the number of CPU cores to allocate to the new host. In the Memory field, enter the amount of memory to allocate to the new host. From the Image list, select the image to use for image-based provisioning. In the Network Interfaces area, enter the network parameters for the host's network interface. You can create multiple network interfaces. However, at least one interface must point to a Capsule-managed network. For each network interface, enter the following details: In the Name field, enter the name of the network interface. From the Network list, select the logical network that you want to use. In the Storage area, enter the storage parameters for the host. You can create multiple volumes for the host. For each volume, enter the following details: In the Size (GB) field, enter the size, in GB, for the new volume. From the Storage domain list, select the storage domain for the volume. From the Preallocate disk list, select either thin provisioning or preallocation of the full disk. From the Bootable list, select whether you want a bootable or non-bootable volume. Click Submit to save the compute profile. CLI procedure To create a compute profile, enter the following command: To set the values for the compute profile, enter the following command: 10.6. Creating Hosts on Red Hat Virtualization In Satellite, you can use Red Hat Virtualization provisioning to create hosts over a network connection or from an existing image: If you want to create a host over a network connection, the new host must be able to access either Satellite Server's integrated Capsule or an external Capsule Server on a Red Hat Virtualization virtual network, so that the host has access to PXE provisioning services. This new host entry triggers the Red Hat Virtualization server to create and start a virtual machine. If the virtual machine detects the defined Capsule Server through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system. If you want to create a host with an existing image, the new host entry triggers the Red Hat Virtualization server to create the virtual machine using a pre-existing image as a basis for the new volume. To use the CLI instead of the Satellite web UI, see the CLI procedure . DHCP Conflicts For network-based provisioning, if you use a virtual network on the Red Hat Virtualization server for provisioning, select a network that does not provide DHCP assignments. 
A network that provides DHCP assignments causes DHCP conflicts with Satellite Server when booting new hosts. Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Click the Organization and Location tabs to ensure that the provisioning context is automatically set to the current context. From the Host Group list, select the host group that you want to use to populate the form. From the Deploy on list, select the Red Hat Virtualization connection. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings. Click the Interface tab and click Edit on the host's interface. Verify that the fields are automatically populated, particularly the following items: The Name from the Host tab becomes the DNS name . Satellite Server automatically assigns an IP address for the new host. The MAC address field is blank. The Red Hat Virtualization server assigns a MAC address to the host. The Managed , Primary , and Provision options are automatically selected for the first interface on the host. If not, select them. The Red Hat Virtualization-specific fields are populated with settings from your compute profile. Modify these settings if required. Click the Operating System tab, and confirm that all fields automatically contain values. Select the Provisioning Method that you want to use: For network-based provisioning, click Network Based . For image-based provisioning, click Image Based . Click Resolve in Provisioning templates to check that the new host can identify the right provisioning templates to use. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host entry. CLI procedure To use network-based provisioning, create the host with the hammer host create command and include --provision-method build . Replace the values in the following example with the appropriate values for your environment. To use image-based provisioning, create the host with the hammer host create command and include --provision-method image . Replace the values in the following example with the appropriate values for your environment. For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command. 
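After you submit the host entry, one hedged way to follow the new host from the CLI is the hammer host info command; this check is a convenience, not part of the documented procedure, and the host name below is the example name used in this chapter:

```
$ hammer host info --name "RHV-vm1"
```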
"hammer compute-resource create --name \" My_RHV \" --provider \"Ovirt\" --description \"RHV server at rhv.example.com \" --url \" https://rhv.example.com/ovirt-engine/api \" --user \" Satellite_User \" --password \" My_Password \" --locations \"New York\" --organizations \" My_Organization \" --datacenter \" My_Datacenter \"",
"install cloud-init",
"datasource_list: [\"NoCloud\", \"ConfigDrive\"]",
"hammer compute-resource image create --name \" RHV_Image \" --compute-resource \" My_RHV \" --operatingsystem \"RedHat version \" --architecture \"x86_64\" --username root --uuid \"9788910c-4030-4ae0-bad7-603375dd72b1\" \\",
"<%# kind: user_data name: Cloud-init -%> #cloud-config hostname: <%= @host.shortname %> <%# Allow user to specify additional SSH key as host parameter -%> <% if @host.params['sshkey'].present? || @host.params['remote_execution_ssh_keys'].present? -%> ssh_authorized_keys: <% if @host.params['sshkey'].present? -%> - <%= @host.params['sshkey'] %> <% end -%> <% if @host.params['remote_execution_ssh_keys'].present? -%> <% @host.params['remote_execution_ssh_keys'].each do |key| -%> - <%= key %> <% end -%> <% end -%> <% end -%> runcmd: - | #!/bin/bash <%= indent 4 do snippet 'subscription_manager_registration' end %> <% if @host.info['parameters']['realm'] && @host.realm && @host.realm.realm_type == 'Red Hat Identity Management' -%> <%= indent 4 do snippet 'freeipa_register' end %> <% end -%> <% unless @host.operatingsystem.atomic? -%> # update all the base packages from the updates repository yum -t -y -e 0 update <% end -%> <% # safemode renderer does not support unary negation non_atomic = @host.operatingsystem.atomic? ? false : true pm_set = @host.puppetmaster.empty? ? false : true puppet_enabled = non_atomic && (pm_set || @host.params['force-puppet']) %> <% if puppet_enabled %> yum install -y puppet cat > /etc/puppet/puppet.conf << EOF <%= indent 4 do snippet 'puppet.conf' end %> EOF # Setup puppet to run on system reboot /sbin/chkconfig --level 345 puppet on /usr/bin/puppet agent --config /etc/puppet/puppet.conf --onetime --tags no_such_tag <%= @host.puppetmaster.blank? ? '' : \"--server #{@host.puppetmaster}\" %> --no-daemonize /sbin/service puppet start <% end -%> phone_home: url: <%= foreman_url('built') %> post: [] tries: 10pp",
"hammer compute-profile create --name \"Red Hat Virtualization CP\"",
"hammer compute-profile values create --compute-profile \"Red Hat Virtualization CP\" --compute-resource \" My_RHV \" --interface \"compute_interface= Interface_Type ,compute_name=eth0,compute_network=satnetwork\" --volume \"size_gb=20G,storage_domain=Data,bootable=true\" --compute-attributes \"cluster=Default,cores=1,memory=1073741824,start=true\"\"",
"hammer host create --name \"RHV-vm1\" --organization \" My_Organization \" --location \"New York\" --hostgroup \"Base\" --compute-resource \" My_RHV \" --provision-method build --build true --enabled true --managed true --interface \"managed=true,primary=true,provision=true,compute_name=eth0,compute_network=satnetwork\" --compute-attributes=\"cluster=Default,cores=1,memory=1073741824,start=true\" --volume=\"size_gb=20G,storage_domain=Data,bootable=true\"",
"hammer host create --name \"RHV-vm2\" --organization \" My_Organization \" --location \"New York\" --hostgroup \"Base\" --compute-resource \" My_RHV \" --provision-method image --image \" RHV_Image \" --enabled true --managed true --interface \"managed=true,primary=true,provision=true,compute_name=eth0,compute_network=satnetwork\" --compute-attributes=\"cluster=Default,cores=1,memory=1073741824,start=true\" --volume=\"size_gb=20G,storage_domain=Data,bootable=true\""
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/provisioning_virtual_machines_on_ovirt_provisioning |
Chapter 75. Bindy | Chapter 75. Bindy The goal of this component is to allow the parsing/binding of non-structured data (or to be more precise non-XML data) to/from Java Beans that have binding mappings defined with annotations. Using Bindy, you can bind data from sources such as : CSV records, Fixed-length records, FIX messages, or almost any other non-structured data to one or many Plain Old Java Objects (POJOs). Bindy converts the data according to the type of the Java property. POJOs can be linked together with one-to-many relationships available in some cases. Moreover, for data types like Date, Double, Float, Integer, Short, Long and BigDecimal, you can provide the pattern to apply during the formatting of the property. For the BigDecimal numbers, you can also define the precision and the decimal or grouping separators. Type Format Type Pattern example Link Date DateFormat dd-MM-yyyy https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/SimpleDateFormat.html Decimal* DecimalFormat ##.## https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/DecimalFormat.html Where Decimal = Double, Integer, Float, Short, Long Format supported This first release only supports comma-separated value fields and key-value pair fields (e.g. : FIX messages). To work with camel-bindy, you must first define your model in a package (e.g. com.acme.model) and for each model class (e.g. Order, Client, Instrument, ... ) add the required annotations (described hereafter) to the Class or field. Multiple models As you configure bindy using class names instead of package names you can put multiple models in the same package. 75.1. Options The Bindy dataformat supports 5 options, which are listed below. Name Default Java Type Description type Enum Required Whether to use Csv, Fixed, or KeyValue. Enum values: Csv Fixed KeyValue classType String Name of model class to use. locale String To configure a default locale to use, such as us for united states. To use the JVM platform default locale then use the name default. unwrapSingleInstance Boolean When unmarshalling should a single instance be unwrapped and returned instead of wrapped in a java.util.List. allowEmptyStream Boolean Whether to allow empty streams in the unmarshal process. If true, no exception will be thrown when a body without records is provided. 75.2. Annotations The annotations created allow you to map different concepts of your model to the POJO like: Type of record (CSV, key value pair (e.g. FIX message), fixed length ... ), Link (to link object in another object), DataField and their properties (int, type, ... ), KeyValuePairField (for key = value format like we have in FIX financial messages), Section (to identify header, body and footer section), OneToMany, BindyConverter, FormatFactories This section will describe them. 75.2.1. 1. CsvRecord The CsvRecord annotation is used to identify the root class of the model. It represents a record = "a line of a CSV file" and can be linked to several child model classes. Annotation name Record type Level CsvRecord CSV Class Parameter name Type Required Default value Info separator String [✓] Separator used to split a record in tokens (mandatory) - can be ',' or ';' or 'anything'. The only whitespace character supported is tab (\t). Other whitespace characters (spaces) are not supported. This value is interpreted as a regular expression. If you want to use a sign which has a special meaning in regular expressions, e.g. the '|' sign, then you have to mask it, like '\\|'. 
allowEmptyStream boolean false The allowEmptyStream parameter will allow to process an empty stream for a CSV file. autospanLine boolean false Last record spans rest of line (optional) - if enabled then the last column is auto spanned to end of line, for example if it is a comment, etc this allows the line to contain all characters, also the delimiter char. crlf String WINDOWS Character to be used to add a carriage return after each record (optional) - allow to define the carriage return character to use. If you specify a value other than the three listed before, the value you enter (custom) will be used as the CRLF character(s). Three values can be used : WINDOWS, UNIX, MAC, or custom. endWithLineBreak boolean true The endWithLineBreak parameter flags if the CSV file should end with a line break or not (optional) generateHeaderColumns boolean false The generateHeaderColumns parameter allows to add in the generated CSV the header containing the names of the columns isOrdered boolean false Indicates if the message must be ordered in output name String Name describing the record (optional) quote String " Whether to marshal columns with the given quote character (optional) - allow to specify a quote character of the fields when CSV is generated. This annotation is associated to the root class of the model and must be declared one time. quoting boolean false Indicates if the values (and headers) must be quoted when marshaling (optional) quotingEscaped boolean false Indicates if the values must be escaped when quoting (optional) removeQuotes boolean true The removeQuotes parameter flags if unmarshalling should try to remove quotes for each field skipField boolean false The skipField parameter will allow to skip fields of a CSV file. If some fields are not necessary, they can be skipped. skipFirstLine boolean false The skipFirstLine parameter will allow to skip or not the first line of a CSV file. This line often contains the column definitions. case 1 : separator = ',' The separator used to segregate the fields in the CSV record is , : @CsvRecord( separator = "," ) public class Order { } case 2 : separator = ';' Compared to the previous case, the separator here is ; instead of , : @CsvRecord( separator = ";" ) public class Order { } case 3 : separator = '|' Compared to the previous case, the separator here is | instead of ; : @CsvRecord( separator = "\\|" ) public class Order { } case 4 : separator = '\",\"' Applies for Camel 2.8.2 or older When the field to be parsed of the CSV record contains , or ; which is also used as separator, we should find another strategy to tell camel bindy how to handle this case. To define the field containing the data with a comma, you will use single or double quotes as delimiter (e.g : '10', 'Street 10, NY', 'USA' or "10", "Street 10, NY", "USA"). In this case, the first and last characters of the line, which are single or double quotes, will be removed by bindy. @CsvRecord( separator = "\",\"" ) public class Order { } Bindy automatically detects if the record is enclosed with either single or double quotes and automatically removes those quotes when unmarshalling from CSV to Object. 
Therefore do not include the quotes in the separator, but simply do as below: @CsvRecord( separator = "," ) public class Order { } Notice that if you want to marshal from Object to CSV and use quotes, then you need to specify which quote character to use, using the quote attribute on the @CsvRecord as shown below: @CsvRecord( separator = ",", quote = "\"" ) public class Order { } case 5 : separator & skipFirstLine The feature is interesting when the client wants to have in the first line of the file the names of the data fields. To inform bindy that this first line must be skipped during the parsing process, we use the attribute : @CsvRecord(separator = ",", skipFirstLine = true) public class Order { } case 6 : generateHeaderColumns To add the column names at the first line of the generated CSV, the attribute generateHeaderColumns must be set to true in the annotation like this : @CsvRecord( generateHeaderColumns = true ) public class Order { } As a result, Bindy during the marshalling process will generate a CSV whose first line contains the names of the columns. case 7 : carriage return If the platform where camel-bindy will run is not Windows but Macintosh or Unix, then you can change the crlf property like this. Three values are available : WINDOWS, UNIX or MAC @CsvRecord(separator = ",", crlf="MAC") public class Order { } Additionally, if for some reason you need to add a different line ending character, you can opt to specify it using the crlf parameter. In the following example, we can end the line with a comma followed by the newline character: @CsvRecord(separator = ",", crlf=",\n") public class Order { } case 8 : isOrdered Sometimes, the order to follow during the creation of the CSV record from the model is different from the order used during the parsing. Then, in this case, we can use the attribute isOrdered = true to indicate this in combination with attribute position of the DataField annotation. @CsvRecord(isOrdered = true) public class Order { @DataField(pos = 1, position = 11) private int orderNr; @DataField(pos = 2, position = 10) private String clientNr; } Note that pos is used to parse the file stream, while position is used to generate the CSV. 75.2.2. 2. Link The Link annotation allows you to link objects together. Annotation name Record type Level Link all Class & Property Parameter name Type Required Default value Info linkType LinkType OneToOne Type of link identifying the relation between the classes Only one-to-one relation is allowed as of the current version. E.g : If the model class Client is linked to the Order class, then use annotation Link in the Order class like this : Property Link @CsvRecord(separator = ",") public class Order { @DataField(pos = 1) private int orderNr; @Link private Client client; } And for the class Client : Class Link @Link public class Client { } 75.2.3. 3. DataField The DataField annotation defines the property of the field. Each datafield is identified by its position in the record, a type (string, int, date, ... ) and optionally a pattern. Annotation name Record type Level DataField all Property Parameter name Type Required Default value Info pos int [✓] Position of the data in the input record, must start from 1 (mandatory). See the position parameter. align String R Align the text to the right or left. Use values R or L. clip boolean false Indicates to clip data in the field if it exceeds the allowed length when using fixed length. columnName String Name of the header column (optional). Uses the name of the property as default. 
Only applicable when CsvRecord has generateHeaderColumns = true decimalSeparator String Decimal separator to be used with BigDecimal number defaultValue String Field's default value in case no value is set delimiter String Optional delimiter to be used if the field has a variable length groupingSeparator String Grouping separator to be used with BigDecimal number when we would like to format/parse a number with grouping, e.g. 123,456.789 impliedDecimalSeparator boolean false Indicates if there is a decimal point implied at a specified location length int 0 Length of the data block (number of characters) if the record is set to a fixed length lengthPos int 0 Identifies a data field in the record that defines the expected fixed length for this field method String Method name to call to apply such customization on DataField. This must be the method on the datafield itself or you must provide the static fully qualified name of the class's method, e.g: see unit test org.apache.camel.dataformat.bindy.csv.BindySimpleCsvFunctionWithExternalMethodTest.replaceToBar name String Name of the field (optional) paddingChar char The char to pad with if the record is set to a fixed length pattern String Pattern that the Java formatter (SimpleDateFormat, for example) will use to transform the data (optional). If using pattern, then setting locale on the bindy data format is recommended. Either set to a known locale such as "us" or use "default" to use the platform default locale. position int 0 Position of the field in the generated output message (should start from 1). Must be used when the position of the field in the generated CSV (output message) must be different compared to the input position (pos). See the pos parameter. precision int 0 Precision of the BigDecimal number to be created required boolean false Indicates if the field is mandatory rounding String CEILING Round mode to be used to round/scale a BigDecimal Values : UP, DOWN, CEILING, FLOOR, HALF_UP, HALF_DOWN, HALF_EVEN, UNNECESSARY e.g : Number = 123456.789, Precision = 2, Rounding = CEILING Result : 123456.79 timezone String Timezone to be used. trim boolean false Indicates if the value should be trimmed case 1 : pos This parameter/attribute represents the position of the field in the CSV record. Position @CsvRecord(separator = ",") public class Order { @DataField(pos = 1) private int orderNr; @DataField(pos = 5) private String isinCode; } As you can see in this example the position starts at 1 but continues at 5 in the class Order. The numbers from 2 to 4 are defined in the class Client (see hereafter). Position continues in another model class public class Client { @DataField(pos = 2) private String clientNr; @DataField(pos = 3) private String firstName; @DataField(pos = 4) private String lastName; } case 2 : pattern The pattern allows you to enrich or validate the format of your data Pattern @CsvRecord(separator = ",") public class Order { @DataField(pos = 1) private int orderNr; @DataField(pos = 5) private String isinCode; @DataField(name = "Name", pos = 6) private String instrumentName; @DataField(pos = 7, precision = 2) private BigDecimal amount; @DataField(pos = 8) private String currency; // pattern used during parsing or when the date is created @DataField(pos = 9, pattern = "dd-MM-yyyy") private Date orderDate; } case 3 : precision The precision is helpful when you want to define the decimal part of your number. 
Precision @CsvRecord(separator = ",") public class Order { @DataField(pos = 1) private int orderNr; @Link private Client client; @DataField(pos = 5) private String isinCode; @DataField(name = "Name", pos = 6) private String instrumentName; @DataField(pos = 7, precision = 2) private BigDecimal amount; @DataField(pos = 8) private String currency; @DataField(pos = 9, pattern = "dd-MM-yyyy") private Date orderDate; } case 4 : Position is different in output The position attribute informs Bindy how to place the field in the generated CSV record. By default, the position used corresponds to the position defined with the attribute pos . If the position is different (that is, we have an asymmetric process where marshalling differs from unmarshalling), then we can use position to indicate this. Here is an example: Position is different in output @CsvRecord(separator = ",", isOrdered = true) public class Order { // Positions of the fields start from 1 and not from 0 @DataField(pos = 1, position = 11) private int orderNr; @DataField(pos = 2, position = 10) private String clientNr; @DataField(pos = 3, position = 9) private String firstName; @DataField(pos = 4, position = 8) private String lastName; @DataField(pos = 5, position = 7) private String instrumentCode; @DataField(pos = 6, position = 6) private String instrumentNumber; } This attribute of the annotation @DataField must be used in combination with the attribute isOrdered = true of the annotation @CsvRecord . case 5 : required If a field is mandatory, simply use the attribute required set to true. Required @CsvRecord(separator = ",") public class Order { @DataField(pos = 1) private int orderNr; @DataField(pos = 2, required = true) private String clientNr; @DataField(pos = 3, required = true) private String firstName; @DataField(pos = 4, required = true) private String lastName; } If this field is not present in the record, then an error will be raised by the parser with the following information : case 6 : trim If a field has leading and/or trailing spaces which should be removed before they are processed, simply use the attribute trim set to true. Trim @CsvRecord(separator = ",") public class Order { @DataField(pos = 1, trim = true) private int orderNr; @DataField(pos = 2, trim = true) private Integer clientNr; @DataField(pos = 3, required = true) private String firstName; @DataField(pos = 4) private String lastName; } case 7 : defaultValue If a field is not defined, then the value indicated by the defaultValue attribute is used. Default value @CsvRecord(separator = ",") public class Order { @DataField(pos = 1) private int orderNr; @DataField(pos = 2) private Integer clientNr; @DataField(pos = 3, required = true) private String firstName; @DataField(pos = 4, defaultValue = "Barin") private String lastName; } case 8 : columnName Specifies the column name for the property, only if @CsvRecord has the annotation attribute generateHeaderColumns = true . Column Name @CsvRecord(separator = ",", generateHeaderColumns = true) public class Order { @DataField(pos = 1) private int orderNr; @DataField(pos = 5, columnName = "ISIN") private String isinCode; @DataField(name = "Name", pos = 6) private String instrumentName; } This attribute is only applicable to optional fields. 75.2.4. 4. FixedLengthRecord The FixedLengthRecord annotation is used to identify the root class of the model. It represents a record, i.e. "a line of a file/message containing fixed-length (number of characters) formatted data", and can be linked to several child model classes.
This format is a bit particular because the data of a field can be aligned to the right or to the left. When the size of the data does not completely fill the length of the field, 'pad' characters can be added. Annotation name Record type Level FixedLengthRecord fixed Class Parameter name Type Required Default value Info countGrapheme boolean false Indicates how chars are counted crlf String WINDOWS Character to be used to add a carriage return after each record (optional). Possible values: WINDOWS, UNIX, MAC, or custom. This option is used only during marshalling, whereas unmarshalling uses the system default JDK-provided line delimiter unless eol is customized. eol String Character to be used to recognize the end of line after each record while unmarshalling (optional - default: "", which lets the default JDK-provided line delimiter be used unless another line delimiter is provided) This option is used only during unmarshalling, whereas marshalling uses the system default provided line delimiter "WINDOWS" unless another value is provided. footer Class void Indicates that the record(s) of this type may be followed by a single footer record at the end of the file header Class void Indicates that the record(s) of this type may be preceded by a single header record at the beginning of the file ignoreMissingChars boolean false Indicates whether too short lines will be ignored ignoreTrailingChars boolean false Indicates that characters beyond the last mapped field can be ignored when unmarshalling / parsing. This annotation is associated with the root class of the model and must be declared once. length int 0 The fixed length of the record (number of characters). It means that the record will always be that long, padded with the paddingChar name String Name describing the record (optional) paddingChar char The char to pad with. skipFooter boolean false Configures the data format to skip marshalling / unmarshalling of the footer record. Configure this parameter on the primary record (i.e., not the header or footer). skipHeader boolean false Configures the data format to skip marshalling / unmarshalling of the header record. Configure this parameter on the primary record (i.e., not the header or footer). A record may not be both a header/footer and a primary fixed-length record.
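To make the skipHeader and skipFooter options above concrete, here is a minimal sketch (the class names are illustrative and not taken from the reference examples that follow): the option is declared on the primary record, and with it set the data format will not attempt to marshal or unmarshal the declared header record:

@FixedLengthRecord(header = OrderHeader.class, skipHeader = true)
public class Order {
    // skipHeader/skipFooter belong on the primary record only;
    // a record may not be both a header/footer and a primary record
    @DataField(pos = 1, length = 2)
    private int orderNr;
}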
case 1 : Simple fixed length record This simple example shows how to design the model to parse/format a fixed message Fixed-simple @FixedLengthRecord(length=54, paddingChar=' ') public static class Order { @DataField(pos = 1, length=2) private int orderNr; @DataField(pos = 3, length=2) private String clientNr; @DataField(pos = 5, length=7) private String firstName; @DataField(pos = 12, length=1, align="L") private String lastName; @DataField(pos = 13, length=4) private String instrumentCode; @DataField(pos = 17, length=10) private String instrumentNumber; @DataField(pos = 27, length=3) private String orderType; @DataField(pos = 30, length=5) private String instrumentType; @DataField(pos = 35, precision = 2, length=7) private BigDecimal amount; @DataField(pos = 42, length=3) private String currency; @DataField(pos = 45, length=10, pattern = "dd-MM-yyyy") private Date orderDate; } case 2 : Fixed length record with alignment and padding This more elaborate example shows how to define the alignment for a field and how to assign a padding character, which is ' ' here: Fixed-padding-align @FixedLengthRecord(length=60, paddingChar=' ') public static class Order { @DataField(pos = 1, length=2) private int orderNr; @DataField(pos = 3, length=2) private String clientNr; @DataField(pos = 5, length=9) private String firstName; @DataField(pos = 14, length=5, align="L") // align text to the LEFT zone of the block private String lastName; @DataField(pos = 19, length=4) private String instrumentCode; @DataField(pos = 23, length=10) private String instrumentNumber; @DataField(pos = 33, length=3) private String orderType; @DataField(pos = 36, length=5) private String instrumentType; @DataField(pos = 41, precision = 2, length=7) private BigDecimal amount; @DataField(pos = 48, length=3) private String currency; @DataField(pos = 51, length=10, pattern = "dd-MM-yyyy") private Date orderDate; } case 3 : Field padding Sometimes, the default padding defined for the record cannot be applied to a field, as with a number format where we would like to pad with '0' instead of ' '. In this case, you can use the paddingChar attribute on @DataField in the model to set this value. Fixed-padding-field @FixedLengthRecord(length = 65, paddingChar = ' ') public static class Order { @DataField(pos = 1, length = 2) private int orderNr; @DataField(pos = 3, length = 2) private String clientNr; @DataField(pos = 5, length = 9) private String firstName; @DataField(pos = 14, length = 5, align = "L") private String lastName; @DataField(pos = 19, length = 4) private String instrumentCode; @DataField(pos = 23, length = 10) private String instrumentNumber; @DataField(pos = 33, length = 3) private String orderType; @DataField(pos = 36, length = 5) private String instrumentType; @DataField(pos = 41, precision = 2, length = 12, paddingChar = '0') private BigDecimal amount; @DataField(pos = 53, length = 3) private String currency; @DataField(pos = 56, length = 10, pattern = "dd-MM-yyyy") private Date orderDate; } case 4: Fixed length record with delimiter Fixed-length records sometimes have delimited content within the record.
The firstName and lastName fields are delimited with the ^ character in the following example: Fixed-delimited @FixedLengthRecord public static class Order { @DataField(pos = 1, length = 2) private int orderNr; @DataField(pos = 2, length = 2) private String clientNr; @DataField(pos = 3, delimiter = "^") private String firstName; @DataField(pos = 4, delimiter = "^") private String lastName; @DataField(pos = 5, length = 4) private String instrumentCode; @DataField(pos = 6, length = 10) private String instrumentNumber; @DataField(pos = 7, length = 3) private String orderType; @DataField(pos = 8, length = 5) private String instrumentType; @DataField(pos = 9, precision = 2, length = 12, paddingChar = '0') private BigDecimal amount; @DataField(pos = 10, length = 3) private String currency; @DataField(pos = 11, length = 10, pattern = "dd-MM-yyyy") private Date orderDate; } The pos value(s) in a fixed-length record may optionally be defined using ordinal, sequential values instead of precise column numbers. case 5 : Fixed length record with record-defined field length Occasionally a fixed-length record may contain a field that defines the expected length of another field within the same record. In the following example the length of the instrumentNumber field value is defined by the value of the instrumentNumberLen field in the record. Fixed-delimited @FixedLengthRecord public static class Order { @DataField(pos = 1, length = 2) private int orderNr; @DataField(pos = 2, length = 2) private String clientNr; @DataField(pos = 3, delimiter = "^") private String firstName; @DataField(pos = 4, delimiter = "^") private String lastName; @DataField(pos = 5, length = 4) private String instrumentCode; @DataField(pos = 6, length = 2, align = "R", paddingChar = '0') private int instrumentNumberLen; @DataField(pos = 7, lengthPos=6) private String instrumentNumber; @DataField(pos = 8, length = 3) private String orderType; @DataField(pos = 9, length = 5) private String instrumentType; @DataField(pos = 10, precision = 2, length = 12, paddingChar = '0') private BigDecimal amount; @DataField(pos = 11, length = 3) private String currency; @DataField(pos = 12, length = 10, pattern = "dd-MM-yyyy") private Date orderDate; } case 6 : Fixed length record with header and footer Bindy will discover fixed-length header and footer records that are configured as part of the model - provided that the annotated classes exist either in the same package as the primary @FixedLengthRecord class, or within one of the configured scan packages. The following text illustrates two fixed-length records that are bracketed by a header record and footer record.
Fixed-header-and-footer-main-class @FixedLengthRecord(header = OrderHeader.class, footer = OrderFooter.class) public class Order { @DataField(pos = 1, length = 2) private int orderNr; @DataField(pos = 2, length = 2) private String clientNr; @DataField(pos = 3, length = 9) private String firstName; @DataField(pos = 4, length = 5, align = "L") private String lastName; @DataField(pos = 5, length = 4) private String instrumentCode; @DataField(pos = 6, length = 10) private String instrumentNumber; @DataField(pos = 7, length = 3) private String orderType; @DataField(pos = 8, length = 5) private String instrumentType; @DataField(pos = 9, precision = 2, length = 12, paddingChar = '0') private BigDecimal amount; @DataField(pos = 10, length = 3) private String currency; @DataField(pos = 11, length = 10, pattern = "dd-MM-yyyy") private Date orderDate; } @FixedLengthRecord public class OrderHeader { @DataField(pos = 1, length = 1) private int recordType = 1; @DataField(pos = 2, length = 10, pattern = "dd-MM-yyyy") private Date recordDate; } @FixedLengthRecord public class OrderFooter { @DataField(pos = 1, length = 1) private int recordType = 9; @DataField(pos = 2, length = 9, align = "R", paddingChar = '0') private int numberOfRecordsInTheFile; } case 7 : Skipping content when parsing a fixed length record It is common to integrate with systems that provide fixed-length records containing more information than needed for the target use case. It is useful in this situation to skip the declaration and parsing of those fields that we do not need. To accommodate this, Bindy will skip forward to the mapped field within a record if the pos value of the declared field is beyond the cursor position of the last parsed field. Using absolute pos locations for the fields of interest (instead of ordinal values) causes Bindy to skip content between two fields. Similarly, it is possible that none of the content beyond some field is of interest. In this case, you can tell Bindy to skip parsing of everything beyond the last mapped field by setting the ignoreTrailingChars property on the @FixedLengthRecord declaration. @FixedLengthRecord(ignoreTrailingChars = true) public static class Order { @DataField(pos = 1, length = 2) private int orderNr; @DataField(pos = 3, length = 2) private String clientNr; // any characters that appear beyond the last mapped field will be ignored } 75.2.5. 5. Message The Message annotation is used to identify the class of your model that will contain key value pair fields. This kind of format is mainly used in Financial Exchange Protocol Messages (FIX). Nevertheless, this annotation can be used for any other format where data are identified by keys. The key value pairs are separated from each other by a separator, which can be a special character like a tab delimiter (unicode representation: \u0009 ) or a start of heading (unicode representation: \u0001 ) Note To work with FIX messages, the model must contain Header and Trailer classes linked to the root message class, which could be an Order class. This is not mandatory but will be very helpful when you use camel-bindy in combination with camel-fix, which is a FIX gateway based on the quickFix project . Annotation name Record type Level Message key value pair Class Parameter name Type Required Default value Info keyValuePairSeparator String [✓] Key value pair separator is used to split the values from their keys (mandatory). Can be '\u0001', '\u0009', '#', or 'anything'.
pairSeparator String [✓] Pair separator used to split the key value pairs in tokens (mandatory). Can be '=', ';', or 'anything'. crlf String WINDOWS Character to be used to add a carriage return after each record (optional). Possible values = WINDOWS, UNIX, MAC, or custom. If you specify a value other than the three listed before, the value you enter (custom) will be used as the CRLF character(s). isOrdered boolean false Indicates if the message must be ordered in output. This annotation is associated with the message class of the model and must be declared once. name String Name describing the message (optional) type String FIX type is used to define the type of the message (e.g. FIX, EMX, ... ) (optional) version String 4.1 version defines the version of the message (e.g. 4.1, ... ) (optional) case 1 : separator = '\u0001' The separator used to segregate the key value pair fields in a FIX message is the ASCII 01 character or, in unicode format, \u0001 . This character must be escaped a second time to avoid a Java runtime error. Here is an example, and how to use the annotation: FIX - message @Message(keyValuePairSeparator = "=", pairSeparator = "\u0001", type="FIX", version="4.1") public class Order { } Look at test cases ASCII characters like the tab cannot be displayed in a wiki page. So, have a look at the test case of camel-bindy to see exactly what the FIX message looks like ( https://github.com/apache/camel/blob/main/components/camel-bindy/src/test/data/fix/fix.txt ) and the Order, Trailer, Header classes ( https://github.com/apache/camel/blob/main/components/camel-bindy/src/test/java/org/apache/camel/dataformat/bindy/model/fix/simple/Order.java ). 75.2.6. 6. KeyValuePairField The KeyValuePairField annotation defines the property of a key value pair field. Each KeyValuePairField is identified by a tag (= key) and its associated value, a type (string, int, date, ...), optionally a pattern, and whether the field is required. Annotation name Record type Level KeyValuePairField Key Value Pair - FIX Property Parameter name Type Required Default value Info tag int [✓] tag identifying the field in the message (mandatory) - must be unique impliedDecimalSeparator boolean false Camel 2.11: Indicates if there is a decimal point implied at a specified location name String name of the field (optional) pattern String pattern that the formatter will use to transform the data (optional) position int 0 Position of the field in the message generated - must be used when the position of the key/tag in the FIX message must be different precision int 0 precision of the BigDecimal number to be created required boolean false Indicates if the field is mandatory timezone String Timezone to be used. case 1 : tag This parameter represents the key of the field in the message: FIX message - Tag @Message(keyValuePairSeparator = "=", pairSeparator = "\u0001", type="FIX", version="4.1") public class Order { @Link Header header; @Link Trailer trailer; @KeyValuePairField(tag = 1) // Client reference private String Account; @KeyValuePairField(tag = 11) // Order reference private String ClOrdId; @KeyValuePairField(tag = 22) // Fund ID type (Sedol, ISIN, ...)
private String IDSource; @KeyValuePairField(tag = 48) // Fund code private String SecurityId; @KeyValuePairField(tag = 54) // Movement type ( 1 = Buy, 2 = sell) private String Side; @KeyValuePairField(tag = 58) // Free text private String Text; } case 2 : Different position in output If the tags/keys that we will put in the FIX message must be sorted according to a predefined order, then use the attribute position of the annotation @KeyValuePairField . FIX message - Tag - sort @Message(keyValuePairSeparator = "=", pairSeparator = "\\u0001", type = "FIX", version = "4.1", isOrdered = true) public class Order { @Link Header header; @Link Trailer trailer; @KeyValuePairField(tag = 1, position = 1) // Client reference private String account; @KeyValuePairField(tag = 11, position = 3) // Order reference private String clOrdId; } 75.2.7. 7. Section In FIX messages or fixed-length records, it is common to have different sections in the representation of the information: header, body and footer. The purpose of the annotation @Section is to inform Bindy about which class of the model represents the header (= section 1), body (= section 2) and footer (= section 3). Only one attribute/parameter exists for this annotation. Annotation name Record type Level Section FIX Class Parameter name Type Required Default value Info number int [✓] Number of the section case 1 : Section Definition of the header section: FIX message - Section - Header @Section(number = 1) public class Header { @KeyValuePairField(tag = 8, position = 1) // Message Header private String beginString; @KeyValuePairField(tag = 9, position = 2) // Checksum private int bodyLength; } Definition of the body section: FIX message - Section - Body @Section(number = 2) @Message(keyValuePairSeparator = "=", pairSeparator = "\\u0001", type = "FIX", version = "4.1", isOrdered = true) public class Order { @Link Header header; @Link Trailer trailer; @KeyValuePairField(tag = 1, position = 1) // Client reference private String account; @KeyValuePairField(tag = 11, position = 3) // Order reference private String clOrdId; Definition of the footer section: FIX message - Section - Footer @Section(number = 3) public class Trailer { @KeyValuePairField(tag = 10, position = 1) // CheckSum private int checkSum; public int getCheckSum() { return checkSum; } 75.2.8. 8. OneToMany The purpose of the annotation @OneToMany is to allow working with a List<?> field defined in a POJO class or from a record containing repetitive groups. Note Restrictions for OneToMany Be careful, the one-to-many of Bindy does not handle repetitions defined on several levels of the hierarchy. The relation OneToMany ONLY WORKS in the following cases : Reading a FIX message containing repetitive groups (= group of tags/keys) Generating a CSV with repetitive data Annotation name Record type Level OneToMany all Property Parameter name Type Required Default value Info mappedTo String Class name associated with the type of the List<Type of the Class> case 1 : Generating CSV with repetitive data Here is the CSV output that we want : Note The repetitive data concerns the title of the book and its publication year, while the first name, last name and age are common. The following classes are used to model this; the Author class contains a List of Book.
Generate CSV with repetitive data @CsvRecord(separator=",") public class Author { @DataField(pos = 1) private String firstName; @DataField(pos = 2) private String lastName; @OneToMany private List<Book> books; @DataField(pos = 5) private String Age; } public class Book { @DataField(pos = 3) private String title; @DataField(pos = 4) private String year; } case 2 : Reading FIX message containing group of tags/keys Here is the message that we would like to process in our model : Tags 22, 48 and 54 are repeated. And the code: Reading FIX message containing group of tags/keys public class Order { @Link Header header; @Link Trailer trailer; @KeyValuePairField(tag = 1) // Client reference private String account; @KeyValuePairField(tag = 11) // Order reference private String clOrdId; @KeyValuePairField(tag = 58) // Free text private String text; @OneToMany(mappedTo = "org.apache.camel.dataformat.bindy.model.fix.complex.onetomany.Security") List<Security> securities; } public class Security { @KeyValuePairField(tag = 22) // Fund ID type (Sedol, ISIN, ...) private String idSource; @KeyValuePairField(tag = 48) // Fund code private String securityCode; @KeyValuePairField(tag = 54) // Movement type ( 1 = Buy, 2 = sell) private String side; } 75.2.9. 9. BindyConverter The purpose of the annotation @BindyConverter is to define a converter to be used at field level. The provided class must implement the Format interface. @FixedLengthRecord(length = 10, paddingChar = ' ') public static class DataModel { @DataField(pos = 1, length = 10, trim = true) @BindyConverter(CustomConverter.class) public String field1; } public static class CustomConverter implements Format<String> { @Override public String format(String object) throws Exception { return (new StringBuilder(object)).reverse().toString(); } @Override public String parse(String string) throws Exception { return (new StringBuilder(string)).reverse().toString(); } } 75.2.10. 10. FormatFactories The purpose of the annotation @FormatFactories is to define a set of converters at record level. The provided classes must implement the FormatFactoryInterface interface. @CsvRecord(separator = ",") @FormatFactories({OrderNumberFormatFactory.class}) public static class Order { @DataField(pos = 1) private OrderNumber orderNr; @DataField(pos = 2) private String firstName; } public static class OrderNumber { private int orderNr; public static OrderNumber ofString(String orderNumber) { OrderNumber result = new OrderNumber(); result.orderNr = Integer.valueOf(orderNumber); return result; } } public static class OrderNumberFormatFactory extends AbstractFormatFactory { { supportedClasses.add(OrderNumber.class); } @Override public Format<?> build(FormattingOptions formattingOptions) { return new Format<OrderNumber>() { @Override public String format(OrderNumber object) throws Exception { return String.valueOf(object.orderNr); } @Override public OrderNumber parse(String string) throws Exception { return OrderNumber.ofString(string); } }; } } 75.3. Supported Datatypes The DefaultFormatFactory makes formatting of the following datatypes available by returning an instance of the interface FormatFactoryInterface based on the provided FormattingOptions: BigDecimal BigInteger Boolean Byte Character Date Double Enums Float Integer LocalDate LocalDateTime LocalTime Long Short String The DefaultFormatFactory can be overridden by providing an instance of FactoryRegistry in the registry in use (e.g. Spring or JNDI). 75.4.
Using the Java DSL The first step is to instantiate the DataFormat bindy class associated with this record type, providing a class as a parameter. For example the following uses the class BindyCsvDataFormat (which corresponds to the class associated with the CSV record type) which is configured with com.acme.model.MyModel.class to initialize the model objects configured in this package. DataFormat bindy = new BindyCsvDataFormat(com.acme.model.MyModel.class); 75.4.1. Setting locale Bindy supports configuring the locale on the dataformat, such as BindyCsvDataFormat bindy = new BindyCsvDataFormat(com.acme.model.MyModel.class); bindy.setLocale("us"); Or to use the platform default locale then use "default" as the locale name. BindyCsvDataFormat bindy = new BindyCsvDataFormat(com.acme.model.MyModel.class); bindy.setLocale("default"); 75.4.2. Unmarshaling from("file://inbox") .unmarshal(bindy) .to("direct:handleOrders"); Alternatively, you can use a named reference to a data format which can then be defined in your Registry e.g. your Spring XML file: from("file://inbox") .unmarshal("myBindyDataFormat") .to("direct:handleOrders"); The Camel route will pick up files in the inbox directory, unmarshal CSV records into a collection of model objects and send the collection to the route referenced by handleOrders . The collection returned is a List of Map objects. Each Map within the list contains the model objects that were unmarshalled from each line of the CSV. The reason behind this is that each line can correspond to more than one object . This can be confusing when you simply expect one object to be returned per line. Each object can be retrieved using its class name. List<Map<String, Object>> unmarshaledModels = (List<Map<String, Object>>) exchange.getIn().getBody(); int modelCount = 0; for (Map<String, Object> model : unmarshaledModels) { for (String className : model.keySet()) { Object obj = model.get(className); LOG.info("Count : " + modelCount + ", " + obj.toString()); } modelCount++; } LOG.info("Total CSV records received by the csv bean : " + modelCount); Assuming that you want to extract a single Order object from this map for processing in a route, you could use a combination of a Splitter and a Processor as per the following: from("file://inbox") .unmarshal(bindy) .split(body()) .process(new Processor() { public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Map<String, Object> modelMap = (Map<String, Object>) in.getBody(); in.setBody(modelMap.get(Order.class.getCanonicalName())); } }) .to("direct:handleSingleOrder") .end(); Note that Bindy uses the CHARSET_NAME property or the CHARSET_NAME header, as defined in the Exchange interface, to do a character set conversion of the input stream received for unmarshalling. In some producers (e.g. file-endpoint) you can define a character set. The character set conversion may already have been done by this producer. Sometimes you need to remove this property or header from the exchange before sending it to the unmarshal step. If you don't remove it, the conversion might be done twice, which might lead to unwanted results. from("file://inbox?charset=Cp922") .removeProperty(Exchange.CHARSET_NAME) .unmarshal("myBindyDataFormat") .to("direct:handleOrders"); 75.4.3. Marshaling To generate CSV records from a collection of model objects, you create the following route : from("direct:handleOrders") .marshal(bindy) .to("file://outbox") 75.5.
Using Spring XML It is really easy to use Spring as your favorite DSL language to declare the routes to be used for camel-bindy. The following example shows two routes where the first will pick up records from files, unmarshal the content and bind it to the model. The results are then sent to a POJO (doing nothing special) and placed into a queue. The second route will extract the POJOs from the queue and marshal the content to generate a file containing the CSV records. Spring DSL <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <!-- Queuing engine - ActiveMq - work locally in mode virtual memory --> <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent"> <property name="brokerURL" value="vm://localhost:61616"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <dataFormats> <bindy id="bindyDataformat" type="Csv" classType="org.apache.camel.bindy.model.Order"/> </dataFormats> <route> <from uri="file://src/data/csv/?noop=true" /> <unmarshal ref="bindyDataformat" /> <to uri="bean:csv" /> <to uri="activemq:queue:in" /> </route> <route> <from uri="activemq:queue:in" /> <marshal ref="bindyDataformat" /> <to uri="file://src/data/csv/out/" /> </route> </camelContext> </beans> Note Please verify that your model classes implement Serializable, otherwise the queue manager will raise an error. 75.6. Dependencies To use Bindy in your camel routes you need to add a dependency on camel-bindy which implements this data format. If you use Maven, just add the following to your pom.xml, substituting the version number for the latest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-bindy</artifactId> <version>{CamelSBVersion}</version> </dependency> 75.7. Spring Boot Auto-Configuration When using bindy-csv with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bindy-starter</artifactId> </dependency> The component supports 18 options, which are listed below. Name Description Default Type camel.dataformat.bindy-csv.allow-empty-stream Whether to allow empty streams in the unmarshal process. If true, no exception will be thrown when a body without records is provided. false Boolean camel.dataformat.bindy-csv.class-type Name of model class to use. String camel.dataformat.bindy-csv.enabled Whether to enable auto configuration of the bindy-csv data format. This is enabled by default. Boolean camel.dataformat.bindy-csv.locale To configure a default locale to use, such as us for United States. To use the JVM platform default locale then use the name default. String camel.dataformat.bindy-csv.type Whether to use Csv, Fixed, or KeyValue. String camel.dataformat.bindy-csv.unwrap-single-instance When unmarshalling should a single instance be unwrapped and returned instead of wrapped in a java.util.List. true Boolean camel.dataformat.bindy-fixed.allow-empty-stream Whether to allow empty streams in the unmarshal process. If true, no exception will be thrown when a body without records is provided.
false Boolean camel.dataformat.bindy-fixed.class-type Name of model class to use. String camel.dataformat.bindy-fixed.enabled Whether to enable auto configuration of the bindy-fixed data format. This is enabled by default. Boolean camel.dataformat.bindy-fixed.locale To configure a default locale to use, such as us for United States. To use the JVM platform default locale then use the name default. String camel.dataformat.bindy-fixed.type Whether to use Csv, Fixed, or KeyValue. String camel.dataformat.bindy-fixed.unwrap-single-instance When unmarshalling should a single instance be unwrapped and returned instead of wrapped in a java.util.List. true Boolean camel.dataformat.bindy-kvp.allow-empty-stream Whether to allow empty streams in the unmarshal process. If true, no exception will be thrown when a body without records is provided. false Boolean camel.dataformat.bindy-kvp.class-type Name of model class to use. String camel.dataformat.bindy-kvp.enabled Whether to enable auto configuration of the bindy-kvp data format. This is enabled by default. Boolean camel.dataformat.bindy-kvp.locale To configure a default locale to use, such as us for United States. To use the JVM platform default locale then use the name default. String camel.dataformat.bindy-kvp.type Whether to use Csv, Fixed, or KeyValue. String camel.dataformat.bindy-kvp.unwrap-single-instance When unmarshalling should a single instance be unwrapped and returned instead of wrapped in a java.util.List. true Boolean | [
"10, J, Pauline, M, XD12345678, Fortis Dynamic 15/15, 2500, USD, 08-01-2009",
"@CsvRecord( separator = \",\" ) public Class Order { }",
"10; J; Pauline; M; XD12345678; Fortis Dynamic 15/15; 2500; USD; 08-01-2009",
"@CsvRecord( separator = \";\" ) public Class Order { }",
"10| J| Pauline| M| XD12345678| Fortis Dynamic 15/15| 2500| USD| 08-01-2009",
"@CsvRecord( separator = \"\\\\|\" ) public Class Order { }",
"\"10\",\"J\",\"Pauline\",\" M\",\"XD12345678\",\"Fortis Dynamic 15,15\",\"2500\",\"USD\",\"08-01-2009\"",
"@CsvRecord( separator = \"\\\",\\\"\" ) public Class Order { }",
"\"10\",\"J\",\"Pauline\",\" M\",\"XD12345678\",\"Fortis Dynamic 15,15\",\"2500\",\"USD\",\"08-01-2009\"",
"@CsvRecord( separator = \",\" ) public Class Order { }",
"@CsvRecord( separator = \",\", quote = \"\\\"\" ) public Class Order { }",
"order id, client id, first name, last name, isin code, instrument name, quantity, currency, date",
"@CsvRecord(separator = \",\", skipFirstLine = true) public Class Order { }",
"@CsvRecord( generateHeaderColumns = true ) public Class Order { }",
"order id, client id, first name, last name, isin code, instrument name, quantity, currency, date 10, J, Pauline, M, XD12345678, Fortis Dynamic 15/15, 2500, USD, 08-01-2009",
"@CsvRecord(separator = \",\", crlf=\"MAC\") public Class Order { }",
"@CsvRecord(separator = \",\", crlf=\",\\n\") public Class Order { }",
"@CsvRecord(isOrdered = true) public Class Order { @DataField(pos = 1, position = 11) private int orderNr; @DataField(pos = 2, position = 10) private String clientNr; }",
"@CsvRecord(separator = \",\") public class Order { @DataField(pos = 1) private int orderNr; @Link private Client client; }",
"@Link public class Client { }",
"@CsvRecord(separator = \",\") public class Order { @DataField(pos = 1) private int orderNr; @DataField(pos = 5) private String isinCode; }",
"public class Client { @DataField(pos = 2) private String clientNr; @DataField(pos = 3) private String firstName; @DataField(pos = 4) private String lastName; }",
"@CsvRecord(separator = \",\") public class Order { @DataField(pos = 1) private int orderNr; @DataField(pos = 5) private String isinCode; @DataField(name = \"Name\", pos = 6) private String instrumentName; @DataField(pos = 7, precision = 2) private BigDecimal amount; @DataField(pos = 8) private String currency; // pattern used during parsing or when the date is created @DataField(pos = 9, pattern = \"dd-MM-yyyy\") private Date orderDate; }",
"@CsvRecord(separator = \",\") public class Order { @DataField(pos = 1) private int orderNr; @Link private Client client; @DataField(pos = 5) private String isinCode; @DataField(name = \"Name\", pos = 6) private String instrumentName; @DataField(pos = 7, precision = 2) private BigDecimal amount; @DataField(pos = 8) private String currency; @DataField(pos = 9, pattern = \"dd-MM-yyyy\") private Date orderDate; }",
"@CsvRecord(separator = \",\", isOrdered = true) public class Order { // Positions of the fields start from 1 and not from 0 @DataField(pos = 1, position = 11) private int orderNr; @DataField(pos = 2, position = 10) private String clientNr; @DataField(pos = 3, position = 9) private String firstName; @DataField(pos = 4, position = 8) private String lastName; @DataField(pos = 5, position = 7) private String instrumentCode; @DataField(pos = 6, position = 6) private String instrumentNumber; }",
"@CsvRecord(separator = \",\") public class Order { @DataField(pos = 1) private int orderNr; @DataField(pos = 2, required = true) private String clientNr; @DataField(pos = 3, required = true) private String firstName; @DataField(pos = 4, required = true) private String lastName; }",
"Some fields are missing (optional or mandatory), line :",
"@CsvRecord(separator = \",\") public class Order { @DataField(pos = 1, trim = true) private int orderNr; @DataField(pos = 2, trim = true) private Integer clientNr; @DataField(pos = 3, required = true) private String firstName; @DataField(pos = 4) private String lastName; }",
"@CsvRecord(separator = \",\") public class Order { @DataField(pos = 1) private int orderNr; @DataField(pos = 2) private Integer clientNr; @DataField(pos = 3, required = true) private String firstName; @DataField(pos = 4, defaultValue = \"Barin\") private String lastName; }",
"@CsvRecord(separator = \",\", generateHeaderColumns = true) public class Order { @DataField(pos = 1) private int orderNr; @DataField(pos = 5, columnName = \"ISIN\") private String isinCode; @DataField(name = \"Name\", pos = 6) private String instrumentName; }",
"10A9PaulineMISINXD12345678BUYShare2500.45USD01-08-2009",
"@FixedLengthRecord(length=54, paddingChar=' ') public static class Order { @DataField(pos = 1, length=2) private int orderNr; @DataField(pos = 3, length=2) private String clientNr; @DataField(pos = 5, length=7) private String firstName; @DataField(pos = 12, length=1, align=\"L\") private String lastName; @DataField(pos = 13, length=4) private String instrumentCode; @DataField(pos = 17, length=10) private String instrumentNumber; @DataField(pos = 27, length=3) private String orderType; @DataField(pos = 30, length=5) private String instrumentType; @DataField(pos = 35, precision = 2, length=7) private BigDecimal amount; @DataField(pos = 42, length=3) private String currency; @DataField(pos = 45, length=10, pattern = \"dd-MM-yyyy\") private Date orderDate; }",
"10A9 PaulineM ISINXD12345678BUYShare2500.45USD01-08-2009",
"@FixedLengthRecord(length=60, paddingChar=' ') public static class Order { @DataField(pos = 1, length=2) private int orderNr; @DataField(pos = 3, length=2) private String clientNr; @DataField(pos = 5, length=9) private String firstName; @DataField(pos = 14, length=5, align=\"L\") // align text to the LEFT zone of the block private String lastName; @DataField(pos = 19, length=4) private String instrumentCode; @DataField(pos = 23, length=10) private String instrumentNumber; @DataField(pos = 33, length=3) private String orderType; @DataField(pos = 36, length=5) private String instrumentType; @DataField(pos = 41, precision = 2, length=7) private BigDecimal amount; @DataField(pos = 48, length=3) private String currency; @DataField(pos = 51, length=10, pattern = \"dd-MM-yyyy\") private Date orderDate; }",
"10A9 PaulineM ISINXD12345678BUYShare000002500.45USD01-08-2009",
"@FixedLengthRecord(length = 65, paddingChar = ' ') public static class Order { @DataField(pos = 1, length = 2) private int orderNr; @DataField(pos = 3, length = 2) private String clientNr; @DataField(pos = 5, length = 9) private String firstName; @DataField(pos = 14, length = 5, align = \"L\") private String lastName; @DataField(pos = 19, length = 4) private String instrumentCode; @DataField(pos = 23, length = 10) private String instrumentNumber; @DataField(pos = 33, length = 3) private String orderType; @DataField(pos = 36, length = 5) private String instrumentType; @DataField(pos = 41, precision = 2, length = 12, paddingChar = '0') private BigDecimal amount; @DataField(pos = 53, length = 3) private String currency; @DataField(pos = 56, length = 10, pattern = \"dd-MM-yyyy\") private Date orderDate; }",
"10A9Pauline^M^ISINXD12345678BUYShare000002500.45USD01-08-2009",
"@FixedLengthRecord public static class Order { @DataField(pos = 1, length = 2) private int orderNr; @DataField(pos = 2, length = 2) private String clientNr; @DataField(pos = 3, delimiter = \"^\") private String firstName; @DataField(pos = 4, delimiter = \"^\") private String lastName; @DataField(pos = 5, length = 4) private String instrumentCode; @DataField(pos = 6, length = 10) private String instrumentNumber; @DataField(pos = 7, length = 3) private String orderType; @DataField(pos = 8, length = 5) private String instrumentType; @DataField(pos = 9, precision = 2, length = 12, paddingChar = '0') private BigDecimal amount; @DataField(pos = 10, length = 3) private String currency; @DataField(pos = 11, length = 10, pattern = \"dd-MM-yyyy\") private Date orderDate; }",
"10A9Pauline^M^ISIN10XD12345678BUYShare000002500.45USD01-08-2009",
"@FixedLengthRecord public static class Order { @DataField(pos = 1, length = 2) private int orderNr; @DataField(pos = 2, length = 2) private String clientNr; @DataField(pos = 3, delimiter = \"^\") private String firstName; @DataField(pos = 4, delimiter = \"^\") private String lastName; @DataField(pos = 5, length = 4) private String instrumentCode; @DataField(pos = 6, length = 2, align = \"R\", paddingChar = '0') private int instrumentNumberLen; @DataField(pos = 7, lengthPos=6) private String instrumentNumber; @DataField(pos = 8, length = 3) private String orderType; @DataField(pos = 9, length = 5) private String instrumentType; @DataField(pos = 10, precision = 2, length = 12, paddingChar = '0') private BigDecimal amount; @DataField(pos = 11, length = 3) private String currency; @DataField(pos = 12, length = 10, pattern = \"dd-MM-yyyy\") private Date orderDate; }",
"101-08-2009 10A9 PaulineM ISINXD12345678BUYShare000002500.45USD01-08-2009 10A9 RichN ISINXD12345678BUYShare000002700.45USD01-08-2009 9000000002",
"@FixedLengthRecord(header = OrderHeader.class, footer = OrderFooter.class) public class Order { @DataField(pos = 1, length = 2) private int orderNr; @DataField(pos = 2, length = 2) private String clientNr; @DataField(pos = 3, length = 9) private String firstName; @DataField(pos = 4, length = 5, align = \"L\") private String lastName; @DataField(pos = 5, length = 4) private String instrumentCode; @DataField(pos = 6, length = 10) private String instrumentNumber; @DataField(pos = 7, length = 3) private String orderType; @DataField(pos = 8, length = 5) private String instrumentType; @DataField(pos = 9, precision = 2, length = 12, paddingChar = '0') private BigDecimal amount; @DataField(pos = 10, length = 3) private String currency; @DataField(pos = 11, length = 10, pattern = \"dd-MM-yyyy\") private Date orderDate; } @FixedLengthRecord public class OrderHeader { @DataField(pos = 1, length = 1) private int recordType = 1; @DataField(pos = 2, length = 10, pattern = \"dd-MM-yyyy\") private Date recordDate; } @FixedLengthRecord public class OrderFooter { @DataField(pos = 1, length = 1) private int recordType = 9; @DataField(pos = 2, length = 9, align = \"R\", paddingChar = '0') private int numberOfRecordsInTheFile; }",
"@FixedLengthRecord(ignoreTrailingChars = true) public static class Order { @DataField(pos = 1, length = 2) private int orderNr; @DataField(pos = 3, length = 2) private String clientNr; // any characters that appear beyond the last mapped field will be ignored }",
"8=FIX.4.1 9=20 34=1 35=0 49=INVMGR 56=BRKR 1=BE.CHM.001 11=CHM0001-01 22=4",
"@Message(keyValuePairSeparator = \"=\", pairSeparator = \"\\u0001\", type=\"FIX\", version=\"4.1\") public class Order { }",
"@Message(keyValuePairSeparator = \"=\", pairSeparator = \"\\u0001\", type=\"FIX\", version=\"4.1\") public class Order { @Link Header header; @Link Trailer trailer; @KeyValuePairField(tag = 1) // Client reference private String Account; @KeyValuePairField(tag = 11) // Order reference private String ClOrdId; @KeyValuePairField(tag = 22) // Fund ID type (Sedol, ISIN, ...) private String IDSource; @KeyValuePairField(tag = 48) // Fund code private String SecurityId; @KeyValuePairField(tag = 54) // Movement type ( 1 = Buy, 2 = sell) private String Side; @KeyValuePairField(tag = 58) // Free text private String Text; }",
"@Message(keyValuePairSeparator = \"=\", pairSeparator = \"\\\\u0001\", type = \"FIX\", version = \"4.1\", isOrdered = true) public class Order { @Link Header header; @Link Trailer trailer; @KeyValuePairField(tag = 1, position = 1) // Client reference private String account; @KeyValuePairField(tag = 11, position = 3) // Order reference private String clOrdId; }",
"@Section(number = 1) public class Header { @KeyValuePairField(tag = 8, position = 1) // Message Header private String beginString; @KeyValuePairField(tag = 9, position = 2) // Checksum private int bodyLength; }",
"@Section(number = 2) @Message(keyValuePairSeparator = \"=\", pairSeparator = \"\\\\u0001\", type = \"FIX\", version = \"4.1\", isOrdered = true) public class Order { @Link Header header; @Link Trailer trailer; @KeyValuePairField(tag = 1, position = 1) // Client reference private String account; @KeyValuePairField(tag = 11, position = 3) // Order reference private String clOrdId;",
"@Section(number = 3) public class Trailer { @KeyValuePairField(tag = 10, position = 1) // CheckSum private int checkSum; public int getCheckSum() { return checkSum; }",
"Claus,Ibsen,Camel in Action 1,2010,35 Claus,Ibsen,Camel in Action 2,2012,35 Claus,Ibsen,Camel in Action 3,2013,35 Claus,Ibsen,Camel in Action 4,2014,35",
"@CsvRecord(separator=\",\") public class Author { @DataField(pos = 1) private String firstName; @DataField(pos = 2) private String lastName; @OneToMany private List<Book> books; @DataField(pos = 5) private String Age; } public class Book { @DataField(pos = 3) private String title; @DataField(pos = 4) private String year; }",
"8=FIX 4.19=2034=135=049=INVMGR56=BRKR 1=BE.CHM.00111=CHM0001-0158=this is a camel - bindy test 22=448=BE000124567854=1 22=548=BE000987654354=2 22=648=BE000999999954=3 10=220",
"public class Order { @Link Header header; @Link Trailer trailer; @KeyValuePairField(tag = 1) // Client reference private String account; @KeyValuePairField(tag = 11) // Order reference private String clOrdId; @KeyValuePairField(tag = 58) // Free text private String text; @OneToMany(mappedTo = \"org.apache.camel.dataformat.bindy.model.fix.complex.onetomany.Security\") List<Security> securities; } public class Security { @KeyValuePairField(tag = 22) // Fund ID type (Sedol, ISIN, ...) private String idSource; @KeyValuePairField(tag = 48) // Fund code private String securityCode; @KeyValuePairField(tag = 54) // Movement type ( 1 = Buy, 2 = sell) private String side; }",
"@FixedLengthRecord(length = 10, paddingChar = ' ') public static class DataModel { @DataField(pos = 1, length = 10, trim = true) @BindyConverter(CustomConverter.class) public String field1; } public static class CustomConverter implements Format<String> { @Override public String format(String object) throws Exception { return (new StringBuilder(object)).reverse().toString(); } @Override public String parse(String string) throws Exception { return (new StringBuilder(string)).reverse().toString(); } }",
"@CsvRecord(separator = \",\") @FormatFactories({OrderNumberFormatFactory.class}) public static class Order { @DataField(pos = 1) private OrderNumber orderNr; @DataField(pos = 2) private String firstName; } public static class OrderNumber { private int orderNr; public static OrderNumber ofString(String orderNumber) { OrderNumber result = new OrderNumber(); result.orderNr = Integer.valueOf(orderNumber); return result; } } public static class OrderNumberFormatFactory extends AbstractFormatFactory { { supportedClasses.add(OrderNumber.class); } @Override public Format<?> build(FormattingOptions formattingOptions) { return new Format<OrderNumber>() { @Override public String format(OrderNumber object) throws Exception { return String.valueOf(object.orderNr); } @Override public OrderNumber parse(String string) throws Exception { return OrderNumber.ofString(string); } }; } }",
"DataFormat bindy = new BindyCsvDataFormat(com.acme.model.MyModel.class);",
"BindyCsvDataFormat bindy = new BindyCsvDataFormat(com.acme.model.MyModel.class); bindy.setLocale(\"us\");",
"BindyCsvDataFormat bindy = new BindyCsvDataFormat(com.acme.model.MyModel.class); bindy.setLocale(\"default\");",
"from(\"file://inbox\") .unmarshal(bindy) .to(\"direct:handleOrders\");",
"from(\"file://inbox\") .unmarshal(\"myBindyDataFormat\") .to(\"direct:handleOrders\");",
"List<Map<String, Object>> unmarshaledModels = (List<Map<String, Object>>) exchange.getIn().getBody(); int modelCount = 0; for (Map<String, Object> model : unmarshaledModels) { for (String className : model.keySet()) { Object obj = model.get(className); LOG.info(\"Count : \" + modelCount + \", \" + obj.toString()); } modelCount++; } LOG.info(\"Total CSV records received by the csv bean : \" + modelCount);",
"from(\"file://inbox\") .unmarshal(bindy) .split(body()) .process(new Processor() { public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Map<String, Object> modelMap = (Map<String, Object>) in.getBody(); in.setBody(modelMap.get(Order.class.getCanonicalName())); } }) .to(\"direct:handleSingleOrder\") .end();",
"from(\"file://inbox?charset=Cp922\") .removeProperty(Exchange.CHARSET_NAME) .unmarshal(\"myBindyDataFormat\") .to(\"direct:handleOrders\");",
"from(\"direct:handleOrders\") .marshal(bindy) .to(\"file://outbox\")",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <!-- Queuing engine - ActiveMq - work locally in mode virtual memory --> <bean id=\"activemq\" class=\"org.apache.activemq.camel.component.ActiveMQComponent\"> <property name=\"brokerURL\" value=\"vm://localhost:61616\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <dataFormats> <bindy id=\"bindyDataformat\" type=\"Csv\" classType=\"org.apache.camel.bindy.model.Order\"/> </dataFormats> <route> <from uri=\"file://src/data/csv/?noop=true\" /> <unmarshal ref=\"bindyDataformat\" /> <to uri=\"bean:csv\" /> <to uri=\"activemq:queue:in\" /> </route> <route> <from uri=\"activemq:queue:in\" /> <marshal ref=\"bindyDataformat\" /> <to uri=\"file://src/data/csv/out/\" /> </route> </camelContext> </beans>",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-bindy</artifactId> <version>{CamelSBVersion}</version> </dependency>",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bindy-starter</artifactId> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-bindy-dataformat-starter |
function::returnval | function::returnval Name function::returnval - Possible return value of probed function Synopsis Arguments None Description Return the value of the register in which function values are typically returned. Can be used in probes where $return isn't available. This is only a guess of the actual return value and can be totally wrong. Normally only used in dwarfless probes. | [
"returnval:long()"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-returnval |
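A minimal usage sketch for returnval (the vfs_read probe point is an illustrative assumption, not part of the reference above): in a dwarfless kprobe return probe, where $return is not available, the guessed return value can be logged like this:

probe kprobe.function("vfs_read").return {
    # returnval() reads the return-value register; it is only a guess
    printf("vfs_read returned %d\n", returnval())
}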
Chapter 6. Securing Kafka | Chapter 6. Securing Kafka A secure deployment of AMQ Streams can encompass: Encryption for data exchange Authentication to prove identity Authorization to allow or decline actions executed by users 6.1. Encryption AMQ Streams supports Transport Layer Security (TLS), a protocol for encrypted communication. Communication is always encrypted between: Kafka brokers ZooKeeper nodes Operators and Kafka brokers Operators and ZooKeeper nodes Kafka Exporter You can also configure TLS between Kafka brokers and clients by applying TLS encryption to the listeners of the Kafka broker. TLS is specified for external clients when configuring an external listener. AMQ Streams components and Kafka clients use digital certificates for encryption. The Cluster Operator sets up certificates to enable encryption within the Kafka cluster. You can provide your own server certificates, referred to as Kafka listener certificates , for communication between Kafka clients and Kafka brokers, and inter-cluster communication. AMQ Streams uses Secrets to store the certificates and private keys required for TLS in PEM and PKCS #12 format. A TLS Certificate Authority (CA) issues certificates to authenticate the identity of a component. AMQ Streams verifies the certificates for the components against the CA certificate. AMQ Streams components are verified against the cluster CA Certificate Authority (CA) Kafka clients are verified against the clients CA Certificate Authority (CA) 6.2. Authentication Kafka listeners use authentication to ensure a secure client connection to the Kafka cluster. Supported authentication mechanisms: Mutual TLS client authentication (on listeners with TLS-enabled encryption) SASL SCRAM-SHA-512 OAuth 2.0 token-based authentication The User Operator manages user credentials for TLS and SCRAM authentication, but not OAuth 2.0. For example, through the User Operator you can create a user representing a client that requires access to the Kafka cluster, and specify TLS as the authentication type. Using OAuth 2.0 token-based authentication, application clients can access Kafka brokers without exposing account credentials. An authorization server handles the granting of access and inquiries about access. 6.3. Authorization Kafka clusters use authorization to control the operations that are permitted on Kafka brokers by specific clients or users. If applied to a Kafka cluster, authorization is enabled for all listeners used for client connection. If a user is added to a list of super users in a Kafka broker configuration, the user is allowed unlimited access to the cluster regardless of any authorization constraints implemented through authorization mechanisms. Supported authorization mechanisms: Simple authorization OAuth 2.0 authorization (if you are using OAuth 2.0 token-based authentication) Open Policy Agent (OPA) authorization Simple authorization uses AclAuthorizer , the default Kafka authorization plugin. AclAuthorizer uses Access Control Lists (ACLs) to define which users have access to which resources. OAuth 2.0 and OPA provide policy-based control from an authorization server. Security policies and permissions used to grant access to resources on Kafka brokers are defined in the authorization server. URLs are used to connect to the authorization server and verify that an operation requested by a client or user is allowed or denied.
Users and clients are matched against the policies created in the authorization server that permit access to perform specific actions on Kafka brokers. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/amq_streams_on_openshift_overview/security-overview_str |
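As a brief illustration of how encryption, authentication, and authorization surface in a Kafka resource, here is a hedged sketch of a spec fragment (the listener name, port, and superuser principal are assumptions, and the exact schema depends on the AMQ Streams version in use):

spec:
  kafka:
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true             # TLS encryption on the listener
        authentication:
          type: tls           # mutual TLS client authentication
    authorization:
      type: simple            # ACL-based (AclAuthorizer) authorization
      superUsers:
        - CN=my-admin         # bypasses authorization constraints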
Chapter 14. Allowing JavaScript-based access to the API server from additional hosts | Chapter 14. Allowing JavaScript-based access to the API server from additional hosts 14.1. Allowing JavaScript-based access to the API server from additional hosts The default OpenShift Container Platform configuration only allows the web console to send requests to the API server. If you need to access the API server or OAuth server from a JavaScript application using a different hostname, you can configure additional hostnames to allow. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: $ oc edit apiserver.config.openshift.io cluster Add the additionalCORSAllowedOrigins field under the spec section and specify one or more additional hostnames: apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-07-11T17:35:37Z" generation: 1 name: cluster resourceVersion: "907" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\.subdomain\.domain\.com(:|\z) 1 1 The hostname is specified as a Golang regular expression that matches against CORS headers from HTTP requests against the API server and OAuth server. Note This example uses the following syntax: The (?i) makes it case-insensitive. The // pins to the beginning of the domain and matches the double slash following http: or https: . The \. escapes dots in the domain name. The (:|\z) matches the end of the domain name (\z) or a port separator (:) . Save the file to apply the changes. | [
"oc edit apiserver.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-07-11T17:35:37Z\" generation: 1 name: cluster resourceVersion: \"907\" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\\.subdomain\\.domain\\.com(:|\\z) 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_and_compliance/allowing-javascript-based-access-api-server |
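A hedged way to confirm that an added origin is honored is to send a request with an Origin header and inspect the CORS headers in the response; the API server URL and the origin below are placeholders for values from your cluster.

# Sketch: check the CORS headers the API server returns for an allowed origin.
# -k skips TLS verification and is used here for illustration only.
curl -k -s -o /dev/null -D - \
  -H "Origin: https://my.subdomain.domain.com" \
  https://api.mycluster.example.com:6443/version

# An allowed origin should be echoed back in a header such as:
#   Access-Control-Allow-Origin: https://my.subdomain.domain.com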
2.5. Internal Materialization | 2.5. Internal Materialization Internal materialization creates Data Virtualization temporary tables to hold the materialized table. While these tables are not fully durable, they perform well in most circumstances, and the data is present in each Red Hat JBoss Data Virtualization instance, which removes the single point of failure and network overhead of an external database. Internal materialization also provides more built-in facilities for refreshing and monitoring. The materialized option must be set for the view to be materialized. The Cache Hint, when used in the context of an internal materialized view transformation query, provides the ability to fine-tune the materialized table. The caching options are also settable via extension metadata: Table 2.2. Mapping Property Name Description teiid_rel:ALLOW_MATVIEW_MANAGEMENT Allow Teiid-based management of the ttl and initial load rather than the implicit behavior teiid_rel:MATVIEW_PREFER_MEMORY Same as the pref_mem cache hint option teiid_rel:MATVIEW_TTL Same as the ttl cache hint option teiid_rel:MATVIEW_UPDATABLE Same as the updatable cache hint option teiid_rel:MATVIEW_SCOPE Same as the scope cache hint option The pref_mem option also applies to internal materialized views. Internal table index pages already have a memory preference, so the pref_mem option indicates that the data pages should prefer memory as well. All internal materialized view refreshes and updates happen atomically. Internal materialized views support READ_COMMITTED (used also for READ_UNCOMMITTED) and SERIALIZABLE (used also for REPEATABLE_READ) transaction isolation levels. Here is a sample Dynamic VDB defining an internal materialization: An internal materialized view table is initially in an invalid state (there is no data). If teiid_rel:ALLOW_MATVIEW_MANAGEMENT is not specified, the first user query will trigger an implicit loading of the data. All other queries against the materialized view will block until the load completes. In some situations administrators may wish to better control when the cache is loaded with a call to SYSADMIN.refreshMatView. The initial load may itself trigger the initial load of dependent materialized views. After the initial load, user queries against the materialized view table will only block if it is in an invalid state. The valid state may also be controlled through the SYSADMIN.refreshMatView procedure. This is how you invalidate a refresh: Through this, the matview will be refreshed and user queries will block until the refresh is complete (or fails). If you set the teiid_rel:ALLOW_MATVIEW_MANAGEMENT property to "true", this will trigger the loading when the Virtual Database is deployed. While the initial load may trigger a transitive loading of dependent materialized views, subsequent refreshes performed with refreshMatView will use dependent materialized view tables if they exist. Only one load may occur at a time. If a load is already in progress when the SYSADMIN.refreshMatView procedure is called, it will return -1 immediately rather than preempting the current load. The Cache Hint may be used to automatically trigger a full snapshot refresh after a specified time to live (ttl). The ttl starts from the time the table is finished loading. The refresh is equivalent to CALL SYSADMIN.refreshMatView('view name', *), where the invalidation behavior is determined by the vdb property lazy-invalidate. By default ttl refreshes are invalidating, which will cause other user queries to block while loading.
That is, once the ttl has expired, the next access will be required to refresh the materialized table in a blocking manner. If you would rather that the ttl is enforced lazily, such that the refresh task is performed asynchronously with the current contents not replaced until the refresh completes, set the vdb property lazy-invalidate=true. The resulting materialized view will be reloaded every hour (3600000 milliseconds). It has these limitations: The automatic ttl refresh may not be suitable for complex loading scenarios as nested materialized views will be used by the refresh query. The non-managed ttl refresh is performed lazily, that is, it is only triggered by using the table after the ttl has expired. For infrequently used tables with long load times, this means that data may be used well past the intended ttl. In advanced use cases, the cache hint may also be used to mark an internal materialized view as updatable. An updatable internal materialized view may use the SYSADMIN.refreshMatViewRow procedure to update a single row in the materialized table. If the source row exists, the materialized view table row will be updated. If the source row does not exist, the corresponding materialized row will be deleted. To be updatable, the materialized view must have a single-column primary key. Composite keys are not yet supported by SYSADMIN.refreshMatViewRow. Here is a sample transformation query: Here is the update SQL: Given that the schema.matview defines an integer column col as its primary key, the update will check the live source(s) for the row values. The update query will not use dependent materialized view tables, so care should be taken to ensure that getting a single row from this transformation query performs well. See the Reference Guide for information on controlling dependent joins, which may be applicable to increasing the performance of retrieving a single row. The refresh query does use nested caches, so this refresh method should be used with caution. When the updatable option is not specified, accessing the materialized view table is more efficient because modifications do not need to be considered. Therefore, only specify the updatable option if row-based incremental updates are needed. Even when performing row updates, full snapshot refreshes may be needed to ensure consistency. The EventDistributor also exposes updateMatViewRow as a lower-level API for Programmatic Control - care should be taken when using this update method. Internal materialized view tables will automatically create non-unique indexes for each unique constraint and index defined on the materialized view. These indexes are created as non-unique even for unique constraints, since the materialized table is not intended as an enforcement point for data integrity and, when updatable, the table may not be consistent with underlying values and thus unable to satisfy constraints. The primary key (if it exists) of the view will automatically be part of the covered columns for the index. The secondary indexes are always created as trees - bitmap or hash indexes are not supported. Teiid's metadata for indexes is currently limited. We are not currently able to capture additional information, sort direction, additional columns to cover, etc. You may work around some of these limitations, though. Function-based indexes are supported, but can only be specified through DDL metadata.
If you are not using DDL metadata, consider adding another column to the view that projects the function expression, then place an index on that new column. Queries to the view will need to be modified as appropriate, though, to make use of the new column/index. If additional covered columns are needed, they may simply be added to the index columns. This, however, is only applicable to comparable types. Adding additional columns will increase the amount of space used by the index, but may allow its usage to result in higher performance when only the covered columns are used and the main table is not consulted. Each member in a cluster maintains its own copy of each materialized table and associated indexes. An attempt is made to ensure each member receives the same full refresh events as the others. Full consistency for updatable materialized views, however, is not guaranteed. Periodic full refreshes of updatable materialized view tables help ensure consistency among members. | [
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <vdb name=\"sakila\" version=\"1\"> <model name=\"pg\"> <source name=\"pg\" translator-name=\"postgresql\" connection-jndi-name=\"java:/sakila-ds\"/> </model> <model name=\"sakila\" type=\"VIRTUAL\"> <metadata type=\"DDL\"><![CDATA[ CREATE VIEW actor ( actor_id integer, first_name varchar(45) NOT NULL, last_name varchar(45) NOT NULL, last_update timestamp NOT NULL ) OPTIONS (materialized true, \"teiid_rel:MATVIEW_TTL\" 120000, \"teiid_rel:MATVIEW_PREFER_MEMORY\" 'true', \"teiid_rel:MATVIEW_UPDATABLE\" 'true', \"teiid_rel:MATVIEW_SCOPE\" 'vdb') AS SELECT actor_id, first_name, last_name, last_update from pg.\"public\".actor; ]> </metadata> </model> </vdb>",
"CALL SYSADMIN.refreshMatView(viewname=>'schema.matview', invalidate=>true)",
"/*+ cache(ttl:3600000) */ select t.col, t1.col from t, t1 where t.id = t1.id",
"/*+ cache(updatable) */ select t.col, t1.col from t, t1 where t.id = t1.id",
"CALL SYSADMIN.refreshMatViewRow(viewname=>'schema.matview', key=>5)"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/ch02s05 |
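Because the refresh procedures above are ordinary SQL calls, they can be scripted from a shell. The sketch below drives them through a generic JDBC command-line client; the sql-cli command name, the connection URL, and the credentials are hypothetical placeholders for whichever SQL tool you use against the VDB.

# Sketch: trigger a full, non-invalidating refresh of the sample view.
# "sql-cli" is a hypothetical JDBC client; substitute your own tool.
sql-cli --url "jdbc:teiid:sakila@mm://dvhost:31000" \
        --user admin --password secret \
        --sql "CALL SYSADMIN.refreshMatView(viewname=>'sakila.actor', invalidate=>false)"

# Refresh a single row of an updatable materialized view by primary key.
sql-cli --url "jdbc:teiid:sakila@mm://dvhost:31000" \
        --user admin --password secret \
        --sql "CALL SYSADMIN.refreshMatViewRow(viewname=>'sakila.actor', key=>5)"

Running the first call from cron at the desired interval is one way to approximate a managed ttl while keeping full control over when the blocking load happens.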
Chapter 5. Certificate-Based Login Modules | Chapter 5. Certificate-Based Login Modules 5.1. Certificate Login Module Short name : Certificate Full name : org.jboss.security.auth.spi.BaseCertLoginModule Parent : AbstractServer Login Module The Certificate login module authenticates users based on X509 certificates. A typical use case for this login module is CLIENT-CERT authentication in the web tier. This login module only performs authentication and must be combined with another login module capable of acquiring authorization roles to completely define access to secured web or Jakarta Enterprise Beans components. Two subclasses of this login module, CertRoles Login Module and DatabaseCert Login Module, extend the behavior to obtain the authorization roles from either a properties file or database. Table 5.1. Certificate Login Module Options Option Type Default Description securityDomain String other Name of the security domain that has the JSSE configuration for the truststore holding the trusted certificates. verifier class none The class name of the org.jboss.security.auth.certs.X509CertificateVerifier to use for verification of the login certificate. 5.2. CertificateRoles Login Module Short name : CertificateRoles Full name : org.jboss.security.auth.spi.CertRolesLoginModule Parent : Certificate Login Module The CertificateRoles login module adds role mapping capabilities from a properties file using the following options: Table 5.2. CertificateRoles Login Module Options Option Type Default Description rolesProperties String roles.properties The name of the resource or file containing the roles to assign to each user. The role properties file must be in the format username=role1,role2 where the user name is the DN of the certificate, escaping any equals and space characters. The following example is in the correct format: CN\=unit-tests-client,\ OU\=Red\ Hat\ Inc.,\ O\=Red\ Hat\ Inc.,\ ST\=North\ Carolina,\ C\=US defaultRolesProperties String defaultRoles.properties Name of the resource or file to fall back to if the rolesProperties file cannot be found. roleGroupSeparator A single character. . (a single period) Which character to use as the role group separator in the rolesProperties file. 5.3. DatabaseCertificate Login Module Short name : DatabaseCertificate Full name : org.jboss.security.auth.spi.DatabaseCertLoginModule Parent : Certificate Login Module The DatabaseCertificate login module adds mapping capabilities from a database table through these additional options: Table 5.3. DatabaseCertificate Login Module Options Option Type Default Description dsJndiName A JNDI resource java:/DefaultDS The name of the JNDI resource storing the authentication information. rolesQuery prepared SQL statement select Role, RoleGroup from Roles where PrincipalID=? SQL prepared statement to be executed in order to map roles. It should be equivalent to the query 'select Role, RoleGroup from Roles where PrincipalID=?', where Role is the role name and the RoleGroup column value should always be either Roles with a capital R or CallerPrincipal. suspendResume true or false true Whether any existing Jakarta Transactions transaction should be suspended during database operations. transactionManagerJndiName JNDI Resource java:/TransactionManager The JNDI name of the transaction manager used by the login module. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/login_module_reference/certificate_based_login_modules
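As a hedged configuration sketch, the following management CLI commands wire a CertificateRoles login module into a security domain for CLIENT-CERT authentication. The domain names, the roles file path, and the exact resource addresses are assumptions and may vary between JBoss EAP versions; trust-domain is presumed to be a JSSE-enabled domain whose truststore holds the trusted client certificates.

# Sketch: run via jboss-cli.sh --connect --file=cert-domain.cli
# (domain names and file paths are assumptions for this example).
/subsystem=security/security-domain=cert-domain:add(cache-type=default)
/subsystem=security/security-domain=cert-domain/authentication=classic:add()
/subsystem=security/security-domain=cert-domain/authentication=classic/login-module=CertificateRoles:add(code=CertificateRoles,flag=required,module-options=[("securityDomain"=>"trust-domain"),("rolesProperties"=>"file:${jboss.server.config.dir}/cert-roles.properties")])
reload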
Chapter 10. VolumeAttachment [storage.k8s.io/v1] | Chapter 10. VolumeAttachment [storage.k8s.io/v1] Description VolumeAttachment captures the intent to attach or detach the specified volume to/from the specified node. VolumeAttachment objects are non-namespaced. Type object Required spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object VolumeAttachmentSpec is the specification of a VolumeAttachment request. status object VolumeAttachmentStatus is the status of a VolumeAttachment request. 10.1.1. .spec Description VolumeAttachmentSpec is the specification of a VolumeAttachment request. Type object Required attacher source nodeName Property Type Description attacher string Attacher indicates the name of the volume driver that MUST handle this request. This is the name returned by GetPluginName(). nodeName string The node that the volume should be attached to. source object VolumeAttachmentSource represents a volume that should be attached. Right now only PersistentVolumes can be attached via an external attacher; in the future we may also allow inline volumes in pods. Exactly one member can be set. 10.1.2. .spec.source Description VolumeAttachmentSource represents a volume that should be attached. Right now only PersistentVolumes can be attached via an external attacher; in the future we may also allow inline volumes in pods. Exactly one member can be set. Type object Property Type Description inlineVolumeSpec PersistentVolumeSpec inlineVolumeSpec contains all the information necessary to attach a persistent volume defined by a pod's inline VolumeSource. This field is populated only for the CSIMigration feature. It contains translated fields from a pod's inline VolumeSource to a PersistentVolumeSpec. This field is beta-level and is only honored by servers that enabled the CSIMigration feature. persistentVolumeName string Name of the persistent volume to attach. 10.1.3. .status Description VolumeAttachmentStatus is the status of a VolumeAttachment request. Type object Required attached Property Type Description attachError object VolumeError captures an error encountered during a volume operation. attached boolean Indicates the volume is successfully attached. This field must only be set by the entity completing the attach operation, i.e. the external-attacher. attachmentMetadata object (string) Upon successful attach, this field is populated with any information returned by the attach operation that must be passed into subsequent WaitForAttach or Mount calls. This field must only be set by the entity completing the attach operation, i.e. the external-attacher. detachError object VolumeError captures an error encountered during a volume operation. 10.1.4.
.status.attachError Description VolumeError captures an error encountered during a volume operation. Type object Property Type Description message string String detailing the error encountered during Attach or Detach operation. This string may be logged, so it should not contain sensitive information. time Time Time the error was encountered. 10.1.5. .status.detachError Description VolumeError captures an error encountered during a volume operation. Type object Property Type Description message string String detailing the error encountered during Attach or Detach operation. This string may be logged, so it should not contain sensitive information. time Time Time the error was encountered. 10.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/volumeattachments DELETE : delete collection of VolumeAttachment GET : list or watch objects of kind VolumeAttachment POST : create a VolumeAttachment /apis/storage.k8s.io/v1/watch/volumeattachments GET : watch individual changes to a list of VolumeAttachment. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/volumeattachments/{name} DELETE : delete a VolumeAttachment GET : read the specified VolumeAttachment PATCH : partially update the specified VolumeAttachment PUT : replace the specified VolumeAttachment /apis/storage.k8s.io/v1/watch/volumeattachments/{name} GET : watch changes to an object of kind VolumeAttachment. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/storage.k8s.io/v1/volumeattachments/{name}/status GET : read status of the specified VolumeAttachment PATCH : partially update status of the specified VolumeAttachment PUT : replace status of the specified VolumeAttachment 10.2.1. /apis/storage.k8s.io/v1/volumeattachments Table 10.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of VolumeAttachment Table 10.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 10.3. 
Body parameters Parameter Type Description body DeleteOptions schema Table 10.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind VolumeAttachment Table 10.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.6. HTTP responses HTTP code Reponse body 200 - OK VolumeAttachmentList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeAttachment Table 10.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.8. Body parameters Parameter Type Description body VolumeAttachment schema Table 10.9. HTTP responses HTTP code Reponse body 200 - OK VolumeAttachment schema 201 - Created VolumeAttachment schema 202 - Accepted VolumeAttachment schema 401 - Unauthorized Empty 10.2.2. /apis/storage.k8s.io/v1/watch/volumeattachments Table 10.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. 
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of VolumeAttachment. deprecated: use the 'watch' parameter with a list operation instead. Table 10.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.3. /apis/storage.k8s.io/v1/volumeattachments/{name} Table 10.12. Global path parameters Parameter Type Description name string name of the VolumeAttachment Table 10.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a VolumeAttachment Table 10.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.15. Body parameters Parameter Type Description body DeleteOptions schema Table 10.16. HTTP responses HTTP code Reponse body 200 - OK VolumeAttachment schema 202 - Accepted VolumeAttachment schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeAttachment Table 10.17. HTTP responses HTTP code Reponse body 200 - OK VolumeAttachment schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeAttachment Table 10.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.19. Body parameters Parameter Type Description body Patch schema Table 10.20. HTTP responses HTTP code Reponse body 200 - OK VolumeAttachment schema 201 - Created VolumeAttachment schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeAttachment Table 10.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.22. Body parameters Parameter Type Description body VolumeAttachment schema Table 10.23. 
HTTP responses HTTP code Reponse body 200 - OK VolumeAttachment schema 201 - Created VolumeAttachment schema 401 - Unauthorized Empty 10.2.4. /apis/storage.k8s.io/v1/watch/volumeattachments/{name} Table 10.24. Global path parameters Parameter Type Description name string name of the VolumeAttachment Table 10.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind VolumeAttachment. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 10.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.5. /apis/storage.k8s.io/v1/volumeattachments/{name}/status Table 10.27. Global path parameters Parameter Type Description name string name of the VolumeAttachment Table 10.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified VolumeAttachment Table 10.29. HTTP responses HTTP code Reponse body 200 - OK VolumeAttachment schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeAttachment Table 10.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.31. Body parameters Parameter Type Description body Patch schema Table 10.32. HTTP responses HTTP code Reponse body 200 - OK VolumeAttachment schema 201 - Created VolumeAttachment schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeAttachment Table 10.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.34. Body parameters Parameter Type Description body VolumeAttachment schema Table 10.35. HTTP responses HTTP code Reponse body 200 - OK VolumeAttachment schema 201 - Created VolumeAttachment schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/storage_apis/volumeattachment-storage-k8s-io-v1 |
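As a hedged usage sketch for the endpoints above, the following commands list VolumeAttachment objects and read the attach status of one of them; the object name csi-0a1b2c is a placeholder, since real names are generated by the attach controller.

# Sketch: list all VolumeAttachments with their attacher, PV, and node.
oc get volumeattachments

# Read the attached flag and any attach error message for one object.
oc get volumeattachment csi-0a1b2c \
  -o jsonpath='{.status.attached}{"\n"}{.status.attachError.message}{"\n"}'

# Equivalent raw call against the REST endpoint documented above.
oc get --raw /apis/storage.k8s.io/v1/volumeattachments/csi-0a1b2c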
Chapter 3. Kafka schema reference | Chapter 3. Kafka schema reference Property Property type Description spec KafkaSpec The specification of the Kafka and ZooKeeper clusters, and Topic Operator. status KafkaStatus The status of the Kafka and ZooKeeper clusters, and Topic Operator. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-Kafka-reference |
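A hedged way to inspect both properties on a live cluster is shown below; the resource name my-cluster and namespace kafka are assumptions.

# Sketch: dump the full spec and status of a Kafka resource.
oc get kafka my-cluster -n kafka -o yaml

# Status conditions only, as a quick readiness check.
oc get kafka my-cluster -n kafka \
  -o jsonpath='{.status.conditions[*].type}{"\n"}'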
Providing feedback on Red Hat build of Quarkus documentation | Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/compiling_your_red_hat_build_of_quarkus_applications_to_native_executables/proc_providing-feedback-on-red-hat-documentation_quarkus-building-native-executable |
5.6. Creating Distributed Replicated Volumes | 5.6. Creating Distributed Replicated Volumes Use distributed replicated volumes in environments where the need to scale storage and ensure high reliability is critical. Distributed replicated volumes also offer improved read performance in most environments. Note The number of bricks must be a multiple of the replica count for a distributed replicated volume. Also, the order in which bricks are specified has a great effect on data protection. Each replica_count consecutive bricks in the list you give will form a replica set, with all replica sets combined into a distribute set. To ensure that replica-set members are not placed on the same node, list the first brick on every server, then the second brick on every server in the same order, and so on. Prerequisites A trusted storage pool has been created, as described in Section 4.1, "Adding Servers to the Trusted Storage Pool". Understand how to start and stop volumes, as described in Section 5.10, "Starting Volumes". 5.6.1. Creating Three-way Distributed Replicated Volumes A three-way distributed replicated volume distributes files and creates three copies of them across multiple bricks in the volume. The number of bricks must be equal to the replica count for a replicated volume. To protect against server and disk failures, it is recommended that the bricks of the volume are from different servers. Synchronous three-way distributed replication is now fully supported in Red Hat Gluster Storage. It is recommended that three-way distributed replicated volumes use JBOD, but use of hardware RAID with three-way distributed replicated volumes is also supported. Figure 5.3. Illustration of a Three-way Distributed Replicated Volume Creating three-way distributed replicated volumes Run the gluster volume create command to create the distributed replicated volume. The syntax is # gluster volume create NEW-VOLNAME [replica COUNT] [transport tcp | rdma (Deprecated) | tcp,rdma] NEW-BRICK... The default value for transport is tcp. Other options can be passed such as auth.allow or auth.reject. See Section 11.1, "Configuring Volume Options" for a full list of parameters. Example 5.5. Six Node Distributed Replicated Volume with Three-way Replication The order in which bricks are specified determines how bricks are replicated with each other. For example, the first 3 bricks, where 3 is the replica count, form a replica set. Run # gluster volume start VOLNAME to start the volume. Optionally, run the gluster volume info command to display the volume information. Important By default, the client-side quorum is enabled on three-way distributed replicated volumes. You must also set server-side quorum on the distributed-replicated volumes to prevent split-brain scenarios. For more information on setting quorums, see Section 11.15.1, "Preventing Split-brain". | [
"gluster v create glustervol replica 3 transport tcp server1:/rhgs/brick1 server2:/rhgs/brick1 server3:/rhgs/brick1 server4:/rhgs/brick1 server5:/rhgs/brick1 server6:/rhgs/brick1 volume create: glutervol: success: please start the volume to access data",
"gluster v start glustervol volume start: glustervol: success"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-creating_distributed_replicated_volumes |
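After the volume is started, the hedged sketch below verifies the replica-set layout and mounts the volume from a client; the server and volume names match the example above, but the mount point is an assumption and the native-client package must already be installed on the client.

# Sketch: confirm the brick-to-replica-set layout Gluster reports.
gluster volume info glustervol

# Mount the volume with the native FUSE client; any server in the
# trusted storage pool can serve as the mount source.
mkdir -p /mnt/glustervol
mount -t glusterfs server1:/glustervol /mnt/glustervol

# Check self-heal status across the three-way replica sets.
gluster volume heal glustervol info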
Installing Identity Management | Installing Identity Management Red Hat Enterprise Linux 8 Methods of installing IdM servers and clients Red Hat Customer Content Services | [
"[user@server ~]USD dig +short -t SRV _ntp._udp.example.com 0 100 123 ntpserver .example.com.",
"[user@server ~]USD dig +short -t SRV _ntp._udp.example.com 0 100 123 ntpserver .example.com.",
"hostname server.idm.example.com",
"ip addr show 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:1a:4a:10:4e:33 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1 /24 brd 192.0.2.255 scope global dynamic eth0 valid_lft 106694sec preferred_lft 106694sec inet6 2001:DB8::1111 /32 scope global dynamic valid_lft 2591521sec preferred_lft 604321sec inet6 fe80::56ee:75ff:fe2b:def6/64 scope link valid_lft forever preferred_lft forever",
"dig +short server.idm.example.com A 192.0.2.1",
"dig +short server.idm.example.com AAAA 2001:DB8::1111",
"dig +short -x 192.0.2.1 server.idm.example.com",
"dig +short -x 2001:DB8::1111 server.idm.example.com",
"dig @ IP_address_of_the_DNS_forwarder . SOA",
"dig +dnssec @ IP_address_of_the_DNS_forwarder . SOA",
";; ->>HEADER<<- opcode: QUERY, status: NOERROR , id: 48655 ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; ANSWER SECTION: . 31679 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2015100701 1800 900 604800 86400 . 31679 IN RRSIG SOA 8 0 86400 20151017170000 20151007160000 62530 . GNVz7SQs [...]",
"127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.0.2.1 server.idm.example.com server 2001:DB8::1111 server.idm.example.com server",
"systemctl status firewalld.service",
"systemctl start firewalld.service systemctl enable firewalld.service",
"firewall-cmd --permanent --add-port={80/tcp,443/tcp,389/tcp,636/tcp,88/tcp,88/udp,464/tcp,464/udp,53/tcp,53/udp}",
"firewall-cmd --permanent --add-service={freeipa-4,dns}",
"firewall-cmd --reload",
"firewall-cmd --runtime-to-permanent",
"nmap -p 80,443,389,636,88,464,53 server.idm.example.com [...] PORT STATE SERVICE 53/tcp open domain 80/tcp open http 88/tcp open kerberos-sec 389/tcp open ldap 443/tcp open https 464/tcp open kpasswd5 636/tcp open ldapssl",
"nmap -sU -p 88,464,53 server.idm.example.com [...] PORT STATE SERVICE 53/udp open domain 88/udp open|filtered kerberos-sec 464/udp open|filtered kpasswd5",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms",
"yum module enable idm:DL1",
"yum distro-sync",
"yum module install idm:DL1/server",
"yum module install idm:DL1/dns",
"yum module install idm:DL1/adtrust",
"yum module install idm:DL1/{dns,adtrust}",
"yum module install idm:DL1/client",
"umask 0027",
"umask 0022",
"umask 0027",
"ipa-server-install",
"Do you want to configure integrated DNS (BIND)? [no]: yes",
"Server host name [server.idm.example.com]: Please confirm the domain name [idm.example.com]: Please provide a realm name [IDM.EXAMPLE.COM]:",
"Directory Manager password: IPA admin password:",
"Do you want to configure DNS forwarders? [yes]:",
"Do you want to search for missing reverse zones? [yes]:",
"Do you want to create reverse zone for IP 192.0.2.1 [yes]: Please specify the reverse zone name [2.0.192.in-addr.arpa.]: Using reverse zone(s) 2.0.192.in-addr.arpa.",
"Continue to configure the system with these values? [no]: yes",
"ipa-server-install --realm IDM.EXAMPLE.COM --ds-password DM_password --admin-password admin_password --unattended --setup-dns --forwarder 192.0.2.1 --no-reverse",
"ipa-server-install --external-ca",
"ipa-server-install --external-ca --external-ca-type=ms-cs --external-ca-profile= <oid>/<name>/default",
"ipa-server-install --external-ca",
"Do you want to configure integrated DNS (BIND)? [no]: yes",
"Server host name [ server.idm.example.com ]: Please confirm the domain name [ idm.example.com ]: Please provide a realm name [ IDM.EXAMPLE.COM ]:",
"Directory Manager password: IPA admin password:",
"Do you want to configure DNS forwarders? [yes]:",
"Do you want to search for missing reverse zones? [yes]:",
"Do you want to create reverse zone for IP 192.0.2.1 [yes]: Please specify the reverse zone name [2.0.192.in-addr.arpa.]: Using reverse zone(s) 2.0.192.in-addr.arpa.",
"Continue to configure the system with these values? [no]: yes",
"Configuring certificate server (pki-tomcatd): Estimated time 3 minutes 30 seconds [1/8]: creating certificate server user [2/8]: configuring certificate server instance The next step is to get /root/ipa.csr signed by your CA and re-run /sbin/ipa-server-install as: /sbin/ipa-server-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate",
"ipa-server-install --external-cert-file= /tmp/servercert20170601.pem --external-cert-file= /tmp/cacert.pem",
"ipa : CRITICAL failed to configure ca instance Command '/usr/sbin/pkispawn -s CA -f /tmp/ configuration_file ' returned non-zero exit status 1 Configuration of CA failed",
"ipa : CRITICAL failed to configure ca instance Command '/usr/sbin/pkispawn -s CA -f /tmp/ configuration_file ' returned non-zero exit status 1 Configuration of CA failed",
"env|grep proxy http_proxy=http://example.com:8080 ftp_proxy=http://example.com:8080 https_proxy=http://example.com:8080",
"for i in ftp http https; do unset USD{i}_proxy; done",
"pkidestroy -s CA -i pki-tomcat; rm -rf /var/log/pki/pki-tomcat /etc/sysconfig/pki-tomcat /etc/sysconfig/pki/tomcat/pki-tomcat /var/lib/pki/pki-tomcat /etc/pki/pki-tomcat /root/ipa.csr",
"ipa-server-install --uninstall",
"ipa-server-install --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret --dirsrv-cert-file /tmp/server.crt --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --ca-cert-file ca.crt",
"Do you want to configure integrated DNS (BIND)? [no]: yes",
"Server host name [server.idm.example.com]: Please confirm the domain name [idm.example.com]: Please provide a realm name [IDM.EXAMPLE.COM]:",
"Directory Manager password: IPA admin password:",
"Do you want to configure DNS forwarders? [yes]:",
"Do you want to search for missing reverse zones? [yes]:",
"Do you want to create reverse zone for IP 192.0.2.1 [yes]: Please specify the reverse zone name [2.0.192.in-addr.arpa.]: Using reverse zone(s) 2.0.192.in-addr.arpa.",
"Continue to configure the system with these values? [no]: yes",
"ipa-server-install",
"Do you want to configure integrated DNS (BIND)? [no]:",
"Server host name [server.idm.example.com]: Please confirm the domain name [idm.example.com]: Please provide a realm name [IDM.EXAMPLE.COM]:",
"Directory Manager password: IPA admin password:",
"NetBIOS domain name [EXAMPLE]: Do you want to configure chrony with NTP server or pool address? [no]:",
"Continue to configure the system with these values? [no]: yes",
"Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server",
"ipa-server-install --realm IDM.EXAMPLE.COM --ds-password DM_password --admin-password admin_password --unattended",
"Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server",
"Please add records in this file to your DNS system: /tmp/ipa.system.records.6zdjqxh3.db",
"_kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos-master._udp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos._tcp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos._udp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos.example.com. 86400 IN TXT \"EXAMPLE.COM\" _kpasswd._tcp.example.com. 86400 IN SRV 0 100 464 server.example.com. _kpasswd._udp.example.com. 86400 IN SRV 0 100 464 server.example.com. _ldap._tcp.example.com. 86400 IN SRV 0 100 389 server.example.com.",
"ipa-server-install --external-ca --external-ca-type=ms-cs --external-ca-profile= <oid>/<name>/default",
"ipa-server-install --external-ca",
"Do you want to configure integrated DNS (BIND)? [no]:",
"Server host name [ server.idm.example.com ]: Please confirm the domain name [ idm.example.com ]: Please provide a realm name [ IDM.EXAMPLE.COM ]:",
"Directory Manager password: IPA admin password:",
"Continue to configure the system with these values? [no]: yes",
"Configuring certificate server (pki-tomcatd): Estimated time 3 minutes 30 seconds [1/8]: creating certificate server user [2/8]: configuring certificate server instance The next step is to get /root/ipa.csr signed by your CA and re-run /sbin/ipa-server-install as: /sbin/ipa-server-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate",
"ipa-server-install --external-cert-file= /tmp/servercert20170601.pem --external-cert-file= /tmp/cacert.pem",
"Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server",
"ipa : CRITICAL failed to configure ca instance Command '/usr/sbin/pkispawn -s CA -f /tmp/pass:quotes[ configuration_file ]' returned non-zero exit status 1 Configuration of CA failed",
"ipa-server-install --external-ca --realm IDM.EXAMPLE.COM --ds-password DM_password --admin-password admin_password --unattended",
"Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes [1/11]: configuring certificate server instance The next step is to get /root/ipa.csr signed by your CA and re-run /usr/sbin/ipa-server-install as: /usr/sbin/ipa-server-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate The ipa-server-install command was successful",
"ipa-server-install --external-cert-file= /tmp/servercert20170601.pem --external-cert-file= /tmp/cacert.pem --realm IDM.EXAMPLE.COM --ds-password DM_password --admin-password admin_password --unattended",
"Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server",
"Please add records in this file to your DNS system: /tmp/ipa.system.records.6zdjqxh3.db",
"_kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos-master._udp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos._tcp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos._udp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos.example.com. 86400 IN TXT \"EXAMPLE.COM\" _kpasswd._tcp.example.com. 86400 IN SRV 0 100 464 server.example.com. _kpasswd._udp.example.com. 86400 IN SRV 0 100 464 server.example.com. _ldap._tcp.example.com. 86400 IN SRV 0 100 389 server.example.com.",
"dn: cn=config changetype: modify replace: nsslapd-idletimeout nsslapd-idletimeout: 1800 - replace: nsslapd-maxdescriptors nsslapd-maxdescriptors: 8192",
"ipa-server-install --dirsrv-config-file filename.ldif",
"ipa-replica-install --dirsrv-config-file filename.ldif",
"[user@server ~]USD sudo tail -n 10 /var/log/ipaserver-install.log [sudo] password for user: value = gen.send(prev_value) File \"/usr/lib/python3.6/site-packages/ipapython/install/common.py\", line 65, in _install for unused in self._installer(self.parent): File \"/usr/lib/python3.6/site-packages/ipaserver/install/server/ init .py\", line 564, in main master_install(self) File \"/usr/lib/python3.6/site-packages/ipaserver/install/server/install.py\", line 291, in decorated raise ScriptError() 2020-05-27T22:59:41Z DEBUG The ipa-server-install command failed, exception: ScriptError: 2020-05-27T22:59:41Z ERROR The ipa-server-install command failed. See /var/log/ipaserver-install.log for more information",
"[user@server ~]USD sudo less -N +G /var/log/ipaserver-install.log",
"[user@server ~]USD sudo less -N +G /var/log/httpd/error_log [user@server ~]USD sudo less -N +G /var/log/dirsrv/slapd- INSTANCE-NAME /access [user@server ~]USD sudo less -N +G /var/log/dirsrv/slapd- INSTANCE-NAME /errors",
"[user@server ~]USD sudo less -N +G /var/log/pki/pki-ca-spawn. 20200527185902 .log",
"ipa-server-install The log file for this installation can be found in /var/log/ipaserver-install.log IPA server is already configured on this system. If you want to reinstall the IPA server, please uninstall it first using 'ipa-server-install --uninstall' . The ipa-server-install command failed. See /var/log/ipaserver-install.log for more information",
"ipa-server-install --uninstall",
"ipa server-role-find --role 'DNS server' ---------------------- 2 server roles matched ---------------------- Server name: server456.idm.example.com Role name: DNS server Role status: enabled [...] ---------------------------- Number of entries returned 2 ----------------------------",
"ipa server-role-find --role 'CA server' ---------------------- 2 server roles matched ---------------------- Server name: server123.idm.example.com Role name: CA server Role status: enabled Server name: r8server.idm.example.com Role name: CA server Role status: enabled ---------------------------- Number of entries returned 2 ----------------------------",
"ipa server-role-find --role 'KRA server' ---------------------- 2 server roles matched ---------------------- Server name: server123.idm.example.com Role name: KRA server Role status: enabled Server name: r8server.idm.example.com Role name: KRA server Role status: enabled ---------------------------- Number of entries returned 2 ----------------------------",
"ipa config-show | grep 'CA renewal' IPA CA renewal master: r8server.idm.example.com",
"ipa-crlgen-manage status CRL generation: disabled",
"ssh idm_user@server456",
"[idm_user@server456 ~]USD kinit admin",
"[idm_user@server456 ~]USD ipa-replica-manage dnarange-show server123.idm.example.com: 1001-1500 server456.idm.example.com: 1501-2000 [...]",
"[idm_user@server456 ~]USD ipa user-add test_idm_user",
"[idm_user@server456 ~]USD ipa server-del server123.idm.example.com",
"ipa-server-install --uninstall Are you sure you want to continue with the uninstall procedure? [no]: true",
"ipactl stop",
"yum upgrade ipa- *",
"yum distro-sync ipa- *",
"lookup_family_order = ipv4_only",
"yum module install idm",
"yum module enable idm:DL1 yum distro-sync",
"yum module install idm:DL1/client",
"ipa-client-install --mkhomedir",
"ipa-client-install --enable-dns-updates --mkhomedir",
"Client hostname: client.example.com Realm: EXAMPLE.COM DNS Domain: example.com IPA Server: server.example.com BaseDN: dc=example,dc=com Continue to configure the system with these values? [no]: yes",
"User authorized to enroll computers: hostadmin Password for hostadmin @ EXAMPLE.COM :",
"Client configuration complete.",
"ipa host-add client.example.com --random -------------------------------------------------- Added host \"client.example.com\" -------------------------------------------------- Host name: client.example.com Random password: W5YpARl=7M.n Password: True Keytab: False Managed by: server.example.com",
"ipa-client-install --mkhomedir --password= password",
"ipa-client-install --password 'W5YpARl=7M.n' --enable-dns-updates --mkhomedir",
"Client hostname: client.example.com Realm: EXAMPLE.COM DNS Domain: example.com IPA Server: server.example.com BaseDN: dc=example,dc=com Continue to configure the system with these values? [no]: yes",
"Client configuration complete.",
"ipa-client-install --password ' W5YpARl=7M.n ' --mkhomedir --unattended",
"ipa-client-install --password ' W5YpARl=7M.n ' --domain idm.example.com --server server.idm.example.com --realm IDM.EXAMPLE.COM --mkhomedir --unattended",
"BASE dc=example,dc=com URI ldap://ldap.example.com #URI ldaps://server.example.com # modified by IPA #BASE dc=ipa,dc=example,dc=com # modified by IPA",
"[user@client ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)",
"[user@client ~]USD su - Last login: Thu Oct 18 18:39:11 CEST 2018 from 192.168.122.1 on pts/0",
"ipa host-add client.example.com --password= secret",
"%packages ipa-client",
"%post --log=/root/ks-post.log Generate SSH keys; ipa-client-install uploads them to the IdM server by default /usr/libexec/openssh/sshd-keygen rsa Run the client install script /usr/sbin/ipa-client-install --hostname= client.example.com --domain= EXAMPLE.COM --enable-dns-updates --mkhomedir -w secret --realm= EXAMPLE.COM --server= server.example.com",
"env DBUS_SYSTEM_BUS_ADDRESS=unix:path=/dev/null getcert list env DBUS_SYSTEM_BUS_ADDRESS=unix:path=/dev/null ipa-client-install",
"[user@client ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)",
"[user@client ~]USD su - Last login: Thu Oct 18 18:39:11 CEST 2018 from 192.168.122.1 on pts/0",
"[user@server ~]USD sudo grep ScriptError /var/log/ipaclient-install.log [sudo] password for user: 2020-05-28T18:24:50Z DEBUG The ipa-client-install command failed, exception: ScriptError : One of password / principal / keytab is required.",
"[user@server ~]USD sudo less -N +G /var/log/ipaclient-install.log",
"[user@server ~]USD ipa dnszone-mod idm.example.com. --dynamic-update=TRUE",
"[user@server ~]USD sudo firewall-cmd --permanent --add-port=53/tcp --add-port=53/udp [sudo] password for user: success [user@server ~]USD firewall-cmd --runtime-to-permanent success",
"[user@server ~]USD sudo grep nsupdate /var/log/ipaclient-install.log",
"Joining realm failed: Failed to add key to the keytab child exited with 11 Installation failed. Rolling back changes.",
"[user@client ~]USD sudo rm /etc/krb5.keytab [sudo] password for user: [user@client ~]USD ls /etc/krb5.keytab ls: cannot access '/etc/krb5.keytab': No such file or directory",
"The ipa-client-install command was successful.",
"/usr/sbin/ipa-client-automount -U --location <raleigh>",
"ipa-client-install --force-join",
"User authorized to enroll computers: hostadmin Password for hostadmin @ EXAMPLE.COM :",
"ipa-client-install --keytab /tmp/krb5.keytab",
"[user@client ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)",
"[user@client ~]USD su - Last login: Thu Oct 18 18:39:11 CEST 2018 from 192.168.122.1 on pts/0",
"ipa-client-install --uninstall",
"kinit admin kinit: Client '[email protected]' not found in Kerberos database while getting initial credentials",
"ipa-rmkeytab -k /path/to/keytab -r EXAMPLE.COM",
"ipa dnsrecord-del Record name: old-client-name Zone name: idm.example.com No option to delete specific record provided. Delete all? Yes/No (default No): true ------------------------ Deleted record \"old-client-name\"",
"ipa host-del client.idm.example.com",
"rm /etc/krb5.conf",
"mv /etc/krb5.conf.ipa /etc/krb5.conf",
"yum reinstall krb5-libs",
"kinit admin kinit: Client '[email protected]' not found in Kerberos database while getting initial credentials",
"ipa service-find old-client-name.example.com",
"find / -name \"*.keytab\"",
"ipa hostgroup-find old-client-name.example.com",
"ipa-client-install --uninstall",
"kinit admin kinit: Client '[email protected]' not found in Kerberos database while getting initial credentials",
"ipa-rmkeytab -k /path/to/keytab -r EXAMPLE.COM",
"ipa dnsrecord-del Record name: old-client-name Zone name: idm.example.com No option to delete specific record provided. Delete all? Yes/No (default No): true ------------------------ Deleted record \"old-client-name\"",
"ipa host-del client.idm.example.com",
"rm /etc/krb5.conf",
"mv /etc/krb5.conf.ipa /etc/krb5.conf",
"yum reinstall krb5-libs",
"kinit admin kinit: Client '[email protected]' not found in Kerberos database while getting initial credentials",
"hostnamectl set-hostname new-client-name.example.com",
"ipa service-add service_name/new-client-name",
"ipa --version VERSION: 4.8.0 , API_VERSION: 2.233",
"rpm -q ipa-server ipa-server-4.8.0-11 .module+el8.1.0+4247+9f3fd721.x86_64",
"kinit admin",
"ipa hostgroup-add-member ipaservers --hosts client.idm.example.com Host-group: ipaservers Description: IPA server hosts Member hosts: server.idm.example.com, client.idm.example.com ------------------------- Number of members added 1 -------------------------",
"kinit admin",
"kinit admin",
"ipa host-add replica.example.com --random -------------------------------------------------- Added host \"replica.example.com\" -------------------------------------------------- Host name: replica.example.com Random password: W5YpARl=7M.n Password: True Keytab: False Managed by: server.example.com",
"ipa hostgroup-add-member ipaservers --hosts replica.example.com Host-group: ipaservers Description: IPA server hosts Member hosts: server.example.com, replica.example.com ------------------------- Number of members added 1 -------------------------",
"ipa-replica-install --setup-dns --forwarder 192.0.2.1 --setup-ca",
"ipa-replica-install --setup-dns --forwarder 192.0.2.1",
"ipa-replica-install --setup-ca",
"ipa dns-update-system-records --dry-run --out dns_records_file.nsupdate",
"ipa-replica-install --dirsrv-cert-file /tmp/server.crt --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret",
"ipa-replica-install --hidden-replica",
"[admin@new_replica ~]USD ipa user-add test_user",
"[admin@another_replica ~]USD ipa user-show test_user",
"[user@replica ~]USD sudo tail -n 10 /var/log/ipareplica-install.log [sudo] password for user: func(installer) File \"/usr/lib/python3.6/site-packages/ipaserver/install/server/replicainstall.py\", line 424, in decorated func(installer) File \"/usr/lib/python3.6/site-packages/ipaserver/install/server/replicainstall.py\", line 785, in promote_check ensure_enrolled(installer) File \"/usr/lib/python3.6/site-packages/ipaserver/install/server/replicainstall.py\", line 740, in ensure_enrolled raise ScriptError(\"Configuration of client side components failed!\") 2020-05-28T18:24:51Z DEBUG The ipa-replica-install command failed, exception: ScriptError: Configuration of client side components failed! 2020-05-28T18:24:51Z ERROR Configuration of client side components failed! 2020-05-28T18:24:51Z ERROR The ipa-replica-install command failed. See /var/log/ipareplica-install.log for more information",
"[user@replica ~]USD sudo less -N +G /var/log/ipareplica-install.log",
"[user@replica ~]USD sudo less -N +G /var/log/ipareplica-conncheck.log [user@replica ~]USD sudo less -N +G /var/log/ipaclient-install.log [user@replica ~]USD sudo less -N +G /var/log/httpd/error_log [user@replica ~]USD sudo less -N +G /var/log/dirsrv/slapd- INSTANCE-NAME /access [user@replica ~]USD sudo less -N +G /var/log/dirsrv/slapd- INSTANCE-NAME /errors [user@replica ~]USD sudo less -N +G /var/log/ipaserver-install.log",
"[user@server ~]USD sudo less -N +G /var/log/httpd/error_log [user@server ~]USD sudo less -N +G /var/log/dirsrv/slapd- INSTANCE-NAME /access [user@server ~]USD sudo less -N +G /var/log/dirsrv/slapd- INSTANCE-NAME /errors",
"[user@server ~]USD sudo less -N +G /var/log/pki/pki-ca-spawn. 20200527185902 .log",
"ipa-replica-install Your system may be partly configured. Run /usr/sbin/ipa-server-install --uninstall to clean up. IPA server is already configured on this system. If you want to reinstall the IPA server, please uninstall it first using 'ipa-server-install --uninstall'. The ipa-replica-install command failed. See /var/log/ipareplica-install.log for more information",
"ipa-server-install --uninstall",
"ipa server-del replica.idm.example.com",
"[27/40]: setting up initial replication Starting replication, please wait until this has completed. Update in progress, 15 seconds elapsed [ldap://server.example.com:389] reports: Update failed! Status: [49 - LDAP error: Invalid credentials] [error] RuntimeError: Failed to start replication Your system may be partly configured. Run /usr/sbin/ipa-server-install --uninstall to clean up. ipa.ipapython.install.cli.install_tool(CompatServerReplicaInstall): ERROR Failed to start replication ipa.ipapython.install.cli.install_tool(CompatServerReplicaInstall): ERROR The ipa-replica-install command failed. See /var/log/ipareplica-install.log for more information",
"[user@server ~]USD date Thu May 28 21:03:57 EDT 2020 [user@replica ~]USD sudo timedatectl set-time '2020-05-28 21:04:00'",
"ipa server-role-show r8server.idm.example.com Role name: DNS server Server name: r8server.idm.example.com Role name: DNS server Role status: absent",
"yum module enable idm:DL1",
"yum module install idm:DL1/dns",
"ipa-dns-install",
"Do you want to configure DNS forwarders? [yes]:",
"Do you want to search for missing reverse zones? [yes]:",
"Do you want to create reverse zone for IP 192.0.2.1 [yes]: Please specify the reverse zone name [2.0.192.in-addr.arpa.]: Using reverse zone(s) 2.0.192.in-addr.arpa.",
"[root@idmserver ~] ipa-ca-install",
"[root@idmserver ~] ipa-ca-install --external-ca",
"ipa-ca-install --external-cert-file=/root/master.crt --external-cert-file=/root/ca.crt",
"[root@idmserver ~] ipa-ca-install",
"ipa topologysuffix-find --------------------------- 2 topology suffixes matched --------------------------- Suffix name: ca Managed LDAP suffix DN: o=ipaca Suffix name: domain Managed LDAP suffix DN: dc=example,dc=com ---------------------------- Number of entries returned 2 ----------------------------",
"ipa topologysegment-find Suffix name: domain ----------------- 1 segment matched ----------------- Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 1 ----------------------------",
"ipa topologysegment-show Suffix name: domain Segment name: server1.example.com-to-server2.example.com Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both",
"ipa topologysegment-add Suffix name: domain Left node: server1.example.com Right node: server2.example.com Segment name [server1.example.com-to-server2.example.com]: new_segment --------------------------- Added segment \"new_segment\" --------------------------- Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both",
"ipa topologysegment-show Suffix name: domain Segment name: new_segment Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both",
"ipa topologysegment-find Suffix name: domain ------------------ 8 segments matched ------------------ Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 8 ----------------------------",
"ipa topologysegment-del Suffix name: domain Segment name: new_segment ----------------------------- Deleted segment \"new_segment\" -----------------------------",
"ipa topologysegment-find Suffix name: domain ------------------ 7 segments matched ------------------ Segment name: server2.example.com-to-server3.example.com Left node: server2.example.com Right node: server3.example.com Connectivity: both ---------------------------- Number of entries returned 7 ----------------------------",
"[user@server2 ~]USD ipa server-del Server name: server1.example.com Removing server1.example.com from replication topology, please wait ---------------------------------------------------------- Deleted IPA server \"server1.example.com\" ----------------------------------------------------------",
"ipa server-install --uninstall",
"ipa-replica-manage list-ruv server1.example.com:389: 6 server2.example.com:389: 5 server3.example.com:389: 4 server4.example.com:389: 12",
"ipa-replica-manage clean-ruv 6 ipa-replica-manage clean-ruv 5",
"dn: cn=clean replica_ID, cn=cleanallruv, cn=tasks, cn=config objectclass: extensibleObject replica-base-dn: dc=example,dc=com replica-id: replica_ID replica-force-cleaning: no cn: clean replica_ID",
"ipa config-show IPA masters: server1.example.com, server2.example.com, server3.example.com IPA CA servers: server1.example.com, server2.example.com IPA CA renewal master: server1.example.com",
"ipa server-show Server name: server.example.com Enabled server roles: CA server, DNS server, KRA server",
"ipa server-find --servrole \"CA server\" --------------------- 2 IPA servers matched --------------------- Server name: server1.example.com Server name: server2.example.com ---------------------------- Number of entries returned 2 ----------------------------",
"ipa server-state replica.idm.example.com --state=hidden",
"ipa server-state replica.idm.example.com --state=enabled",
"ipa config-show",
"yum install ipa-healthcheck",
"ipa-healthcheck --failures-only []",
"ipa-healthcheck",
"subscription-manager repos --enable ansible-2.8-for-rhel-8-x86_64-rpms",
"yum install ansible",
"yum install ansible-freeipa",
"ls -1 /usr/share/ansible/roles/ ipaclient ipareplica ipaserver",
"ls -1 /usr/share/doc/ansible-freeipa/ playbooks README-client.md README.md README-replica.md README-server.md README-topology.md",
"ls -1 /usr/share/doc/ansible-freeipa/playbooks/ install-client.yml install-cluster.yml install-replica.yml install-server.yml uninstall-client.yml uninstall-cluster.yml uninstall-replica.yml uninstall-server.yml",
"mkdir MyPlaybooks",
"ipaserver_setup_dns=true",
"[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true [...]",
"[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 [...]",
"[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 ipaserver_firewalld_zone= custom zone",
"--- - name: Playbook to configure IPA server hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml roles: - role: ipaserver state: present",
"--- - name: Playbook to configure IPA server hosts: ipaserver become: true roles: - role: ipaserver state: present",
"mkdir MyPlaybooks",
"[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no [...]",
"[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 [...]",
"[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 ipaserver_firewalld_zone= custom zone",
"--- - name: Playbook to configure IPA server hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml roles: - role: ipaserver state: present",
"--- - name: Playbook to configure IPA server hosts: ipaserver become: true roles: - role: ipaserver state: present",
"ansible-playbook -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-server.yml",
"Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server",
"mkdir MyPlaybooks",
"ipaserver_setup_dns=true",
"[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true [...]",
"[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 [...]",
"[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=true ipaserver_auto_forwarders=true ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 ipaserver_firewalld_zone= custom zone [...]",
"--- - name: Playbook to configure IPA server Step 1 hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml vars: ipaserver_external_ca: true roles: - role: ipaserver state: present post_tasks: - name: Copy CSR /root/ipa.csr from node to \"{{ groups.ipaserver[0] + '-ipa.csr' }}\" fetch: src: /root/ipa.csr dest: \"{{ groups.ipaserver[0] + '-ipa.csr' }}\" flat: true",
"--- - name: Playbook to configure IPA server Step 2 hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml vars: ipaserver_external_cert_files: - \"/root/servercert20240601.pem\" - \"/root/cacert.pem\" pre_tasks: - name: Copy \"{{ groups.ipaserver[0] }}-{{ item }}\" to \"/root/{{ item }}\" on node ansible.builtin.copy: src: \"{{ groups.ipaserver[0] }}-{{ item }}\" dest: \"/root/{{ item }}\" force: true with_items: - servercert20240601.pem - cacert.pem roles: - role: ipaserver state: present",
"mkdir MyPlaybooks",
"[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no [...]",
"[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 [...]",
"[ipaserver] server.idm.example.com [ipaserver:vars] ipaserver_domain=idm.example.com ipaserver_realm=IDM.EXAMPLE.COM ipaserver_setup_dns=no ipaadmin_password=MySecretPassword123 ipadm_password=MySecretPassword234 ipaserver_firewalld_zone= custom zone [...]",
"--- - name: Playbook to configure IPA server Step 1 hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml vars: ipaserver_external_ca: true roles: - role: ipaserver state: present post_tasks: - name: Copy CSR /root/ipa.csr from node to \"{{ groups.ipaserver[0] + '-ipa.csr' }}\" fetch: src: /root/ipa.csr dest: \"{{ groups.ipaserver[0] + '-ipa.csr' }}\" flat: true",
"--- - name: Playbook to configure IPA server Step 2 hosts: ipaserver become: true vars_files: - playbook_sensitive_data.yml vars: ipaserver_external_cert_files: - \"/root/servercert20240601.pem\" - \"/root/cacert.pem\" pre_tasks: - name: Copy \"{{ groups.ipaserver[0] }}-{{ item }}\" to \"/root/{{ item }}\" on node ansible.builtin.copy: src: \"{{ groups.ipaserver[0] }}-{{ item }}\" dest: \"/root/{{ item }}\" force: true with_items: - servercert20240601.pem - cacert.pem roles: - role: ipaserver state: present",
"ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-server-step1.yml",
"ansible-playbook -v -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-server-step2.yml",
"Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server",
"--- - name: Playbook to uninstall an IdM replica hosts: ipaserver become: true roles: - role: ipaserver ipaserver_remove_from_domain: true state: absent",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/inventory <path_to_playbooks_directory>/uninstall-server.yml",
"--- - name: Playbook to uninstall an IdM replica hosts: ipaserver become: true roles: - role: ipaserver ipaserver_remove_from_domain: true ipaserver_remove_on_server: server456.idm.example.com ipaserver_ignore_topology_disconnect: true state: absent",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/uninstall-server.yml",
"[ipareplicas] replica1.idm.example.com replica2.idm.example.com replica3.idm.example.com [...]",
"[ipaservers] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com replica3.idm.example.com [...]",
"[ipaservers] server.idm.example.com replica1.idm.example.com [ipareplicas] replica2.idm.example.com replica3.idm.example.com ipareplica_servers=replica1.idm.example.com",
"[ipaservers] server.idm.example.com [ipareplicas_tier1] replica1.idm.example.com [ipareplicas_tier2] replica2.idm.example.com \\ ipareplica_servers=replica1.idm.example.com,server.idm.example.com",
"--- - name: Playbook to configure IPA replicas (tier1) hosts: ipareplicas_tier1 become: true roles: - role: ipareplica state: present - name: Playbook to configure IPA replicas (tier2) hosts: ipareplicas_tier2 become: true roles: - role: ipareplica state: present",
"[ipaservers] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com replica3.idm.example.com [...] [ipareplicas:vars] ipareplica_firewalld_zone= custom zone",
"[ipaservers] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com replica3.idm.example.com [...] [ipareplicas:vars] ipareplica_setup_dns=true ipareplica_forwarders=192.0.2.1,192.0.2.2",
"[...] [ipaclient:vars] ipaclient_configure_dns_resolver=true ipaclient_dns_servers=192.168.100.1",
"- name: Playbook to configure IPA replicas hosts: ipareplicas become: true vars_files: - playbook_sensitive_data.yml roles: - role: ipareplica state: present",
"[...] [ipareplicas:vars] ipaadmin_password=Secret123",
"- name: Playbook to configure IPA replicas hosts: ipareplicas become: true roles: - role: ipareplica state: present",
"[...] [ipareplicas:vars] ipaadmin_principal=my_admin ipaadmin_password=my_admin_secret123",
"- name: Playbook to configure IPA replicas hosts: ipareplicas become: true roles: - role: ipareplica state: present",
"ansible-playbook -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-replica.yml",
"[ipaclients] client.idm.example.com [...]",
"- name: Playbook to configure IPA clients with username/password hosts: ipaclients become: true vars_files: - playbook_sensitive_data.yml roles: - role: ipaclient state: present",
"[...] [ipaclients:vars] ipaadmin_principal=my_admin ipaadmin_password=Secret123",
"- name: Playbook to unconfigure IPA clients hosts: ipaclients become: true roles: - role: ipaclient state: true",
"[...] [ipaclients:vars] ipaadmin_password: \"{{ ipaadmin_password }}\" ipaclient_domain=idm.example.com ipaclient_configure_dns_resolver=true ipaclient_dns_servers=192.168.100.1",
"[ipaclients] client.idm.example.com [ipaservers] server.idm.example.com [ipaclients:vars] ipaclient_domain=idm.example.com [...]",
"- name: Playbook to configure IPA clients with username/password hosts: ipaclients become: true vars_files: - playbook_sensitive_data.yml roles: - role: ipaclient state: present",
"[...] [ipaclients:vars] ipaadmin_principal=my_admin ipaadmin_password=Secret123",
"- name: Playbook to unconfigure IPA clients hosts: ipaclients become: true roles: - role: ipaclient state: true",
"[ipaclients:vars] ipaadmin_password=Secret123 ipaclient_use_otp=true",
"[ipaclients:vars] ipaclient_otp=<W5YpARl=7M.>",
"[ipaclients:vars] ipaadmin_keytab=/root/admin.keytab ipaclient_use_otp=true",
"[ipaclients:vars] ipaclient_keytab=/root/krb5.keytab",
"[ipaclients:vars] ipaadmin_password=Secret123",
"[ipaclients:vars] [...]",
"[ipaclients:vars] [...]",
"- name: Playbook to configure IPA clients hosts: ipaclients become: true vars_files: - ansible_vault_file.yml roles: - role: ipaclient state: present",
"- name: Playbook to configure IPA clients hosts: ipaclients become: true roles: - role: ipaclient state: true",
"ansible-playbook -v -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-client.yml",
"ssh [email protected]",
"[admin@server ~]USD ipa host-add client.idm.example.com --ip-address=172.25.250.11 --random -------------------------------------------------- Added host \"client.idm.example.com\" -------------------------------------------------- Host name: client.idm.example.com Random password: W5YpARl=7M.n Password: True Keytab: False Managed by: server.idm.example.com",
"exit logout Connection to server.idm.example.com closed.",
"[...] [ipaclients] client.idm.example.com [ipaclients:vars] ipaclient_domain=idm.example.com ipaclient_otp=W5YpARl=7M.n [...]",
"sudo dnf install krb5-workstation",
"ansible-playbook -i inventory install-client.yml",
"[user@client1 ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)",
"[user@client1 ~]USD su - idm_user Last login: Thu Oct 18 18:39:11 CEST 2018 from 192.168.122.1 on pts/0 [idm_user@client1 ~]USD",
"ansible-playbook -v -i ~/MyPlaybooks/inventory ~/MyPlaybooks/uninstall-client.yml",
"C:\\> gpupdate /force /target:computer",
"update-crypto-policies --set DEFAULT:AD-SUPPORT Setting system policy to DEFAULT:AD-SUPPORT Note: System-wide crypto policies are applied on application start-up. It is recommended to restart the system for the change of policies to fully take place.",
"[libdefaults] permitted_enctypes = aes256-cts-hmac-sha1-96 aes256-cts-hmac-sha384-192 camellia256-cts-cmac aes128-cts-hmac-sha1-96 aes128-cts-hmac-sha256-128 camellia128-cts-cmac +rc4",
"ipa dns-update-system-records --dry-run",
"IPA DNS records: _kerberos-master._tcp.idm.example.com. 86400 IN SRV 0 100 88 server.idm.example.com. _kerberos-master._udp.idm.example.com. 86400 IN SRV 0 100 88 server.idm.example.com. _kerberos._tcp.idm.example.com. 86400 IN SRV 0 100 88 server.idm.example.com. _kerberos._tcp.idm.example.com. 86400 IN SRV 0 100 88 server.idm.example.com. _kerberos.idm.example.com. 86400 IN TXT \"IDM.EXAMPLE.COM\" _kpasswd._tcp.idm.example.com. 86400 IN SRV 0 100 464 server.idm.example.com. _kpasswd._udp.idm.example.com. 86400 IN SRV 0 100 464 server.idm.example.com. _ldap._tcp.idm.example.com. 86400 IN SRV 0 100 389 server.idm.example.com. _ipa-ca.idm.example.com. 86400 IN A 192.168.122.2",
"dnssec-enable no; dnssec-validation no;",
"systemctl restart named-pkcs11",
"nslookup ad.example.com Server: 192.168.122.2 Address: 192.168.122.2#53 No-authoritative answer: Name: ad.example.com Address: 192.168.122.3",
"ipa dnsforwardzone-add ad.example.com --forwarder= 192.168.122.3 --forward-policy=first",
"named-pkcs11[2572]: no valid DS resolving 'host.ad.example.com/A/IN': 192.168.100.25#53",
"dnssec-enable no; dnssec-validation no;",
"systemctl restart named-pkcs11",
"nslookup ad.example.com Server: 192.168.122.2 Address: 192.168.122.2#53 No-authoritative answer: Name: ad.example.com Address: 192.168.122.3",
"dig +short -t SRV _kerberos._udp.idm.example.com. 0 100 88 server.idm.example.com. dig +short -t SRV _ldap._tcp.idm.example.com. 0 100 389 server.idm.example.com.",
"dig +short -t TXT _kerberos.idm.example.com. \"IDM.EXAMPLE.COM\"",
"[admin@server ~]USD ipa dns-update-system-records",
"[admin@server ~]USD ipa dns-update-system-records --dry-run --out dns_records_file.nsupdate",
"dig +short -t SRV _kerberos._tcp.dc._msdcs.ad.example.com. 0 100 88 addc1.ad.example.com. dig +short -t SRV _ldap._tcp.dc._msdcs.ad.example.com. 0 100 389 addc1.ad.example.com.",
"ipa-client-install --domain=idm.example.com",
".ad.example.com = IDM.EXAMPLE.COM ad.example.com = IDM.EXAMPLE.COM",
"idm-client.ad.example.com = IDM.EXAMPLE.COM",
"ipa-getcert request -r -f /etc/httpd/alias/server.crt -k /etc/httpd/alias/server.key -N CN=ipa-client.ad.example.com -D ipa-client.ad.example.com -K host/[email protected] -U id-kp-serverAuth",
"ignore_acceptor_hostname = true",
"ipa host-add idm-client.ad.example.com --force",
"ipa host-add-managedby idm-client.ad.example.com --hosts=idm-client.idm.example.com",
"ipa-getcert request -r -f /etc/httpd/alias/server.crt -k /etc/httpd/alias/server.key -N CN=`hostname --fqdn` -D `hostname --fqdn` -D idm-client.ad.example.com -K host/[email protected] -U id-kp-serverAuth",
"yum install ipa-server-trust-ad samba-client",
"kinit admin",
"ipa-adtrust-install",
"WARNING: The smb.conf already exists. Running ipa-adtrust-install will break your existing Samba configuration. Do you wish to continue? [no]: yes",
"Do you want to enable support for trusted domains in Schema Compatibility plugin? This will allow clients older than SSSD 1.9 and non-Linux clients to work with trusted users. Enable trusted domains support in slapi-nis? [no]: yes",
"Trust is configured but no NetBIOS domain name found, setting it now. Enter the NetBIOS name for the IPA domain. Only up to 15 uppercase ASCII letters, digits and dashes are allowed. Example: EXAMPLE. NetBIOS domain name [IDM]:",
"Do you want to run the ipa-sidgen task? [no]: yes",
"net conf setparm global 'rpc server dynamic port range' 55000-65000 firewall-cmd --add-port=55000-65000/tcp firewall-cmd --runtime-to-permanent",
"ipactl restart",
"smbclient -L ipaserver.idm.example.com -U user_name --use-kerberos=required lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- IPCUSD IPC IPC Service (Samba 4.15.2)",
"ipa trust-add --type=ad ad.example.com --admin <ad_admin_username> --password --range-type=ipa-ad-trust",
"ipa trust-add --type=ad ad.example.com --admin <ad_admin_username> --password --range-type=ipa-ad-trust-posix",
"cd ~/ MyPlaybooks /",
"--- - name: Playbook to create a trust hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: ensure the trust is present ipatrust: ipaadmin_password: \"{{ ipaadmin_password }}\" realm: ad.example.com admin: Administrator password: secret_password state: present",
"--- - name: Playbook to create a trust hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: ensure the trust is present ipatrust: ipaadmin_password: \"{{ ipaadmin_password }}\" realm: ad.example.com admin: Administrator password: secret_password range_type: ipa-ad-trust-posix state: present",
"--- - name: Playbook to create a trust hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: ensure the trust is present ipatrust: ipaadmin_password: \"{{ ipaadmin_password }}\" realm: ad.example.com admin: Administrator password: secret_password state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-trust.yml",
"kinit [email protected]",
"kvno -S host server.idm.example.com",
"klist Ticket cache: KEYRING:persistent:0:krb_ccache_hRtox00 Default principal: [email protected] Valid starting Expires Service principal 03.05.2016 18:31:06 04.05.2016 04:31:01 host/[email protected] renew until 04.05.2016 18:31:00 03.05.2016 18:31:06 04.05.2016 04:31:01 krbtgt/[email protected] renew until 04.05.2016 18:31:00 03.05.2016 18:31:01 04.05.2016 04:31:01 krbtgt/[email protected] renew until 04.05.2016 18:31:00",
"dig +short -t SRV _kerberos._udp.dc._msdcs.idm.example.com. 0 100 88 server.idm.example.com. dig +short -t SRV _ldap._tcp.dc._msdcs.idm.example.com. 0 100 389 server.idm.example.com.",
"dig +short -t SRV _kerberos._tcp.dc._msdcs.ad.example.com. 0 100 88 addc1.ad.example.com. dig +short -t SRV _ldap._tcp.dc._msdcs.ad.example.com. 0 100 389 addc1.ad.example.com.",
"C:\\>nslookup.exe > set type=SRV",
"> _kerberos._udp.idm.example.com. _kerberos._udp.idm.example.com. SRV service location: priority = 0 weight = 100 port = 88 svr hostname = server.idm.example.com > _ldap._tcp.idm.example.com _ldap._tcp.idm.example.com SRV service location: priority = 0 weight = 100 port = 389 svr hostname = server.idm.example.com",
"C:\\>nslookup.exe > set type=TXT > _kerberos.idm.example.com. _kerberos.idm.example.com. text = \"IDM.EXAMPLE.COM\"",
"C:\\>nslookup.exe > set type=SRV > _kerberos._udp.dc._msdcs.idm.example.com. _kerberos._udp.dc._msdcs.idm.example.com. SRV service location: priority = 0 weight = 100 port = 88 svr hostname = server.idm.example.com > _ldap._tcp.dc._msdcs.idm.example.com. _ldap._tcp.dc._msdcs.idm.example.com. SRV service location: priority = 0 weight = 100 port = 389 svr hostname = server.idm.example.com",
"C:\\>nslookup.exe > set type=SRV",
"> _kerberos._udp.dc._msdcs.ad.example.com. _kerberos._udp.dc._msdcs.ad.example.com. SRV service location: priority = 0 weight = 100 port = 88 svr hostname = addc1.ad.example.com > _ldap._tcp.dc._msdcs.ad.example.com. _ldap._tcp.dc._msdcs.ad.example.com. SRV service location: priority = 0 weight = 100 port = 389 svr hostname = addc1.ad.example.com",
"ipa-adtrust-install --add-agents",
"ipactl restart",
"sssctl cache-remove",
"ipa server-show new_replica.idm.example.com Enabled server roles: CA server, NTP server, AD trust agent",
"ipa idrange-find ---------------- 2 ranges matched ---------------- Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 882200000 Number of IDs in the range: 200000 Range type: local domain range Range name: AD.EXAMPLE.COM_id_range First Posix ID of the range: 1337000000 Number of IDs in the range: 200000 Domain SID of the trusted domain: S-1-5-21-4123312420-990666102-3578675309 Range type: Active Directory trust range with POSIX attributes ---------------------------- Number of entries returned 2 ----------------------------",
"ipa idrange-mod --auto-private-groups=hybrid AD.EXAMPLE.COM_id_range",
"sss_cache -E",
"[global] debug=True",
"systemctl restart httpd",
"systemctl stop smb winbind",
"net conf setparm global 'log level' 100",
"[global] log level = 100",
"rm /var/log/samba/log.*",
"systemctl start smb winbind",
"date; ipa -vvv trust-add --type=ad ad.example.com",
"mv /etc/ipa/server.conf /etc/ipa/server.conf.backup systemctl restart httpd systemctl stop smb winbind net conf setparm global 'log level' 0 mv /usr/share/ipa/smb.conf.empty /usr/share/ipa/smb.conf.empty.backup systemctl start smb winbind",
"tar -cvf debugging-trust.tar /var/log/httpd/error_log /var/log/samba/log.*",
"ipa trust-del ad_domain_name ------------------------------ Deleted trust \" ad_domain_name \" ------------------------------",
"ipa idrange-del AD.EXAMPLE.COM_id_range systemctl restart sssd",
"ipa trust-show ad.example.com ipa: ERROR: ad.example.com: trust not found",
"cd ~/ MyPlaybooks /",
"--- - name: Playbook to delete trust hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: ensure the trust is absent ipatrust: ipaadmin_password: \"{{ ipaadmin_password }}\" realm: ad.example.com state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory del-trust.yml",
"ipa idrange-del AD.EXAMPLE.COM_id_range systemctl restart sssd",
"ipa trust-show ad.example.com ipa: ERROR: ad.example.com: trust not found",
"ipa idrange-find",
"ipa idrange-del AD.EXAMPLE.COM_id_range",
"systemctl restart sssd"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux//8/html-single/installing_identity_management/index |
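A consolidated, hedged sketch of the unattended server install path shown in this record, with two quick post-install checks; the realm, passwords, and forwarder address are the document's illustrative values, not production settings:

# Unattended install with integrated DNS (placeholder credentials)
ipa-server-install \
  --realm IDM.EXAMPLE.COM \
  --ds-password DM_password \
  --admin-password admin_password \
  --setup-dns --forwarder 192.0.2.1 --no-reverse \
  --unattended
# Verify Kerberos authentication for the admin principal
kinit admin
# Verify the IdM API answers and the admin user exists
ipa user-show admin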
Chapter 143. KafkaRebalanceStatus schema reference | Chapter 143. KafkaRebalanceStatus schema reference Used in: KafkaRebalance Properties: conditions (Condition array) - List of status conditions. observedGeneration (integer) - The generation of the CRD that was last reconciled by the operator. sessionId (string) - The session identifier for requests to Cruise Control pertaining to this KafkaRebalance resource. This is used by the Kafka Rebalance operator to track the status of ongoing rebalancing operations. optimizationResult (map) - A JSON object describing the optimization result. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkarebalancestatus-reference
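A hedged sketch of reading the status fields above from a live resource with kubectl; the resource name my-rebalance and the namespace kafka are hypothetical:

# Print the Cruise Control session identifier tracked by the operator
kubectl get kafkarebalance my-rebalance -n kafka -o jsonpath='{.status.sessionId}{"\n"}'
# List the condition types (for example, ProposalReady or Rebalancing)
kubectl get kafkarebalance my-rebalance -n kafka -o jsonpath='{.status.conditions[*].type}{"\n"}'
# Dump the optimization result map as JSON
kubectl get kafkarebalance my-rebalance -n kafka -o jsonpath='{.status.optimizationResult}'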
Appendix A. Using your subscription | Appendix A. Using your subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ Streams entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/deploying_and_upgrading_amq_streams_on_openshift/using_your_subscription
Part XI. Command Line Tools | Part XI. Command Line Tools Red Hat JBoss Data Grid includes two command line tools for interacting with the caches in the data grid: The JBoss Data Grid Library CLI. For more information, see Section 23.1, "Red Hat JBoss Data Grid Library Mode CLI" . The JBoss Data Grid Server CLI. For more information, see Section 23.2, "Red Hat Data Grid Server CLI" . | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/part-command_line_tools
Preface | Preface Important The following proof of concept deployment method is unsupported for production purposes. This deployment type uses local storage. Local storage is not guaranteed to provide the required read-after-write consistency and data integrity guarantees during parallel access that a storage registry like Red Hat Quay requires. Do not use this deployment type for production purposes. Use it for testing purposes only. Red Hat Quay is an enterprise-quality registry for building, securing, and serving container images. The documents in this section detail how to deploy Red Hat Quay for proof of concept, or non-production, purposes. The primary objectives of this document include the following: How to deploy Red Hat Quay for basic non-production purposes. Assess Red Hat Quay's container image management, including how to push, pull, tag, and organize images. Explore availability and scalability. How to deploy an advanced Red Hat Quay proof of concept deployment using SSL/TLS certificates. Beyond the primary objectives of this document, a proof of concept deployment can be used to test various features offered by Red Hat Quay, such as establishing superusers, setting repository quota limitations, enabling Splunk for action log storage, enabling Clair for vulnerability reporting, and more. See the "next steps" section for a list of some of the features available after you have followed this guide. This proof of concept deployment procedure can be followed on a single machine, either physical or virtual. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/proof_of_concept_-_deploying_red_hat_quay/pr01
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To provide feedback, you can highlight the text in a document and add comments. Follow the steps in the procedure to learn about submitting feedback on Red Hat documentation. Prerequisites Log in to the Red Hat Customer Portal. In the Red Hat Customer Portal, view the document in HTML format. Procedure Click the Feedback button to see existing reader comments. Note The feedback feature is enabled only in the HTML format. Highlight the section of the document where you want to provide feedback. In the prompt menu that opens near the text you selected, click Add Feedback . A text box opens in the feedback section on the right side of the page. Enter your feedback in the text box and click Submit . You have created a documentation issue. To view the issue, click the issue tracker link in the feedback view. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.1/proc-providing-feedback-on-redhat-documentation_cryostat |
Chapter 15. message | Chapter 15. message The original log entry text, UTF-8 encoded. This field may be absent or empty if a non-empty structured field is present. See the description of structured for more. Data type text Example value HAPPY | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/message |
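A hedged sketch of pulling this field out of an exported log record with jq; record.json is a hypothetical file holding one JSON-encoded log entry:

# Print the original UTF-8 log text, with a fallback for entries that carry only a structured payload
jq -r '.message // "(no message: structured field present)"' record.json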
Chapter 4. Environment variables | Chapter 4. Environment variables Red Hat Quay supports a limited number of environment variables for dynamic configuration. 4.1. Geo-replication The same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable. Table 4.1. Geo-replication configuration Variable Type Description QUAY_DISTRIBUTED_STORAGE_PREFERENCE String The preferred storage engine (by ID in DISTRIBUTED_STORAGE_CONFIG) to use. 4.2. Database connection pooling Red Hat Quay is composed of many different processes which all run within the same container. Many of these processes interact with the database. Database connection pooling is enabled by default, and each process that interacts with the database contains a connection pool. These per-process connection pools are configured to maintain a maximum of 20 connections. Under heavy load, it is possible to fill the connection pool for every process within a Red Hat Quay container. Under certain deployments and loads, this might require analysis to ensure that Red Hat Quay does not exceed the configured database's maximum connection count. Over time, the connection pools release idle connections. To release all connections immediately, Red Hat Quay requires a restart. For standalone Red Hat Quay deployments, database connection pooling can be toggled off when starting your deployment. For example: USD sudo podman run -d --rm -p 80:8080 -p 443:8443 \ --name=quay \ -v USDQUAY/config:/conf/stack:Z \ -v USDQUAY/storage:/datastorage:Z \ -e DB_CONNECTION_POOLING=false registry.redhat.io/quay/quay-rhel8:v3.12.1 For Red Hat Quay on OpenShift Container Platform, database connection pooling can be configured by modifying the QuayRegistry custom resource definition (CRD). For example: Example QuayRegistry CRD spec: components: - kind: quay managed: true overrides: env: - name: DB_CONNECTION_POOLING value: "false" Table 4.2. Database connection pooling configuration Variable Type Description DB_CONNECTION_POOLING String Whether to enable or disable database connection pooling. Defaults to true. Accepted values are "true" or "false" If database connection pooling is enabled, it is possible to change the maximum size of the connection pool. This can be done through the following config.yaml option: config.yaml ... DB_CONNECTION_ARGS: max_connections: 10 ... 4.3. HTTP connection counts It is possible to specify the quantity of simultaneous HTTP connections using environment variables. These can be specified as a whole, or for a specific component. The default for each is 50 parallel connections per process. Table 4.3. HTTP connection counts configuration Variable Type Description WORKER_CONNECTION_COUNT Number Simultaneous HTTP connections Default: 50 WORKER_CONNECTION_COUNT_REGISTRY Number Simultaneous HTTP connections for registry Default: WORKER_CONNECTION_COUNT WORKER_CONNECTION_COUNT_WEB Number Simultaneous HTTP connections for web UI Default: WORKER_CONNECTION_COUNT WORKER_CONNECTION_COUNT_SECSCAN Number Simultaneous HTTP connections for Clair Default: WORKER_CONNECTION_COUNT 4.4. Worker count variables Table 4.4.
Worker count variables Variable Type Description WORKER_COUNT Number Generic override for number of processes WORKER_COUNT_REGISTRY Number Specifies the number of processes to handle Registry requests within the Quay container Values: Integer between 8 and 64 WORKER_COUNT_WEB Number Specifies the number of processes to handle UI/Web requests within the container Values: Integer between 2 and 32 WORKER_COUNT_SECSCAN Number Specifies the number of processes to handle Security Scanning (e.g. Clair) integration within the container Values: Integer. Because the Operator specifies 2 vCPUs for resource requests and limits, setting this value between 2 and 4 is safe. However, users can run more, for example, 16 , if warranted. 4.5. Debug variables The following debug variables are available on Red Hat Quay. Table 4.5. Debug configuration variables Variable Type Description DEBUGLOG Boolean Whether to enable or disable debug logs. USERS_DEBUG Integer. Either 0 or 1 . Used to debug LDAP operations in clear text, including passwords. Must be used with DEBUGLOG=TRUE . Important Setting USERS_DEBUG=1 exposes credentials in clear text. This variable should be removed from the Red Hat Quay deployment after debugging. The log file that is generated with this environment variable should be scrutinized, and passwords should be removed before sending to other users. Use with caution. | [
"sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z -e DB_CONNECTION_POOLING=false registry.redhat.io/quay/quay-rhel8:v3.12.1",
"spec: components: - kind: quay managed: true overrides: env: - name: DB_CONNECTION_POOLING value: \"false\"",
"DB_CONNECTION_ARGS: max_connections: 10"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/configure_red_hat_quay/config-envar-intro |
Chapter 37. File | Chapter 37. File Both producer and consumer are supported. The File component provides access to file systems, allowing files to be processed by any other Camel Components or messages from other components to be saved to disk. 37.1. Dependencies When using file with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-file-starter</artifactId> </dependency> 37.2. URI format file:directoryName[?options] Where directoryName represents the underlying file directory. Only directories Camel supports only endpoints configured with a starting directory. So the directoryName must be a directory. If you want to consume a single file only, you can use the fileName option, e.g. by setting fileName=thefilename. Also, the starting directory must not contain dynamic expressions with ${ } placeholders. Again, use the fileName option to specify the dynamic part of the filename. Note Avoid reading files currently being written by another application. Beware that the JDK File IO API is a bit limited in detecting whether another application is currently writing/copying a file, and the implementation can differ depending on the OS platform as well. This could lead to Camel thinking that the file is not locked by another process and starting to consume it. Therefore you have to do your own investigation into what suits your environment. To help with this, Camel provides different readLock options and the doneFileName option that you can use. See also the section Consuming files from folders where others drop files directly. 37.3. Configuring Options Camel components are configured on two separate levels: the component level and the endpoint level. 37.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, URLs for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. You can configure components using: the Component DSL; in a configuration file (application.properties, *.yaml files, etc.); or directly in the Java code. 37.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders. Property placeholders provide a few benefits: they help prevent using hardcoded URLs, port numbers, sensitive information, and other settings; they allow externalizing the configuration from the code; and they help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 37.4. Component Options The File component supports 3 options, which are listed below.
Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 37.5. Endpoint Options The File endpoint is configured using URI syntax: with the following path and query parameters: 37.5.1. Path Parameters (1 parameters) Name Description Default Type directoryName (common) Required The starting directory. File 37.5.2. Query Parameters (94 parameters) Name Description Default Type charset (common) This option is used to specify the encoding of the file. You can use this on the consumer, to specify the encodings of the files, which allow Camel to know the charset it should load the file content in case the file content is being accessed. Likewise when writing a file, you can use this option to specify which charset to write the file as well. Do mind that when writing the file Camel may have to read the message content into memory to be able to convert the data into the configured charset, so do not use this if you have big messages. String doneFileName (common) Producer: If provided, then Camel will write a 2nd done file when the original file has been written. The done file will be empty. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders.The done file is always expected in the same folder as the original file. Only USD\\{file.name} and USD\\{file.name.} is supported as dynamic placeholders. String fileName (common) Use Expression such as File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it take precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. 
If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-USD\\{date:now:yyyyMMdd}.txt. The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids to temporary store CamelFileName and have to restore it afterwards. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean delete (consumer) If true, the file will be deleted after it is processed successfully. false boolean moveFailed (consumer) Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again. String noop (consumer) If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again. false boolean preMove (consumer) Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order. String preSort (consumer) When pre-sort is enabled then the consumer will sort the file and directory names during polling, that was retrieved from the file system. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter, and accept files to process by Camel. This option is default=false meaning disabled. false boolean recursive (consumer) If a directory, will look for files in all the sub-directories as well. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean directoryMustExist (consumer (advanced)) Similar to the startingDirectoryMustExist option but this applies during polling (after starting the consumer). false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern extendedAttributes (consumer (advanced)) To define which file attributes of interest. Like posix:permissions,posix:owner,basic:lastAccessTime, it supports basic wildcard like posix:, basic:lastAccessTime. String inProgressRepository (consumer (advanced)) A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. 
The in-progress repository is used to account the current in progress files being consumed. By default a memory based repository is used. IdempotentRepository localWorkDirectory (consumer (advanced)) When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory. String onCompletionExceptionHandler (consumer (advanced)) To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happens during the file on completion process where the consumer does either a commit or rollback. The default implementation will log any exception at WARN level and ignore. ExceptionHandler pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy probeContentType (consumer (advanced)) Whether to enable probing of the content type. If enable then the consumer uses Files#probeContentType(java.nio.file.Path) to determine the content-type of the file, and store that as a header with key Exchange#FILE_CONTENT_TYPE on the Message. false boolean processStrategy (consumer (advanced)) A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file exists. If this option is set then the readLock option does not apply. GenericFileProcessStrategy resumeStrategy (consumer (advanced)) Set a resume strategy for files. This makes it possible to define a strategy for resuming reading files after the last point before stopping the application. See the FileConsumerResumeStrategy for implementation details. FileConsumerResumeStrategy startingDirectoryMustExist (consumer (advanced)) Whether the starting directory must exist. Mind that the autoCreate option is default enabled, which means the starting directory is normally auto created if it doesn't exist. You can disable autoCreate and enable this to ensure the starting directory must exist. Will thrown an exception if the directory doesn't exist. false boolean startingDirectoryMustHaveAccess (consumer (advanced)) Whether the starting directory has access permissions. Mind that the startingDirectoryMustExist parameter must be set to true in order to verify that the directory exists. Will thrown an exception if the directory doesn't have read and write permissions. false boolean appendChars (producer) Used to append characters (text) after writing files. This can for example be used to add new lines or other separators when writing and appending new files or existing files. To specify new-line (slash-n or slash-r) or tab (slash-t) characters then escape with an extra slash, eg slash-slash-n. String fileExist (producer) What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. - Append - adds content to the existing file. - Fail - throws a GenericFileOperationException, indicating that there is already an existing file. - Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. 
- Move - option requires to use the moveExisting option to be configured as well. The option eagerDeleteTargetFile can be used to control what to do if an moving the file, and there exists already an existing file, otherwise causing the move operation to fail. The Move option will move any existing files, before writing the target file. - TryRename is only applicable if tempFileName option is in use. This allows to try renaming the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers. Enum values: Override Append Fail Ignore Move TryRename Override GenericFileExist flatten (producer) Flatten is used to flatten the file name path to strip any leading paths, so it's just the file name. This allows you to consume recursively into sub-directories, but when you eg write the files to another directory they will be written in a single directory. Setting this to true on the producer enforces that any file name in CamelFileName header will be stripped for any leading paths. false boolean jailStartingDirectory (producer) Used for jailing (restricting) writing files to the starting directory (and sub) only. This is enabled by default to not allow Camel to write files to outside directories (to be more secured out of the box). You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean moveExisting (producer) Expression (such as File Language) used to compute file name to use when fileExist=Move is configured. To move files into a backup subdirectory just enter backup. This option only supports the following File Language tokens: file:name, file:name.ext, file:name.noext, file:onlyname, file:onlyname.noext, file:ext, and file:parent. Notice the file:parent is not supported by the FTP component, as the FTP component can only move any existing files to a relative directory based on current dir as base. String tempFileName (producer) The same as tempPrefix option but offering a more fine grained control on the naming of the temporary filename as it uses the File Language. The location for tempFilename is relative to the final file location in the option 'fileName', not the target directory in the base uri. For example if option fileName includes a directory prefix: dir/finalFilename then tempFileName is relative to that subdirectory dir. String tempPrefix (producer) This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files. String allowNullBody (producer (advanced)) Used to specify if a null body is allowed during file writing. 
If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged. false boolean chmod (producer (advanced)) Specify the file permissions which is sent by the producer, the chmod value must be between 000 and 777; If there is a leading digit like in 0755 we will ignore it. String chmodDirectory (producer (advanced)) Specify the directory permissions used when the producer creates missing directories, the chmod value must be between 000 and 777; If there is a leading digit like in 0755 we will ignore it. String eagerDeleteTargetFile (producer (advanced)) Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exists during the temp file is being written. This ensure the target file is only deleted until the very last moment, just before the temp file is being renamed to the target filename. This option is also used to control whether to delete any existing files when fileExist=Move is enabled, and an existing file exists. If this option copyAndDeleteOnRenameFails false, then an exception will be thrown if an existing file existed, if its true, then the existing file is deleted before the move operation. true boolean forceWrites (producer (advanced)) Whether to force syncing writes to the file system. You can turn this off if you do not want this level of guarantee, for example if writing to logs / audit logs etc; this would yield better performance. true boolean keepLastModified (producer (advanced)) Will keep the last modified timestamp from the source file (if any). Will use the Exchange.FILE_LAST_MODIFIED header to located the timestamp. This header can contain either a java.util.Date or long with the timestamp. If the timestamp exists and the option is enabled it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers. false boolean moveExistingFileStrategy (producer (advanced)) Strategy (Custom Strategy) used to move file with special naming token to use when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided. FileMoveExistingStrategy autoCreate (advanced) Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to. true boolean bufferSize (advanced) Buffer size in bytes used for writing files (or in case of FTP for downloading and uploading files). 131072 int copyAndDeleteOnRenameFail (advanced) Whether to fallback and do a copy and delete file, in case the file could not be renamed directly. This option is not available for the FTP component. true boolean renameUsingCopy (advanced) Perform rename operations using a copy and delete strategy. This is primarily used in environments where the regular rename operation is unreliable (e.g. across different file systems or networks). 
This option takes precedence over the copyAndDeleteOnRenameFail parameter that will automatically fall back to the copy and delete strategy, but only after additional delays. false boolean synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean antExclude (filter) Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format. String antFilterCaseSensitive (filter) Sets case sensitive flag on ant filter. true boolean antInclude (filter) Ant style filter inclusion. Multiple inclusions may be specified in comma-delimited format. String eagerMaxMessagesPerPoll (filter) Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager then the limit is during the scanning of files. Where as false would scan all files, and then perform sorting. Setting this option to false allows for sorting all files first, and then limit the poll. Mind that this requires a higher memory usage as all file details are in memory to perform the sorting. true boolean exclude (filter) Is used to exclude files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris. String excludeExt (filter) Is used to exclude files matching file extension name (case insensitive). For example to exclude bak files, then use excludeExt=bak. Multiple extensions can be separated by comma, for example to exclude bak and dat files, use excludeExt=bak,dat. Note that the file extension includes all parts, for example having a file named mydata.tar.gz will have extension as tar.gz. For more flexibility then use the include/exclude options. String filter (filter) Pluggable filter as a org.apache.camel.component.file.GenericFileFilter class. Will skip files if filter returns false in its accept() method. GenericFileFilter filterDirectory (filter) Filters the directory based on Simple language. For example to filter on current date, you can use a simple date pattern such as USD\\{date:now:yyyMMdd}. String filterFile (filter) Filters the file based on Simple language. For example to filter on file size, you can use USD\\{file:size} 5000. String idempotent (filter) Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again. false Boolean idempotentKey (filter) To use a custom idempotent key. By default the absolute path of the file is used. You can use the File Language, for example to use the file name and file size, you can do: idempotentKey=USD\\{file:name}-USD\\{file:size}. String idempotentRepository (filter) A pluggable repository org.apache.camel.spi.IdempotentRepository which by default use MemoryIdempotentRepository if none is specified and idempotent is true. IdempotentRepository include (filter) Is used to include files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris. 
String includeExt (filter) Is used to include files matching file extension name (case insensitive). For example to include txt files, then use includeExt=txt. Multiple extensions can be separated by comma, for example to include txt and xml files, use includeExt=txt,xml. Note that the file extension includes all parts, for example having a file named mydata.tar.gz will have extension as tar.gz. For more flexibility then use the include/exclude options. String maxDepth (filter) The maximum depth to traverse when recursively processing a directory. 2147483647 int maxMessagesPerPoll (filter) To define a maximum messages to gather per poll. By default no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disabled it. Notice: If this option is in use then the File and FTP components will limit before any sorting. For example if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set this to false to allow to scan all files first and then sort afterwards. int minDepth (filter) The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory. Using minDepth=2 means the first sub directory. int move (filter) Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory just enter .done. String exclusiveReadLockStrategy (lock) Pluggable read-lock as a org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation. GenericFileExclusiveReadLockStrategy readLock (lock) Used by consumer, to only poll the files if it has exclusive read-lock on the file (i.e. the file is not in-progress or being written). Camel will wait until the file lock is granted. This option provides the build in strategies: - none - No read lock is in use - markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component - changed - Changed is using file length/modification timestamp to detect whether the file is currently being copied or not. Will at least use 1 sec to determine this, so this option cannot consume files as fast as the others, but can be more reliable as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency. - fileLock - is for using java.nio.channels.FileLock. This option is not avail for Windows OS and the FTP component. This approach should be avoided when accessing a remote file system via a mount/share unless that file system supports distributed file locks. - rename - rename is for using a try to rename the file as a test if we can get exclusive read-lock. - idempotent - (only for file component) idempotent is for using a idempotentRepository as the read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. - idempotent-changed - (only for file component) idempotent-changed is for using a idempotentRepository and changed as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. 
- idempotent-rename - (only for file component) idempotent-rename is for using a idempotentRepository and rename as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that.Notice: The various read locks is not all suited to work in clustered mode, where concurrent consumers on different nodes is competing for the same files on a shared file system. The markerFile using a close to atomic operation to create the empty marker file, but its not guaranteed to work in a cluster. The fileLock may work better but then the file system need to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as Hazelcast Component or Infinispan. Enum values: none markerFile fileLock rename changed idempotent idempotent-changed idempotent-rename none String readLockCheckInterval (lock) Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 sec. may be too fast if the producer is very slow writing the file. Notice: For FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that amble time is allowed for the read lock process to try to grab the lock before the timeout was hit. 1000 long readLockDeleteOrphanLockFiles (lock) Whether or not read lock with marker files should upon startup delete any orphan read lock files, which may have been left on the file system, if Camel was not properly shutdown (such as a JVM crash). If turning this option to false then any orphaned lock file will cause Camel to not attempt to pickup that file, this could also be due another node is concurrently reading files from the same shared directory. true boolean readLockIdempotentReleaseAsync (lock) Whether the delayed release task should be synchronous or asynchronous. See more details at the readLockIdempotentReleaseDelay option. false boolean readLockIdempotentReleaseAsyncPoolSize (lock) The number of threads in the scheduled thread pool when using asynchronous release tasks. Using a default of 1 core threads should be sufficient in almost all use-cases, only set this to a higher value if either updating the idempotent repository is slow, or there are a lot of files to process. This option is not in-use if you use a shared thread pool by configuring the readLockIdempotentReleaseExecutorService option. See more details at the readLockIdempotentReleaseDelay option. int readLockIdempotentReleaseDelay (lock) Whether to delay the release task for a period of millis. This can be used to delay the release tasks to expand the window when a file is regarded as read-locked, in an active/active cluster scenario with a shared idempotent repository, to ensure other nodes cannot potentially scan and acquire the same file, due to race-conditions. By expanding the time-window of the release tasks helps prevents these situations. Note delaying is only needed if you have configured readLockRemoveOnCommit to true. int readLockIdempotentReleaseExecutorService (lock) To use a custom and shared thread pool for asynchronous release tasks. 
See more details at the readLockIdempotentReleaseDelay option. ScheduledExecutorService readLockLoggingLevel (lock) Logging level used when a read lock could not be acquired. By default a DEBUG is logged. You can change this level, for example to OFF to not have any logging. This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename. Enum values: TRACE DEBUG INFO WARN ERROR OFF DEBUG LoggingLevel readLockMarkerFile (lock) Whether to use marker file with the changed, rename, or exclusive read lock types. By default a marker file is used as well to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false. For example if you do not want to write marker files to the file systems by the Camel application. true boolean readLockMinAge (lock) This option is applied only for readLock=changed. It allows to specify a minimum age the file must be before attempting to acquire the read lock. For example use readLockMinAge=300s to require the file is at last 5 minutes old. This can speedup the changed read lock as it will only attempt to acquire files which are at least that given age. 0 long readLockMinLength (lock) This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero, to allow consuming zero-length files. 1 long readLockRemoveOnCommit (lock) This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file is succeeded and a commit happens. By default the file is not removed which ensures that any race-condition do not occur so another active node may attempt to grab the file. Instead the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes - this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option. false boolean readLockRemoveOnRollback (lock) This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit). true boolean readLockTimeout (lock) Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock could not be granted and the timeout triggered, then Camel will skip the file. At poll Camel, will try the file again, and this time maybe the read-lock could be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed and rename support the timeout. Notice: For FTP the default readLockTimeout value is 20000 instead of 10000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that amble time is allowed for the read lock process to try to grab the lock before the timeout was hit. 10000 long backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. 
int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean shuffle (sort) To shuffle the list of files (sort in random order). false boolean sortBy (sort) Built-in sort by using the File Language. Supports nested sorts, so you can have a sort by file name and as a 2nd group sort by modified date. String sorter (sort) Pluggable sorter as a java.util.Comparator class. Comparator Note Default behavior for file producer By default it will override any existing file, if one exist with the same name. 37.6. Move and Delete operations Any move or delete operations is executed after (post command) the routing has completed; so during processing of the Exchange the file is still located in the inbox folder. Lets illustrate this with an example: from("file://inbox?move=.done").to("bean:handleOrder"); When a file is dropped in the inbox folder, the file consumer notices this and creates a new FileExchange that is routed to the handleOrder bean. The bean then processes the File object. At this point in time the file is still located in the inbox folder. After the bean completes, and thus the route is completed, the file consumer will perform the move operation and move the file to the .done sub-folder. The move and the preMove options are considered as a directory name (though if you use an expression such as File Language, or Simple then the result of the expression evaluation is the file name to be used. 
For example, if you set move=../backup/copy-of-${file:name}, then that's using the File Language, which we use to return the file name to be used), which can be either relative or absolute. If relative, the directory is created as a sub-folder from within the folder where the file was consumed. By default, Camel will move consumed files to the .camel sub-folder relative to the directory where the file was consumed. If you want to delete the file after processing, the route should be: from("file://inbox?delete=true").to("bean:handleOrder"); We have introduced a pre-move operation to move files before they are processed. This allows you to mark which files have been scanned, as they are moved to this sub-folder before being processed. from("file://inbox?preMove=inprogress").to("bean:handleOrder"); You can combine the pre-move and the regular move: from("file://inbox?preMove=inprogress&move=.done").to("bean:handleOrder"); So in this situation, the file is in the inprogress folder when being processed, and after it's processed, it's moved to the .done folder. 37.7. Fine grained control over Move and PreMove option The move and preMove options are Expression-based, so we have the full power of the File Language to do advanced configuration of the directory and name pattern. Camel will, in fact, internally convert the directory name you enter into a File Language expression. So when we enter move=.done Camel will convert this into: ${file:parent}/.done/${file:onlyname}. This is only done if Camel detects that you have not provided a ${ } in the option value yourself. So when you enter a ${ } Camel will not convert it and thus you have the full power. So if we want to move the file into a backup folder with today's date as the pattern, we can do: move=backup/${date:now:yyyyMMdd}/${file:name} 37.8. About moveFailed The moveFailed option allows you to move files that could not be processed successfully to another location such as an error folder of your choice. For example, to move the files into an error folder with a timestamp you can use moveFailed=/error/${file:name.noext}-${date:now:yyyyMMddHHmmssSSS}.${file:ext}. 37.9. Message Headers The following headers are supported by this component: 37.9.1. File producer only Header Description CamelFileName Specifies the name of the file to write (relative to the endpoint directory). This name can be a String; a String with a File Language or Simple language expression; or an Expression object. If it's null then Camel will auto-generate a filename based on the message unique ID. CamelFileNameProduced The actual absolute filepath (path + name) for the output file that was written. This header is set by Camel and its purpose is providing end-users with the name of the file that was written. CamelOverruleFileName Is used for overruling the CamelFileName header and using the value instead (but only once, as the producer will remove this header after writing the file). The value can only be a String. Notice that if the option fileName has been configured, then this is still being evaluated. 37.9.2. File consumer only Header Description CamelFileName Name of the consumed file as a relative file path with offset from the starting directory configured on the endpoint. CamelFileNameOnly Only the file name (the name with no leading paths). CamelFileAbsolute A boolean option specifying whether the consumed file denotes an absolute path or not. Should normally be false for relative paths. Absolute paths should normally not be used, but we added support to the move option to allow moving files to absolute paths.
But it can be used elsewhere as well. CamelFileAbsolutePath The absolute path to the file. For relative files this path holds the relative path instead. CamelFilePath The file path. For relative files this is the starting directory + the relative filename. For absolute files this is the absolute path. CamelFileRelativePath The relative path. CamelFileParent The parent path. CamelFileLength A long value containing the file size. CamelFileLastModified A Long value containing the last modified timestamp of the file. 37.10. Batch Consumer This component implements the Batch Consumer. 37.11. Exchange Properties, file consumer only As the file consumer implements the BatchConsumer it supports batching the files it polls. By batching we mean that Camel will add the following additional properties to the Exchange, so you know the number of files polled, the current index, and whether the batch is already completed. Property Description CamelBatchSize The total number of files that were polled in this batch. CamelBatchIndex The current index of the batch. Starts from 0. CamelBatchComplete A boolean value indicating the last Exchange in the batch. Is only true for the last entry. This allows you, for instance, to know how many files exist in this batch and let the Aggregator aggregate this number of files. 37.12. Using charset The charset option allows for configuring an encoding of the files on both the consumer and producer endpoints. For example, if you read utf-8 files and want to convert the files to iso-8859-1, you can do: from("file:inbox?charset=utf-8") .to("file:outbox?charset=iso-8859-1") You can also use the convertBodyTo in the route. In the example below we still have input files in utf-8 format, but we want to convert the file content to a byte array in iso-8859-1 format, and then let a bean process the data before writing the content to the outbox folder using the current charset. from("file:inbox?charset=utf-8") .convertBodyTo(byte[].class, "iso-8859-1") .to("bean:myBean") .to("file:outbox"); If you omit the charset on the consumer endpoint, then Camel does not know the charset of the file, and would by default use "UTF-8". However, you can configure a JVM system property to override and use a different default encoding with the key org.apache.camel.default.charset. In the example below this could be a problem if the files are not in UTF-8 encoding, which would be the default encoding used for reading the files. In this example, when writing the files, the content has already been converted to a byte array, and thus would write the content directly as is (without any further encodings). from("file:inbox") .convertBodyTo(byte[].class, "iso-8859-1") .to("bean:myBean") .to("file:outbox"); You can also override and control the encoding dynamically when writing files, by setting a property on the exchange with the key Exchange.CHARSET_NAME. For example, in the route below we set the property with a value from a message header. from("file:inbox") .convertBodyTo(byte[].class, "iso-8859-1") .to("bean:myBean") .setProperty(Exchange.CHARSET_NAME, header("someCharsetHeader")) .to("file:outbox"); We suggest keeping things simple, so if you pick up files with the same encoding and want to write the files in a specific encoding, then favor using the charset option on the endpoints. Notice that if you have explicitly configured a charset option on the endpoint, then that configuration is used, regardless of the Exchange.CHARSET_NAME property.
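To make the default-charset fallback concrete, here is a minimal sketch of a standalone application that sets the org.apache.camel.default.charset system property before the CamelContext starts. This is a hedged illustration: it assumes the Camel 3 style org.apache.camel.main.Main runner, and the inbox/outbox directories are made up for the example.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class CharsetDefaultExample {
    public static void main(String[] args) throws Exception {
        // Must be set before the CamelContext starts. It only affects
        // endpoints that do NOT declare an explicit charset option.
        System.setProperty("org.apache.camel.default.charset", "iso-8859-1");

        Main main = new Main();
        main.configure().addRoutesBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                // inbox has no charset option, so files are read using the
                // default charset set above; the explicit charset on the
                // outbox endpoint always wins when writing.
                from("file:inbox")
                    .to("file:outbox?charset=utf-8");
            }
        });
        main.run(args);
    }
}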
If you have some issues, you can enable DEBUG logging on org.apache.camel.component.file, and Camel logs when it reads/writes a file using a specific charset. For example, with the route below Camel will log which charset is used when reading the files from inbox and which charset is used when writing the files to outbox: from("file:inbox?charset=utf-8") .to("file:outbox?charset=iso-8859-1") 37.13. Common gotchas with folder and filenames When Camel is producing files (writing files) there are a few gotchas affecting how to set a filename of your choice. By default, Camel will use the message ID as the filename, and since the message ID is normally a unique generated ID, you will end up with filenames such as: ID-MACHINENAME-2443-1211718892437-1-0. If such a filename is not desired, then you must provide a filename in the CamelFileName message header. The constant, Exchange.FILE_NAME, can also be used. The sample code below produces files using the message ID as the filename: from("direct:report").to("file:target/reports"); To use report.txt as the filename you have to do: from("direct:report").setHeader(Exchange.FILE_NAME, constant("report.txt")).to( "file:target/reports"); The same as above, but with CamelFileName: from("direct:report").setHeader("CamelFileName", constant("report.txt")).to( "file:target/reports"); And finally, we can set the filename on the endpoint with the fileName URI option: from("direct:report").to("file:target/reports/?fileName=report.txt"); 37.14. Filename Expression The filename can be set either using the expression option or as a string-based File language expression in the CamelFileName header. See the File language for syntax and samples. 37.15. Consuming files from folders where others drop files directly Beware if you consume files from a folder where other applications write files directly. Take a look at the different readLock options to see what suits your use cases. The best approach is, however, to write to another folder and, after the write, move the file into the drop folder. However, if you write files directly to the drop folder, then the changed option (readLock=changed) could better detect whether a file is currently being written/copied, as it uses a file changed algorithm to see whether the file size / modification changes over a period of time. The other readLock options rely on the Java File API, which sadly is not always very good at detecting this. You may also want to look at the doneFileName option, which uses a marker file (done file) to signal when a file is done and ready to be consumed. 37.16. Using done files See also the section on writing done files below. If you only want to consume files when a done file exists, then you can use the doneFileName option on the endpoint. from("file:bar?doneFileName=done"); Will only consume files from the bar folder if a done file exists in the same directory as the target files. Camel will automatically delete the done file when it's done consuming the files. Camel does not automatically delete the done file if noop=true is configured. However, it is more common to have one done file per target file. This means there is a 1:1 correlation. To do this you must use dynamic placeholders in the doneFileName option. Currently Camel supports the following two dynamic tokens: file:name and file:name.noext, which must be enclosed in ${ }. The consumer only supports the static part of the done file name as either prefix or suffix (not both). from("file:bar?doneFileName=${file:name}.done"); In this example, files will only be polled if there exists a done file with the name of the target file plus .done.
For example: hello.txt - is the file to be consumed hello.txt.done - is the associated done file You can also use a prefix for the done file, such as: from("file:bar?doneFileName=ready-${file:name}"); hello.txt - is the file to be consumed ready-hello.txt - is the associated done file 37.17. Writing done files After you have written a file you may want to write an additional done file as a kind of marker, to indicate to others that the file is finished and has been written. To do that you can use the doneFileName option on the file producer endpoint. .to("file:bar?doneFileName=done"); Will simply create a file named done in the same directory as the target file. However, it is more common to have one done file per target file. This means there is a 1:1 correlation. To do this you must use dynamic placeholders in the doneFileName option. Currently Camel supports the following two dynamic tokens: file:name and file:name.noext, which must be enclosed in ${ }. .to("file:bar?doneFileName=done-${file:name}"); Will for example create a file named done-foo.txt if the target file was foo.txt, in the same directory as the target file. .to("file:bar?doneFileName=${file:name}.done"); Will for example create a file named foo.txt.done if the target file was foo.txt, in the same directory as the target file. .to("file:bar?doneFileName=${file:name.noext}.done"); Will for example create a file named foo.done if the target file was foo.txt, in the same directory as the target file. 37.18. Samples 37.18.1. Read from a directory and write to another directory from("file://inputdir/?delete=true").to("file://outputdir") 37.18.2. Read from a directory and write to another directory using an overruled dynamic name from("file://inputdir/?delete=true").to("file://outputdir?overruleFile=copy-of-${file:name}") Listen on a directory and create a message for each file dropped there. Copy the contents to the outputdir and delete the file in the inputdir. 37.18.3. Reading recursively from a directory and writing to another from("file://inputdir/?recursive=true&delete=true").to("file://outputdir") Listen on a directory and create a message for each file dropped there. Copy the contents to the outputdir and delete the file in the inputdir. Will scan recursively into sub-directories. Will lay out the files in the same directory structure in the outputdir as the inputdir, including any sub-directories. inputdir/foo.txt inputdir/sub/bar.txt Will result in the following output layout: outputdir/foo.txt outputdir/sub/bar.txt 37.19. Using flatten If you want to store the files in the outputdir directory in the same directory, disregarding the source directory layout (e.g. to flatten out the path), you just add the flatten=true option on the file producer side: from("file://inputdir/?recursive=true&delete=true").to("file://outputdir?flatten=true") Will result in the following output layout: outputdir/foo.txt outputdir/bar.txt 37.20. Reading from a directory and the default move operation Camel will by default move any processed file into a .camel subdirectory in the directory the file was consumed from. from("file://inputdir/?recursive=true&delete=true").to("file://outputdir") Affects the layout as follows: before: inputdir/foo.txt inputdir/sub/bar.txt after: inputdir/.camel/foo.txt inputdir/sub/.camel/bar.txt outputdir/foo.txt outputdir/sub/bar.txt 37.21. Read from a directory and process the message in java from("file://inputdir/").process(new Processor() { public void process(Exchange exchange) throws Exception { Object body = exchange.getIn().getBody(); // do some business logic with the input body } }); The body will be a File object that points to the file that was just dropped into the inputdir directory.
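As a small extension of the sample above, here is a hedged sketch showing that you can also let Camel's type converters read the file content for you instead of working with the raw File object (the directory name is carried over from the sample; the class name is made up):

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

public class ReadContentRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file://inputdir/").process(new Processor() {
            public void process(Exchange exchange) throws Exception {
                // ask Camel to convert the file body into its text content
                String content = exchange.getIn().getBody(String.class);
                // do some business logic with the content
            }
        });
    }
}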
37.22. Writing to files Camel is of course also able to write files, i.e. produce files. In the sample below we receive some reports on the SEDA queue that we process before they are written to a directory. 37.22.1. Write to subdirectory using Exchange.FILE_NAME Using a single route, it is possible to write a file to any number of subdirectories. If you have a route setup as such: <route> <from uri="bean:myBean"/> <to uri="file:/rootDirectory"/> </route> You can have myBean set the header Exchange.FILE_NAME to values such as hello.txt (written as /rootDirectory/hello.txt) or foo/bye.txt (written as /rootDirectory/foo/bye.txt). This allows you to have a single route to write files to multiple destinations. 37.22.2. Writing file through the temporary directory relative to the final destination Sometimes you need to temporarily write the files to some directory relative to the destination directory. Such a situation usually happens when some external process with limited filtering capabilities is reading from the directory you are writing to. In the example below files will be written to the /var/myapp/filesInProgress directory and, after the data transfer is done, they will be atomically moved to the /var/myapp/finalDirectory directory. from("direct:start"). to("file:///var/myapp/finalDirectory?tempPrefix=/../filesInProgress/"); 37.23. Using expression for filenames In this sample we want to move consumed files to a backup folder using today's date as a sub-folder name: from("file://inbox?move=backup/${date:now:yyyyMMdd}/${file:name}").to("..."); See File language for more samples. 37.24. Avoiding reading the same file more than once (idempotent consumer) Camel supports Idempotent Consumer directly within the component, so it will skip already processed files. This feature can be enabled by setting the idempotent=true option. from("file://inbox?idempotent=true").to("..."); Camel uses the absolute file name as the idempotent key to detect duplicate files. You can customize this key by using an expression in the idempotentKey option. For example, to use both the name and the file size as the key: <route> <from uri="file://inbox?idempotent=true&idempotentKey=${file:name}-${file:size}"/> <to uri="bean:processInbox"/> </route> By default Camel uses an in-memory based store for keeping track of consumed files; it uses a least recently used cache holding up to 1000 entries. You can plug in your own implementation of this store by using the idempotentRepository option, using the # sign in the value to indicate that it's referring to a bean in the Registry with the specified id. <!-- define our store as a plain spring bean --> <bean id="myStore" class="com.mycompany.MyIdempotentStore"/> <route> <from uri="file://inbox?idempotent=true&idempotentRepository=#myStore"/> <to uri="bean:processInbox"/> </route> Camel will log at DEBUG level if it skips a file because it has been consumed before: DEBUG FileConsumer is idempotent and the file has been consumed before. Will skip this file: target\idempotent\report.txt 37.25. Using a file based idempotent repository In this section we will use the file based idempotent repository org.apache.camel.processor.idempotent.FileIdempotentRepository instead of the in-memory based one that is used as default. This repository uses a 1st level cache to avoid reading the file repository. It will only use the file repository to store the content of the 1st level cache. Thereby the repository can survive server restarts. It will load the content of the file into the 1st level cache upon startup. The file structure is very simple as it stores the key in separate lines in the file.
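Here is a minimal sketch of wiring this repository up from Java. It is hedged: it assumes the Camel 3 API, where the class and its static factory live in org.apache.camel.support.processor.idempotent (older releases use org.apache.camel.processor.idempotent), and the file path, cache size, and bean name are made up for the example.

import java.io.File;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.spi.IdempotentRepository;
import org.apache.camel.support.processor.idempotent.FileIdempotentRepository;

public class FileStoreRoute extends RouteBuilder {
    @Override
    public void configure() {
        // a 1st level cache of 1000 entries, persisted to .filestore.dat
        IdempotentRepository repo = FileIdempotentRepository.fileIdempotentRepository(
                new File("target/fileidempotent/.filestore.dat"), 1000);

        // bind it in the Registry so the endpoint can reference it by name
        getContext().getRegistry().bind("fileStore", repo);

        from("file://inbox?idempotent=true&idempotentRepository=#fileStore")
            .to("bean:processInbox");
    }
}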
37.26. Using a JPA based idempotent repository
In this section we will use the JPA based idempotent repository instead of the in-memory one that is used by default. First we need a persistence-unit in META-INF/persistence.xml, where we need to use the class org.apache.camel.processor.idempotent.jpa.MessageProcessed as the model.
<persistence-unit name="idempotentDb" transaction-type="RESOURCE_LOCAL">
    <class>org.apache.camel.processor.idempotent.jpa.MessageProcessed</class>
    <properties>
        <property name="openjpa.ConnectionURL" value="jdbc:derby:target/idempotentTest;create=true"/>
        <property name="openjpa.ConnectionDriverName" value="org.apache.derby.jdbc.EmbeddedDriver"/>
        <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema"/>
        <property name="openjpa.Log" value="DefaultLevel=WARN, Tool=INFO"/>
        <property name="openjpa.Multithreaded" value="true"/>
    </properties>
</persistence-unit>
Next, we can create our JPA idempotent repository in the spring XML file as well:
<!-- we define our jpa based idempotent repository we want to use in the file consumer -->
<bean id="jpaStore" class="org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository">
    <!-- Here we refer to the entityManagerFactory -->
    <constructor-arg index="0" ref="entityManagerFactory"/>
    <!-- This 2nd parameter is the name (= a category name). You can have different repositories with different names -->
    <constructor-arg index="1" value="FileConsumer"/>
</bean>
Then we just need to refer to the jpaStore bean in the file consumer endpoint, using the idempotentRepository option with the # syntax:
<route>
    <from uri="file://inbox?idempotent=true&idempotentRepository=#jpaStore"/>
    <to uri="bean:processInbox"/>
</route>
37.27. Filter using org.apache.camel.component.file.GenericFileFilter
Camel supports pluggable filtering strategies. You can configure the endpoint with such a filter to skip certain files being processed. In the sample we have built our own filter that skips files whose name starts with skip (a sketch of such a filter is shown in Section 37.30 below). We can then configure our route using the filter attribute to reference our filter (using # notation) that we have defined in the spring XML file:
<!-- define our filter as a plain spring bean -->
<bean id="myFilter" class="com.mycompany.MyFileFilter"/>
<route>
    <from uri="file://inbox?filter=#myFilter"/>
    <to uri="bean:processInbox"/>
</route>
37.28. Filtering using ANT path matcher
The ANT path matcher is based on AntPathMatcher. The file paths are matched with the following rules:
? matches one character
* matches zero or more characters
** matches zero or more directories in a path
The antInclude and antExclude options make it easy to specify ANT style include/exclude without having to define the filter. See the URI options above for more information. The sample below demonstrates how to use it.
Note: Using minDepth/maxDepth in combination with recursive=true, antExclude=..., and readLockDeleteOrphanLockFiles=true results in scanning all the files/subfolders deeper than the value given in maxDepth. The workaround is to configure readLockDeleteOrphanLockFiles=false.
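The original sample is not included above; a minimal sketch of ANT style filtering (the directory and endpoint names are illustrative assumptions) could look like this:
// include only .txt files anywhere below inbox, but skip anything under a tmp folder
from("file://inbox?recursive=true&antInclude=**/*.txt&antExclude=**/tmp/**")
    .to("bean:processInbox");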
37.28.1. Sorting using Comparator
Camel supports pluggable sorting strategies. This strategy is to use the built-in java.util.Comparator in Java. You can then configure the endpoint with such a comparator and have Camel sort the files before they are processed. In the sample we have built our own comparator that just sorts by file name. We can then configure our route using the sorter option to refer to our sorter ( mySorter ) defined in the spring XML file:
<!-- define our sorter as a plain spring bean -->
<bean id="mySorter" class="com.mycompany.MyFileSorter"/>
<route>
    <from uri="file://inbox?sorter=#mySorter"/>
    <to uri="bean:processInbox"/>
</route>
Note: URI options can reference beans using the # syntax. In the Spring DSL route above, notice that we can refer to beans in the Registry by prefixing the id with #. So writing sorter=#mySorter will instruct Camel to look in the Registry for a bean with the ID mySorter.
37.28.2. Sorting using sortBy
Camel supports pluggable sorting strategies. This strategy is to use the File language to configure the sorting. The sortBy option is configured as follows:
sortBy=group 1;group 2;group 3;...
Where each group is separated with a semicolon. In simple situations you just use one group, so a simple example could be:
sortBy=file:name
This will sort by file name. You can reverse the order by prefixing reverse: to the group, so the sorting is now Z..A:
sortBy=reverse:file:name
As we have the full power of File language, we can use some of the other parameters, so if we want to sort by file size we do:
sortBy=file:length
You can configure it to ignore the case, using ignoreCase: for string comparison, so if you want to use file name sorting but ignore the case, then we do:
sortBy=ignoreCase:file:name
You can combine ignore case and reverse; however, reverse must be specified first:
sortBy=reverse:ignoreCase:file:name
In the sample below we want to sort by last modified file, so we do:
sortBy=file:modified
And then we want to group by name as a 2nd option, so files with the same modification time are sorted by name:
sortBy=file:modified;file:name
Now there is an issue here, can you spot it? Well, the modified timestamp of the file is too fine-grained, as it is in milliseconds. But what if we want to sort by date only and then subgroup by name? Well, as we have the true power of File language, we can use its date command that supports patterns. So this can be solved as:
sortBy=date:file:yyyyMMdd;file:name
That is pretty powerful. By the way, you can also use reverse per group, so we could reverse the file names:
sortBy=date:file:yyyyMMdd;reverse:file:name
37.29. Using GenericFileProcessStrategy
The processStrategy option can be used to plug in a custom GenericFileProcessStrategy that allows you to implement your own begin, commit and rollback logic. For instance, let's assume a system writes a file in a folder you should consume, but you should not start consuming the file before another ready file has been written as well. We can implement this with our own GenericFileProcessStrategy: in the begin() method we test whether the special ready file exists; the begin method returns a boolean to indicate whether we can consume the file or not. In the abort() method, special logic can be executed in case the begin operation returned false, for example to clean up resources. In the commit() method we can move the actual file and also delete the ready file. A sketch of such a strategy is shown below.
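The original implementation is not included above. The following sketch illustrates the idea (the class name and the .ready marker suffix are illustrative assumptions, and the method signatures reflect the org.apache.camel.component.file.GenericFileProcessStrategy interface as understood here; verify them against your Camel version):
import java.io.File;
import org.apache.camel.Exchange;
import org.apache.camel.component.file.GenericFile;
import org.apache.camel.component.file.GenericFileEndpoint;
import org.apache.camel.component.file.GenericFileOperations;
import org.apache.camel.component.file.GenericFileProcessStrategy;

public class ReadyFileProcessStrategy<T> implements GenericFileProcessStrategy<T> {

    public void prepareOnStartup(GenericFileOperations<T> operations,
            GenericFileEndpoint<T> endpoint) throws Exception {
        // no startup preparation needed in this sketch
    }

    public boolean begin(GenericFileOperations<T> operations, GenericFileEndpoint<T> endpoint,
            Exchange exchange, GenericFile<T> file) throws Exception {
        // only consume the file if its ready file has been written
        return readyFile(file).exists();
    }

    public void abort(GenericFileOperations<T> operations, GenericFileEndpoint<T> endpoint,
            Exchange exchange, GenericFile<T> file) throws Exception {
        // begin returned false: nothing to clean up in this sketch
    }

    public void commit(GenericFileOperations<T> operations, GenericFileEndpoint<T> endpoint,
            Exchange exchange, GenericFile<T> file) throws Exception {
        // processing succeeded: remove the ready marker
        readyFile(file).delete();
    }

    public void rollback(GenericFileOperations<T> operations, GenericFileEndpoint<T> endpoint,
            Exchange exchange, GenericFile<T> file) throws Exception {
        // processing failed: keep the ready file so the data file is retried
    }

    private File readyFile(GenericFile<T> file) {
        return new File(file.getAbsoluteFilePath() + ".ready");
    }
}
The strategy would then be referenced from the endpoint in the usual registry style, for example from("file://inbox?processStrategy=#myStrategy").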
37.30. Using filter
The filter option allows you to implement a custom filter in Java code by implementing the org.apache.camel.component.file.GenericFileFilter interface. This interface has an accept method that returns a boolean: return true to include the file, and false to skip the file. There is an isDirectory method on GenericFile that indicates whether the file is a directory. This allows you to filter unwanted directories, to avoid traversing down unwanted directory trees. For example, skipping any directory whose name starts with "skip" can be implemented as follows:
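A minimal sketch of such a filter (matching the com.mycompany.MyFileFilter bean referenced in Section 37.27; the implementation details are an illustrative assumption):
import org.apache.camel.component.file.GenericFile;
import org.apache.camel.component.file.GenericFileFilter;

public class MyFileFilter<T> implements GenericFileFilter<T> {

    public boolean accept(GenericFile<T> file) {
        // skip directories whose name starts with "skip", so they are not traversed
        if (file.isDirectory()) {
            return !file.getFileName().startsWith("skip");
        }
        // regular files: skip those whose name starts with "skip" (as in Section 37.27)
        return !file.getFileName().startsWith("skip");
    }
}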
37.31. Using bridgeErrorHandler
If you want to use the Camel Error Handler to deal with any exception occurring in the file consumer, then you can enable the bridgeErrorHandler option as shown below:
// to handle any IOException being thrown
onException(IOException.class)
    .handled(true)
    .log("IOException occurred due: ${exception.message}")
    .transform().simple("Error ${exception.message}")
    .to("mock:error");

// this is the file route that picks up files; notice how we bridge the consumer to use the Camel routing error handler
// the exclusiveReadLockStrategy is only configured because this is from a unit test, so we use that to simulate exceptions
from("file:target/nospace?bridgeErrorHandler=true")
    .convertBodyTo(String.class)
    .to("mock:result");
So all you have to do is to enable this option, and the error handler in the route will take it from there.
Important: When using bridgeErrorHandler, interceptors and onCompletion do not apply. The Exchange is processed directly by the Camel Error Handler, and it does not allow prior actions such as interceptors or onCompletion to take action.
37.32. Debug logging
This component has log level TRACE that can be helpful if you have problems.
37.33. Spring Boot Auto-Configuration
The component supports 11 options, which are listed below.
camel.cluster.file.acquire-lock-delay (String): The time to wait before starting to try to acquire lock.
camel.cluster.file.acquire-lock-interval (String): The time to wait between attempts to try to acquire lock.
camel.cluster.file.attributes (Map): Custom service attributes.
camel.cluster.file.enabled (Boolean, default false): Sets if the file cluster service should be enabled or not; default is false.
camel.cluster.file.id (String): Cluster Service ID.
camel.cluster.file.order (Integer): Service lookup order/priority.
camel.cluster.file.root (String): The root path.
camel.component.file.autowired-enabled (Boolean, default true): Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring of JDBC data sources, JMS connection factories, AWS clients, and so on.
camel.component.file.bridge-error-handler (Boolean, default false): Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.
camel.component.file.enabled (Boolean): Whether to enable auto configuration of the file component. This is enabled by default.
camel.component.file.lazy-start-producer (Boolean, default false): Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-file-starter</artifactId> </dependency>",
"file:directoryName[?options]",
"file:directoryName",
"from(\"file://inbox?move=.done\").to(\"bean:handleOrder\");",
"move=../backup/copy-of-USD{file:name}",
"from(\"file://inbox?delete=true\").to(\"bean:handleOrder\");",
"from(\"file://inbox?preMove=inprogress\").to(\"bean:handleOrder\");",
"from(\"file://inbox?preMove=inprogress&move=.done\").to(\"bean:handleOrder\");",
"move=backup/USD{date:now:yyyyMMdd}/USD{file:name}",
"from(\"file:inbox?charset=utf-8\") .to(\"file:outbox?charset=iso-8859-1\")",
"from(\"file:inbox?charset=utf-8\") .convertBodyTo(byte[].class, \"iso-8859-1\") .to(\"bean:myBean\") .to(\"file:outbox\");",
"from(\"file:inbox\") .convertBodyTo(byte[].class, \"iso-8859-1\") .to(\"bean:myBean\") .to(\"file:outbox\");",
"from(\"file:inbox\") .convertBodyTo(byte[].class, \"iso-8859-1\") .to(\"bean:myBean\") .setProperty(Exchange.CHARSET_NAME, header(\"someCharsetHeader\")) .to(\"file:outbox\");",
"from(\"file:inbox?charset=utf-8\") .to(\"file:outbox?charset=iso-8859-1\")",
"DEBUG GenericFileConverter - Read file /Users/davsclaus/workspace/camel/camel-core/target/charset/input/input.txt with charset utf-8 DEBUG FileOperations - Using Reader to write file: target/charset/output.txt with charset: iso-8859-1",
"from(\"direct:report\").to(\"file:target/reports\");",
"from(\"direct:report\").setHeader(Exchange.FILE_NAME, constant(\"report.txt\")).to( \"file:target/reports\");",
"from(\"direct:report\").setHeader(\"CamelFileName\", constant(\"report.txt\")).to( \"file:target/reports\");",
"from(\"direct:report\").to(\"file:target/reports/?fileName=report.txt\");",
"from(\"file:bar?doneFileName=done\");",
"from(\"file:bar?doneFileName=USD{file:name}.done\");",
"from(\"file:bar?doneFileName=ready-USD{file:name}\");",
".to(\"file:bar?doneFileName=done\");",
".to(\"file:bar?doneFileName=done-USD{file:name}\");",
".to(\"file:bar?doneFileName=USD{file:name}.done\");",
".to(\"file:bar?doneFileName=USD{file:name.noext}.done\");",
"from(\"file://inputdir/?delete=true\").to(\"file://outputdir\")",
"from(\"file://inputdir/?delete=true\").to(\"file://outputdir?overruleFile=copy-of-USD{file:name}\")",
"from(\"file://inputdir/?recursive=true&delete=true\").to(\"file://outputdir\")",
"inputdir/foo.txt inputdir/sub/bar.txt",
"outputdir/foo.txt outputdir/sub/bar.txt",
"from(\"file://inputdir/?recursive=true&delete=true\").to(\"file://outputdir?flatten=true\")",
"outputdir/foo.txt outputdir/bar.txt",
"from(\"file://inputdir/?recursive=true&delete=true\").to(\"file://outputdir\")",
"inputdir/foo.txt inputdir/sub/bar.txt",
"inputdir/.camel/foo.txt inputdir/sub/.camel/bar.txt outputdir/foo.txt outputdir/sub/bar.txt",
"from(\"file://inputdir/\").process(new Processor() { public void process(Exchange exchange) throws Exception { Object body = exchange.getIn().getBody(); // do some business logic with the input body } });",
"<route> <from uri=\"bean:myBean\"/> <to uri=\"file:/rootDirectory\"/> </route>",
"Exchange.FILE_NAME = hello.txt => /rootDirectory/hello.txt Exchange.FILE_NAME = foo/bye.txt => /rootDirectory/foo/bye.txt",
"from(\"direct:start\"). to(\"file:///var/myapp/finalDirectory?tempPrefix=/../filesInProgress/\");",
"from(\"file://inbox?move=backup/USD{date:now:yyyyMMdd}/USD{file:name}\").to(\"...\");",
"from(\"file://inbox?idempotent=true\").to(\"...\");",
"<route> <from uri=\"file://inbox?idempotent=true&idempotentKey=USD{file:name}-USD{file:size}\"/> <to uri=\"bean:processInbox\"/> </route>",
"<!-- define our store as a plain spring bean --> <bean id=\"myStore\" class=\"com.mycompany.MyIdempotentStore\"/> <route> <from uri=\"file://inbox?idempotent=true&idempotentRepository=#myStore\"/> <to uri=\"bean:processInbox\"/> </route>",
"DEBUG FileConsumer is idempotent and the file has been consumed before. Will skip this file: target\\idempotent\\report.txt",
"<persistence-unit name=\"idempotentDb\" transaction-type=\"RESOURCE_LOCAL\"> <class>org.apache.camel.processor.idempotent.jpa.MessageProcessed</class> <properties> <property name=\"openjpa.ConnectionURL\" value=\"jdbc:derby:target/idempotentTest;create=true\"/> <property name=\"openjpa.ConnectionDriverName\" value=\"org.apache.derby.jdbc.EmbeddedDriver\"/> <property name=\"openjpa.jdbc.SynchronizeMappings\" value=\"buildSchema\"/> <property name=\"openjpa.Log\" value=\"DefaultLevel=WARN, Tool=INFO\"/> <property name=\"openjpa.Multithreaded\" value=\"true\"/> </properties> </persistence-unit>",
"<!-- we define our jpa based idempotent repository we want to use in the file consumer --> <bean id=\"jpaStore\" class=\"org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository\"> <!-- Here we refer to the entityManagerFactory --> <constructor-arg index=\"0\" ref=\"entityManagerFactory\"/> <!-- This 2nd parameter is the name (= a category name). You can have different repositories with different names --> <constructor-arg index=\"1\" value=\"FileConsumer\"/> </bean>",
"<route> <from uri=\"file://inbox?idempotent=true&idempotentRepository=#jpaStore\"/> <to uri=\"bean:processInbox\"/> </route>",
"<!-- define our filter as a plain spring bean --> <bean id=\"myFilter\" class=\"com.mycompany.MyFileFilter\"/> <route> <from uri=\"file://inbox?filter=#myFilter\"/> <to uri=\"bean:processInbox\"/> </route>",
"<!-- define our sorter as a plain spring bean --> <bean id=\"mySorter\" class=\"com.mycompany.MyFileSorter\"/> <route> <from uri=\"file://inbox?sorter=#mySorter\"/> <to uri=\"bean:processInbox\"/> </route>",
"sortBy=group 1;group 2;group 3;",
"sortBy=file:name",
"sortBy=reverse:file:name",
"sortBy=file:length",
"sortBy=ignoreCase:file:name",
"sortBy=reverse:ignoreCase:file:name",
"sortBy=file:modified",
"sortBy=file:modified;file:name",
"sortBy=date:file:yyyyMMdd;file:name",
"sortBy=date:file:yyyyMMdd;reverse:file:name",
"// to handle any IOException being thrown onException(IOException.class) .handled(true) .log(\"IOException occurred due: USD{exception.message}\") .transform().simple(\"Error USD{exception.message}\") .to(\"mock:error\"); // this is the file route that pickup files, notice how we bridge the consumer to use the Camel routing error handler // the exclusiveReadLockStrategy is only configured because this is from an unit test, so we use that to simulate exceptions from(\"file:target/nospace?bridgeErrorHandler=true\") .convertBodyTo(String.class) .to(\"mock:result\");"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-file-component-starter |
Chapter 3. Configuration
3.1. Configuring Authorization
AMQ Broker has role based access control (RBAC) that is used to restrict access to the attributes and methods of MBeans. For instructions on how this is configured, see Configuring AMQ Broker .
If the broker is configured to use RBAC, then the JON plugin must be configured with the correct username and password so that the plugin can authenticate against the broker. Once authenticated, any access to MBeans is restricted based on the BROKER_INSTANCE_DIR/etc/management.xml configuration file.
The JON plugin's default user is 'admin' and the default password is 'activemq'. You can configure the plugin to use a different user/password to connect to the broker. When the JON plugin is initialized, it loads the JMX credentials from a configuration file located at BROKER_INSTANCE_DIR/etc/org.jboss.rh-messaging.amq.jon.cfg. This configuration file looks something like this:
principal = username
credentials = password
connectorAddress = service:jmx:rmi:///jndi/rmi://localhost:11099/jmxrmi
This configuration file also contains the connectorAddress property, which is the JMX address that the plugin uses to connect to the broker.
3.2. Populating the Inventory
The Inventory is where all manageable resources and groups are shown. To populate the AMQ Broker resources, perform the following steps:
Click Resources > Discovery Queue in the left navigation pane. At least one resource, an AMQ Broker instance, is displayed in the Discovery Queue window.
Expand the broker resource entry by clicking the down arrow in the Resource Name column. Sub-entries for at least two servers are displayed, including the AMQ 7 Server and the JMX server. The AMQ 7 Server is the entity for which we need to populate the inventory.
In the Resource Name column, click the checkbox next to the AMQ 7 Server sub-entry. Note that the Import and Ignore buttons are enabled in the bottom left part of the window.
Click Import . A Question dialog opens and asks if you want to discover the platform children.
Click Yes . You should see a confirmation message stating that the resources were successfully imported.
Verify that the import was successful by clicking Resources > Platform in the left navigation pane. You should see multiple entries running in the Platforms window, such as the AMQ Server and the JON Server.
| [
"principal = username credentials = password connectorAddress = service:jmx:rmi:///jndi/rmi://localhost:11099/jmxrmi"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_jon_with_amq_broker/configuration |
5.240. php-pecl-apc 5.240.1. RHSA-2012:0811 - Low: php-pecl-apc security, bug fix, and enhancement update Updated php-pecl-apc packages that fix one security issue, several bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The php-pecl-apc packages contain APC (Alternative PHP Cache), the framework for caching and optimization of intermediate PHP code. Security Fix CVE-2010-3294 A cross-site scripting (XSS) flaw was found in the "apc.php" script, which provides a detailed analysis of the internal workings of APC and is shipped as part of the APC extension documentation. A remote attacker could possibly use this flaw to conduct a cross-site scripting attack. Note The administrative script is not deployed upon package installation. It must manually be copied to the web root (the default is "/var/www/html/", for example). In addition, the php-pecl-apc packages have been upgraded to upstream version 3.1.9, which provides a number of bug fixes and enhancements over the previous version. (BZ# 662655 ) All users of php-pecl-apc are advised to upgrade to these updated packages, which fix these issues and add these enhancements. If the "apc.php" script was previously deployed in the web root, it must manually be re-deployed to replace the vulnerable version to resolve this issue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/php-pecl-apc |
Converting a virtualization cluster to a hyperconverged cluster | Converting a virtualization cluster to a hyperconverged cluster Red Hat Hyperconverged Infrastructure for Virtualization 1.8 Convert existing hyperconverged hosts to create a hyperconverged cluster Laura Bailey [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/converting_a_virtualization_cluster_to_a_hyperconverged_cluster/index |
Chapter 4. Managing networking in the web console
The RHEL web console supports basic network configuration. You can:
Configure IPv4/IPv6 network settings
Manage network bridges
Manage VLANs
Manage Teams
Manage Bonds
Inspect a network log
Note: The RHEL web console is built on top of the NetworkManager service. For details, see Getting started with NetworkManager .
4.1. Prerequisites
The RHEL web console installed and enabled. For details, see Installing the web console .
4.2. Configuring network bridges in the web console
Network bridges are used to connect multiple interfaces to one subnet with the same range of IP addresses.
4.2.1. Adding bridges in the web console
This section describes creating a software bridge on multiple network interfaces using the web console.
Procedure
Log in to the RHEL web console. For details, see Logging in to the web console .
Open Networking .
Click the Add Bridge button.
In the Bridge Settings dialog box, enter a name for the new bridge.
In the Port field, select the interfaces that you want to place in the same subnet.
Optionally, you can select the Spanning Tree protocol (STP) to avoid bridge loops and broadcast radiation. If you do not have a strong preference, leave the predefined values as they are.
Click Create .
If the bridge is successfully created, the web console displays the new bridge in the Networking section. Check the values in the Sending and Receiving columns in the new bridge row. If you can see that zero bytes are sent and received through the bridge, the connection does not work correctly and you need to adjust the network settings.
4.2.2. Configuring a static IP address in the web console
The IP address for your system can be assigned automatically from a pool by the DHCP server, or you can configure the IP address manually. A manually configured IP address is not influenced by the DHCP server settings. This section describes configuring static IPv4 addresses of a network bridge using the RHEL web console.
Procedure
Log in to the RHEL web console. For details, see Logging in to the web console .
Open the Networking section.
Click the interface where you want to set the static IP address.
In the interface details screen, click the IPv4 configuration.
In the IPv4 Settings dialog box, select Manual in the Addresses drop-down list.
Click Apply .
In the Addresses field, enter the desired IP address, netmask and gateway.
Click Apply .
At this point, the IP address has been configured and the interface uses the new static IP address.
4.2.3. Removing interfaces from the bridge using the web console
Network bridges can include multiple interfaces. You can remove them from the bridge. Each removed interface is automatically changed to a standalone interface. This section describes removing a network interface from a software bridge created on a RHEL 7 system.
Prerequisites
Having a bridge with multiple interfaces in your system.
Procedure
Log in to the RHEL web console. For details, see Logging in to the web console .
Open Networking .
Click the bridge you want to configure.
In the bridge settings screen, scroll down to the table of ports (interfaces).
Select the interface and click the - icon.
The RHEL web console removes the interface from the bridge and you can see it back in the Networking section as a standalone interface.
4.2.4. Deleting bridges in the web console
You can delete a software network bridge in the RHEL web console.
All network interfaces included in the bridge will be changed automatically to standalone interfaces.
Prerequisites
Having a bridge in your system.
Procedure
Log in to the RHEL web console. For details, see Logging in to the web console .
Open the Networking section.
Click the bridge you want to configure.
In the bridge settings screen, scroll down to the table of ports.
Click Delete .
At this stage, go back to Networking and verify that all the network interfaces are displayed on the Interfaces tab. Interfaces which were part of the bridge can be inactive now. Therefore, you may need to activate them and set network parameters manually.
4.3. Configuring VLANs in the web console
VLANs (Virtual LANs) are virtual networks created on a single physical Ethernet interface. Each VLAN is defined by an ID, which is a unique positive integer, and works as a standalone interface. The following procedure describes creating VLANs in the RHEL web console.
Prerequisites
Having a network interface in your system.
Procedure
Log in to the RHEL web console. For details, see Logging in to the web console .
Open Networking .
Click the Add VLAN button.
In the VLAN Settings dialog box, select the physical interface for which you want to create a VLAN.
Enter the VLAN Id or just use the predefined number.
In the Name field, you can see a predefined name consisting of the parent interface and the VLAN Id. If no change is necessary, leave the name as it is.
Click Apply .
The new VLAN has been created, and you need to click the VLAN and configure the network settings. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/managing_systems_using_the_rhel_7_web_console/managing-networking-in-the-web-console_system-management-using-the-rhel-7-web-console |
probe::vm.kmem_cache_alloc_node
Name
probe::vm.kmem_cache_alloc_node - Fires when kmem_cache_alloc_node is requested
Synopsis
vm.kmem_cache_alloc_node
Values
gfp_flags: type of kmemory to allocate
name: name of the probe point
bytes_req: requested Bytes
ptr: pointer to the kmemory allocated
bytes_alloc: allocated Bytes
caller_function: name of the caller function
call_site: address of the function calling this kmemory function
gfp_flag_name: type of kmemory to allocate (in string format)
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-vm-kmem-cache-alloc-node |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_security_automation_guide/making-open-source-more-inclusive |
Release Notes for Streams for Apache Kafka 2.7 on OpenShift | Release Notes for Streams for Apache Kafka 2.7 on OpenShift Red Hat Streams for Apache Kafka 2.7 Highlights of what's new and what's changed with this release of Streams for Apache Kafka on OpenShift Container Platform | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/release_notes_for_streams_for_apache_kafka_2.7_on_openshift/index |
Chapter 3. Identity [user.openshift.io/v1]
Description Identity records a successful authentication of a user with an identity provider. The information about the source of authentication is stored on the identity, and the identity is then associated with a single user object. Multiple identities can reference a single user. Information retrieved from the authentication provider is stored in the extra field using a schema determined by the provider. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer).
Type object
Required providerName providerUserName user
3.1. Specification
Property Type Description
apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
extra object (string) Extra holds extra information about this identity
kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
providerName string ProviderName is the source of identity information
providerUserName string ProviderUserName uniquely represents this identity in the scope of the provider
user ObjectReference User is a reference to the user this identity is associated with. Both Name and UID must be set
3.2. API endpoints
The following API endpoints are available:
/apis/user.openshift.io/v1/identities
DELETE : delete collection of Identity
GET : list or watch objects of kind Identity
POST : create an Identity
/apis/user.openshift.io/v1/watch/identities
GET : watch individual changes to a list of Identity. deprecated: use the 'watch' parameter with a list operation instead.
/apis/user.openshift.io/v1/identities/{name}
DELETE : delete an Identity
GET : read the specified Identity
PATCH : partially update the specified Identity
PUT : replace the specified Identity
/apis/user.openshift.io/v1/watch/identities/{name}
GET : watch changes to an object of kind Identity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter.
3.2.1. /apis/user.openshift.io/v1/identities
HTTP method DELETE
Description delete collection of Identity
Table 3.1. Query parameters
Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
Table 3.2. HTTP responses
HTTP code Response body
200 - OK Status schema
401 - Unauthorized Empty
HTTP method GET
Description list or watch objects of kind Identity
Table 3.3. HTTP responses
HTTP code Response body
200 - OK IdentityList schema
401 - Unauthorized Empty
HTTP method POST
Description create an Identity
Table 3.4. Query parameters
Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 3.5. Body parameters
Parameter Type Description
body Identity schema
Table 3.6. HTTP responses
HTTP code Response body
200 - OK Identity schema
201 - Created Identity schema
202 - Accepted Identity schema
401 - Unauthorized Empty
3.2.2. /apis/user.openshift.io/v1/watch/identities
HTTP method GET
Description watch individual changes to a list of Identity. deprecated: use the 'watch' parameter with a list operation instead.
Table 3.7. HTTP responses
HTTP code Response body
200 - OK WatchEvent schema
401 - Unauthorized Empty
3.2.3. /apis/user.openshift.io/v1/identities/{name}
Table 3.8. Global path parameters
Parameter Type Description
name string name of the Identity
HTTP method DELETE
Description delete an Identity
Table 3.9. Query parameters
Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
Table 3.10. HTTP responses
HTTP code Response body
200 - OK Status schema
202 - Accepted Status schema
401 - Unauthorized Empty
HTTP method GET
Description read the specified Identity
Table 3.11. HTTP responses
HTTP code Response body
200 - OK Identity schema
401 - Unauthorized Empty
HTTP method PATCH
Description partially update the specified Identity
Table 3.12. Query parameters
Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 3.13. HTTP responses
HTTP code Response body
200 - OK Identity schema
201 - Created Identity schema
401 - Unauthorized Empty
HTTP method PUT
Description replace the specified Identity
Table 3.14. Query parameters
Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 3.15. Body parameters
Parameter Type Description
body Identity schema
Table 3.16. HTTP responses
HTTP code Response body
200 - OK Identity schema
201 - Created Identity schema
401 - Unauthorized Empty
3.2.4. /apis/user.openshift.io/v1/watch/identities/{name}
Table 3.17. Global path parameters
Parameter Type Description
name string name of the Identity
HTTP method GET
Description watch changes to an object of kind Identity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter.
Table 3.18. HTTP responses
HTTP code Response body
200 - OK WatchEvent schema
401 - Unauthorized Empty
| null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/user_and_group_apis/identity-user-openshift-io-v1 |
Chapter 2. Before You Begin
2.1. Comparison: Red Hat Single Sign-On for OpenShift Image and Red Hat Single Sign-On
The Red Hat Single Sign-On for OpenShift image version number 7.4.10.GA is based on Red Hat Single Sign-On 7.4.10.GA. There are some differences in functionality between the Red Hat Single Sign-On for OpenShift image and Red Hat Single Sign-On: The Red Hat Single Sign-On for OpenShift image includes all of the functionality of Red Hat Single Sign-On. In addition, the Red Hat Single Sign-On-enabled JBoss EAP image automatically handles OpenID Connect or SAML client registration and configuration for .war deployments that contain <auth-method>KEYCLOAK</auth-method> or <auth-method>KEYCLOAK-SAML</auth-method> in their respective web.xml files.
2.2. Version Compatibility and Support
See the xPaaS part of the OpenShift and Atomic Platform Tested Integrations page for details about OpenShift image version compatibility.
2.3. Deprecated Image Streams and Application Templates for Red Hat Single Sign-On for OpenShift
Important: The Red Hat Single Sign-On for OpenShift images with version numbers between 7.0 and 7.3 are deprecated, and they no longer receive image and application template updates. To deploy new applications, it is recommended to use version 7.4 or 7.4.10.GA of the Red Hat Single Sign-On for OpenShift image along with the application templates specific to these image versions.
2.4. Initial Setup
The tutorials in this guide follow on from, and assume, an OpenShift instance similar to that created by performing the installation of the OpenShift Container Platform cluster .
Important: For information related to updating the existing database when migrating the Red Hat Single Sign-On for OpenShift image from previous versions to version 7.4.10.GA, see the Updating Existing Database when Migrating Red Hat Single Sign-On for OpenShift Image to a new version section. | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/red_hat_single_sign-on_for_openshift_on_openjdk/before_you_begin |
Chapter 24. LDAP authentication
Administrators use the Lightweight Directory Access Protocol (LDAP) as a source of account authentication information for automation controller users. User authentication is provided, but not the synchronization of user permissions and credentials. Organization membership and team membership can be synchronized by the organization administrator.
24.1. Setting up LDAP authentication
When configured, a user who logs in with an LDAP username and password automatically has an automation controller account created for them. They can be automatically placed into organizations as either regular users or organization administrators.
Users created in the user interface (Local) take precedence over those logging into automation controller for their first time with an alternative authentication solution. You must delete the local user if you want to re-use the account with another authentication method, such as LDAP.
Users created through an LDAP login cannot change their username, given name, or surname, or set a local password for themselves. You can also configure this to restrict editing of other field names.
Note: If the LDAP server you want to connect to has a certificate that is self-signed or signed by a corporate internal certificate authority (CA), you must add the CA certificate to the system's trusted CAs. Otherwise, connection to the LDAP server results in an error that the certificate issuer is not recognized. For more information, see Importing a certificate authority in automation controller for LDAPS integration . If prompted, use your Red Hat customer credentials to log in.
Procedure
Create a user in LDAP that has access to read the entire LDAP structure.
Use the ldapsearch command to test if you can make successful queries to the LDAP server. You can install this tool on automation controller's system command line, as well as on other Linux and OSX systems.
Example
ldapsearch -x -H ldap://win -D "CN=josie,CN=Users,DC=website,DC=com" -b "dc=website,dc=com" -w Josie4Cloud
In this example, CN=josie,CN=users,DC=website,DC=com is the distinguished name of the connecting user.
Note: The ldapsearch utility is not automatically pre-installed with automation controller. However, you can install it from the openldap-clients package.
From the navigation panel, select Settings in the automation controller UI.
Select LDAP settings in the list of Authentication options. You do not need multiple LDAP configurations per LDAP server, but you can configure many LDAP servers from this page. Otherwise, leave the server at Default . The equivalent API endpoints show AUTH_LDAP_* repeated: AUTH_LDAP_1_* , AUTH_LDAP_2_* , AUTH_LDAP_5_* to denote server designations.
To enter or change the LDAP server address, click Edit and enter the address in the LDAP Server URI field, using the same format as the one pre-populated in the text field.
Note: You can specify multiple LDAP servers by separating each with spaces or commas. Click the icon to comply with the correct syntax and rules.
Enter the password to use for the binding user in the LDAP Bind Password text field. For more information about LDAP variables, see Ansible automation hub variables .
Select a group type from the LDAP Group Type list. The LDAP group types that are supported by automation controller use the underlying django-auth-ldap library . To specify the parameters for the selected group type, see Step 15.
LDAP Start TLS is disabled by default. To enable TLS when the LDAP connection is not using SSL/TLS, set the toggle to On .
Enter the distinguished name in the LDAP Bind DN text field to specify the user that automation controller uses to connect (Bind) to the LDAP server. If that name is stored in the key sAMAccountName , the LDAP User DN Template is populated from (sAMAccountName=%(user)s) . Active Directory stores the username in sAMAccountName . For OpenLDAP, the key is uid and the line becomes (uid=%(user)s) .
Enter the distinguished group name to enable users within that group to access automation controller in the LDAP Require Group field, using the same format as the one shown in the text field, CN=controller Users,OU=Users,DC=website,DC=com .
Enter the distinguished group name to prevent users within that group from accessing automation controller in the LDAP Deny Group field, using the same format as the one shown in the text field.
Enter where to search for users while authenticating in the LDAP User Search field, using the same format as the one shown in the text field. In this example, use:
[ "OU=Users,DC=website,DC=com", "SCOPE_SUBTREE", "(cn=%(user)s)" ]
The first line specifies where to search for users in the LDAP tree. In the earlier example, the users are searched recursively starting from DC=website,DC=com .
The second line specifies the scope where the users should be searched:
SCOPE_BASE : Use this value to indicate searching only the entry at the base DN, resulting in only that entry being returned.
SCOPE_ONELEVEL : Use this value to indicate searching all entries one level under the base DN, but not including the base DN and not including any entries under that one level under the base DN.
SCOPE_SUBTREE : Use this value to indicate searching of all entries at all levels under and including the specified base DN.
The third line specifies the key name where the user name is stored.
For many search queries, use the following correct syntax:
[ [ "OU=Users,DC=northamerica,DC=acme,DC=com", "SCOPE_SUBTREE", "(sAMAccountName=%(user)s)" ], [ "OU=Users,DC=apac,DC=corp,DC=com", "SCOPE_SUBTREE", "(sAMAccountName=%(user)s)" ], [ "OU=Users,DC=emea,DC=corp,DC=com", "SCOPE_SUBTREE", "(sAMAccountName=%(user)s)" ] ]
In the LDAP Group Search text field, specify which groups to search and how to search them. In this example, use:
[ "dc=example,dc=com", "SCOPE_SUBTREE", "(objectClass=group)" ]
The first line specifies the BASE DN where the groups should be searched.
The second line specifies the scope and is the same as that for the user directive.
The third line specifies what the objectClass of a group object is in the LDAP that you are using.
Enter the user attributes in the LDAP User Attribute Map text field. In this example, use:
{ "first_name": "givenName", "last_name": "sn", "email": "mail" }
The earlier example retrieves users by surname from the key sn . You can use the same LDAP query for the user to decide what keys they are stored under.
Depending on the selected LDAP Group Type , different parameters are available in the LDAP Group Type Parameters field to account for this. LDAP_GROUP_TYPE_PARAMS is a dictionary that is converted by automation controller to kwargs and passed to the selected LDAP Group Type class. There are two common parameters used by any LDAP Group Type : name_attr and member_attr , where name_attr defaults to cn and member_attr defaults to member:
{"name_attr": "cn", "member_attr": "member"}
To find what parameters a specific LDAP Group Type expects, see the django_auth_ldap documentation for each class's init parameters.
Enter the user profile flags in the LDAP User Flags by Group text field. The following example uses the syntax to set LDAP users as "Superusers" and "Auditors":
{ "is_superuser": "cn=superusers,ou=groups,dc=website,dc=com", "is_system_auditor": "cn=auditors,ou=groups,dc=website,dc=com" }
For more information about completing the mapping fields, LDAP Organization Map and LDAP Team Map , see the LDAP Organization and team mapping section.
Click Save .
Note: Automation controller does not actively synchronize users; they are created during their initial login. To improve performance associated with LDAP authentication, see Preventing LDAP attributes from updating on each login .
24.1.1. LDAP organization and team mapping
You can control which users are placed into which automation controller organizations based on LDAP attributes (mapping between your organization administrators, users, and LDAP groups).
Keys are organization names. Organizations are created if not present. Values are dictionaries defining the options for each organization's membership. For each organization, you can specify what groups are automatically users of the organization and also what groups can administer the organization.
admins : none , true , false , string or list/tuple of strings:
If none , organization administrators are not updated based on LDAP values.
If true , all users in LDAP are automatically added as administrators of the organization.
If false , no LDAP users are automatically added as administrators of the organization.
If a string or list of strings, it specifies the group DNs; users are added as administrators of the organization if they match any of the specified groups.
remove_admins : true or false . Defaults to false . When true , a user who is not a member of the given group is removed from the organization's administrative list.
users : none , true , false , string or list/tuple of strings. The same rules apply as for administrators.
remove_users : true or false . Defaults to false . The same rules apply as for administrators.
Example
{ "LDAP Organization": { "admins": "cn=engineering_admins,ou=groups,dc=example,dc=com", "remove_admins": false, "users": [ "cn=engineering,ou=groups,dc=example,dc=com", "cn=sales,ou=groups,dc=example,dc=com", "cn=it,ou=groups,dc=example,dc=com" ], "remove_users": false }, "LDAP Organization 2": { "admins": [ "cn=Administrators,cn=Builtin,dc=example,dc=com" ], "remove_admins": false, "users": true, "remove_users": false } }
When mapping between users and LDAP groups, keys are team names and are created if not present. Values are dictionaries of options for each team's membership, where each can contain the following parameters:
organization : string . The name of the organization to which the team belongs. The team is created if the combination of organization and team name does not exist. The organization is first created if it does not exist.
users : none , true , false , string , or list/tuple of strings:
If none , team members are not updated.
If true or false , all LDAP users are added or removed as team members.
If a string or list of strings, it specifies the group DNs; the user is added as a team member if the user is a member of any of these groups.
remove : true or false . Defaults to false . When true , a user who is not a member of the given group is removed from the team.
Example
{ "LDAP Engineering": { "organization": "LDAP Organization", "users": "cn=engineering,ou=groups,dc=example,dc=com", "remove": true }, "LDAP IT": { "organization": "LDAP Organization", "users": "cn=it,ou=groups,dc=example,dc=com", "remove": true }, "LDAP Sales": { "organization": "LDAP Organization", "users": "cn=sales,ou=groups,dc=example,dc=com", "remove": true } }
24.1.2. Enabling logging for LDAP
To enable logging for LDAP, you must set the level to DEBUG in the Settings configuration window:
Procedure
From the navigation panel, select Settings .
Select Logging settings from the list of System options.
Click Edit .
Set the Logging Aggregator Level Threshold field to DEBUG .
Click Save .
24.1.3. Preventing LDAP attributes from updating on each login
By default, when an LDAP user authenticates, all user-related attributes are updated in the database on each login. In some environments, you may want to skip this operation due to performance issues.
To avoid this, you can disable the AUTH_LDAP_ALWAYS_UPDATE_USER option.
Warning: Set this option to false to not update the LDAP user's attributes. Attributes are only updated the first time the user is created.
Procedure
Create a custom file under /etc/tower/conf.d/custom-ldap.py with the following contents. If you have multiple nodes, execute it on all nodes:
AUTH_LDAP_ALWAYS_UPDATE_USER = False
Restart automation controller on all nodes:
automation-controller-service restart
With this option set to False , no changes to the LDAP user's attributes are pushed to automation controller. Note that new users are created and their attributes are pushed to the database on their first login. By default, an LDAP user gets their attributes updated in the database upon each login. For a playbook that runs multiple times with an LDAP credential, those queries can be avoided.
Verification
Check PostgreSQL for slow queries related to LDAP authentication.
Additional resources
For more information, see AUTH_LDAP_ALWAYS_UPDATE_USER in the Django documentation.
24.1.4. Importing a certificate authority in automation controller for LDAPS integration
You can authenticate to the automation controller server by using LDAP, but if you change to using LDAPS (LDAP over SSL/TLS) to authenticate, it fails with one of the following errors:
2020-04-28 17:25:36,184 WARNING django_auth_ldap Caught LDAPError while authenticating e079127: SERVER_DOWN({'info': 'error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed (unable to get issuer certificate)', 'desc': "Can't contact LDAP server"},)
2020-06-02 11:48:24,840 WARNING django_auth_ldap Caught LDAPError while authenticating reinernippes: SERVER_DOWN({'desc': "Can't contact LDAP server", 'info': 'error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed (certificate has expired)'},)
Note: By default, django_auth_ldap verifies SSL connections before starting an LDAPS transaction. When you receive a certificate verify failed error, this means that django_auth_ldap could not verify the certificate. When the SSL/TLS connection cannot be verified, the connection attempt is halted.
Procedure
To import an LDAP CA, run the following commands:
cp ldap_server-CA.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust
Note: Run these two commands on all automation controller nodes in a clustered setup.
24.1.5. Referrals
Active Directory uses "referrals" in case the queried object is not available in its database. This does not work correctly with the Django LDAP client, so it helps to disable referrals. Disable LDAP referrals by adding the following lines to your /etc/tower/conf.d/custom.py file:
AUTH_LDAP_GLOBAL_OPTIONS = { ldap.OPT_REFERRALS: False, }
24.1.6. Changing the default timeout for authentication
You can change the default length of time, in seconds, that your supplied token is valid in the Settings screen of the automation controller UI.
Procedure
From the navigation panel, select Settings .
Select Miscellaneous Authentication settings from the list of System options.
Click Edit .
Enter the timeout period in seconds in the Idle Time Force Log Out text field.
Click Save .
Note: If you access automation controller and have trouble logging in, clear your web browser's cache. In situations such as this, it is common for the authentication token to be cached during the browser session. You must clear it to continue. | [
"ldapsearch -x -H ldap://win -D \"CN=josie,CN=Users,DC=website,DC=com\" -b \"dc=website,dc=com\" -w Josie4Cloud",
"[ \"OU=Users,DC=website,DC=com\", \"SCOPE_SUBTREE\", \"(cn=%(user)s)\" ]",
"[ [ \"OU=Users,DC=northamerica,DC=acme,DC=com\", \"SCOPE_SUBTREE\", \"(sAMAccountName=%(user)s)\" ], [ \"OU=Users,DC=apac,DC=corp,DC=com\", \"SCOPE_SUBTREE\", \"(sAMAccountName=%(user)s)\" ], [ \"OU=Users,DC=emea,DC=corp,DC=com\", \"SCOPE_SUBTREE\", \"(sAMAccountName=%(user)s)\" ] ]",
"[ \"dc=example,dc=com\", \"SCOPE_SUBTREE\", \"(objectClass=group)\" ]",
"{ \"first_name\": \"givenName\", \"last_name\": \"sn\", \"email\": \"mail\" }",
"{\"name_attr\": \"cn\", \"member_attr\": \"member\"}",
"{ \"is_superuser\": \"cn=superusers,ou=groups,dc=website,dc=com\", \"is_system_auditor\": \"cn=auditors,ou=groups,dc=website,dc=com\" }",
"{ \"LDAP Organization\": { \"admins\": \"cn=engineering_admins,ou=groups,dc=example,dc=com\", \"remove_admins\": false, \"users\": [ \"cn=engineering,ou=groups,dc=example,dc=com\", \"cn=sales,ou=groups,dc=example,dc=com\", \"cn=it,ou=groups,dc=example,dc=com\" ], \"remove_users\": false }, \"LDAP Organization 2\": { \"admins\": [ \"cn=Administrators,cn=Builtin,dc=example,dc=com\" ], \"remove_admins\": false, \"users\": true, \"remove_users\": false } }",
"{ \"LDAP Engineering\": { \"organization\": \"LDAP Organization\", \"users\": \"cn=engineering,ou=groups,dc=example,dc=com\", \"remove\": true }, \"LDAP IT\": { \"organization\": \"LDAP Organization\", \"users\": \"cn=it,ou=groups,dc=example,dc=com\", \"remove\": true }, \"LDAP Sales\": { \"organization\": \"LDAP Organization\", \"users\": \"cn=sales,ou=groups,dc=example,dc=com\", \"remove\": true } }",
"AUTH_LDAP_ALWAYS_UPDATE_USER = False",
"automation-controller-service restart",
"2020-04-28 17:25:36,184 WARNING django_auth_ldap Caught LDAPError while authenticating e079127: SERVER_DOWN({'info': 'error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed (unable to get issuer certificate)', 'desc': \"Can't contact LDAP server\"},)",
"2020-06-02 11:48:24,840 WARNING django_auth_ldap Caught LDAPError while authenticating reinernippes: SERVER_DOWN({'desc': \"Can't contact LDAP server\", 'info': 'error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed (certificate has expired)'},)",
"cp ldap_server-CA.crt /etc/pki/ca-trust/source/anchors/",
"update-ca-trust",
"AUTH_LDAP_GLOBAL_OPTIONS = { ldap.OPT_REFERRALS: False, }"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/controller-LDAP-authentication |
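The two LDAP adjustments above (disabling per-login attribute updates and disabling referral chasing) can live in a single settings drop-in. The following is a minimal sketch, not an authoritative configuration: the file name is illustrative, and the import ldap line is an assumption added here because ldap.OPT_REFERRALS is a python-ldap constant that must be imported before it can be referenced.

# /etc/tower/conf.d/custom-ldap.py -- create this file on every controller node
import ldap

# Do not update the LDAP user's attributes in the database on every login
AUTH_LDAP_ALWAYS_UPDATE_USER = False

# Disable Active Directory referral chasing, which the Django LDAP client mishandles
AUTH_LDAP_GLOBAL_OPTIONS = {
    ldap.OPT_REFERRALS: False,
}

After creating the file, restart the controller on all nodes with automation-controller-service restart, as in the procedures above.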
Chapter 6. Support | Chapter 6. Support Red Hat and Microsoft are committed to providing excellent support for .NET and are working together to resolve any problems that occur on Red Hat supported platforms. At a high level, Red Hat supports the installation, configuration, and running of the .NET component in Red Hat Enterprise Linux (RHEL). Red Hat can also provide "commercially reasonable" support for issues we can help with, for example, NuGet access problems, permissions issues, firewalls, and application questions. If the issue is a defect or vulnerability in .NET, we actively work with Microsoft to resolve it. .NET 9.0 is supported on RHEL 8.10, RHEL 9.5, RHEL 10.0, and Red Hat OpenShift Container Platform versions 4.0 and later. See .NET Core Life Cycle for information about the .NET support policy 6.1. Contact options There are a couple of ways you can get support, depending on how you are using .NET. If you are using .NET on-premises, you can contact either Red Hat Support or Microsoft directly. If you are using .NET in Microsoft Azure, you can contact either Red Hat Support or Azure Support to receive Integrated Support. Integrated Support is a collaborative support agreement between Red Hat and Microsoft. Customers using Red Hat products in Microsoft Azure are mutual customers, so both companies are united to provide the best troubleshooting and support experience possible. If you are using .NET on IBM Z, IBM LinuxONE, or IBM Power, you can contact Red Hat Support . If the Red Hat Support Engineer assigned to your case needs assistance from IBM, the Red Hat Support Engineer will collaborate with IBM directly without any action required from you. 6.2. Frequently asked questions Here are four of the most common support questions for Integrated Support. When do I access Integrated Support? You can engage Red Hat Support directly. If the Red Hat Support Engineer assigned to your case needs assistance from Microsoft, the Red Hat Support Engineer will collaborate with Microsoft directly without any action required from you. Likewise on the Microsoft side, they have a process for directly collaborating with Red Hat Support Engineers. What happens after I file a support case? Once the Red Hat support case has been created, a Red Hat Support Engineer will be assigned to the case and begin collaborating on the issue with you and your Microsoft Support Engineer. You should expect a response to the issue based on Red Hat's Production Support Terms of Service . What if I need further assistance? Contact Red Hat Support for assistance in creating your case or with any questions related to this process. You can view any of your open cases here. How do I engage Microsoft for support for an Azure platform issue? If you have support from Microsoft, you can open a case using whatever process you typically would follow. If you do not have support with Microsoft, you can always get support from Microsoft Support . 6.3. Additional support resources The Resources page at Red Hat Developers provides a wealth of information, including: Getting started documents Knowledgebase articles and solutions Blog posts .NET documentation is hosted on a Microsoft website. Here are some additional topics to explore: .NET ASP.NET Core C# F# Visual Basic You can also see more support policy information at Red Hat and Microsoft Azure Certified Cloud & Service Provider Support Policies . 
| null | https://docs.redhat.com/en/documentation/net/9.0/html/release_notes_for_.net_9.0_rpm_packages/support_release-notes-for-dotnet-rpms |
Chapter 4. Installing and configuring the Nexus Repository Manager plugin | Chapter 4. Installing and configuring the Nexus Repository Manager plugin The Nexus Repository Manager plugin displays the information about your build artifacts in your Developer Hub application. The build artifacts are available in the Nexus Repository Manager. Important The Nexus Repository Manager plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page. 4.1. Installation The Nexus Repository Manager plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-nexus-repository-manager disabled: false 4.2. Configuration Set the proxy to the desired Nexus Repository Manager server in the app-config.yaml file as follows: proxy: '/nexus-repository-manager': target: 'https://<NEXUS_REPOSITORY_MANAGER_URL>' headers: X-Requested-With: 'XMLHttpRequest' # Uncomment the following line to access a private Nexus Repository Manager using a token # Authorization: 'Bearer <YOUR TOKEN>' changeOrigin: true # Change to "false" in case of using self hosted Nexus Repository Manager instance with a self-signed certificate secure: true Optional: Change the base URL of Nexus Repository Manager proxy as follows: nexusRepositoryManager: # default path is `/nexus-repository-manager` proxyPath: /custom-path Optional: Enable the following experimental annotations: nexusRepositoryManager: experimentalAnnotations: true Annotate your entity using the following annotations: metadata: annotations: # insert the chosen annotations here # example nexus-repository-manager/docker.image-name: `<ORGANIZATION>/<REPOSITORY>`, | [
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-nexus-repository-manager disabled: false",
"proxy: '/nexus-repository-manager': target: 'https://<NEXUS_REPOSITORY_MANAGER_URL>' headers: X-Requested-With: 'XMLHttpRequest' # Uncomment the following line to access a private Nexus Repository Manager using a token # Authorization: 'Bearer <YOUR TOKEN>' changeOrigin: true # Change to \"false\" in case of using self hosted Nexus Repository Manager instance with a self-signed certificate secure: true",
"nexusRepositoryManager: # default path is `/nexus-repository-manager` proxyPath: /custom-path",
"nexusRepositoryManager: experimentalAnnotations: true",
"metadata: annotations: # insert the chosen annotations here # example nexus-repository-manager/docker.image-name: `<ORGANIZATION>/<REPOSITORY>`,"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/configuring_dynamic_plugins/installing-configuring-nexus-plugin |
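To show where the annotation from the last snippet ends up, here is a sketch of a complete catalog entity. The component name, owner, and image name are placeholder values, and the standard Backstage entity fields (apiVersion, kind, spec) are assumed boilerplate rather than taken from the text above:

# catalog-info.yaml -- hypothetical component annotated for the Nexus Repository Manager plugin
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: example-service
  annotations:
    # Docker image name as stored in Nexus (placeholder organization/repository)
    nexus-repository-manager/docker.image-name: my-org/example-service
spec:
  type: service
  lifecycle: production
  owner: team-a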
6.3. Configure Maven to Use the Online Repositories | 6.3. Configure Maven to Use the Online Repositories The online repositories required for Red Hat JBoss Data Virtualization are located at https://maven.repository.redhat.com/ga/ . (There is also an early access repository at https://maven.repository.redhat.com/earlyaccess/all/ .) If you provided the location of Maven's settings.xml file during installation, Maven is already configured to use the online repositories. Procedure 6.2. Configuring Maven to Use the Online Repositories Add entries for the online repositories to Maven's settings.xml file: <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <profiles> <!-- Profile with online repositories required by Data Virtualization --> <profile> <id>dv-online-profile</id> <repositories> <repository> <id>jboss-ga-repository</id> <url>http://maven.repository.redhat.com/techpreview/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-ga-plugin-repository</id> <url>http://maven.repository.redhat.com/techpreview/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <!-- Activation of the Data Virtualization profile --> <activeProfile>dv-online-profile</activeProfile> </activeProfiles> </settings> If you modified the settings.xml file while JBoss Developer Studio was running, you must refresh Maven settings in the IDE. From the menu, choose Window Preferences . In the Preferences Window, expand Maven and choose User Settings . Click the Update Settings button to refresh the Maven user settings in JBoss Developer Studio. Figure 6.1. Update Maven User Settings If your cached local Maven repository contains outdated artifacts, you may encounter one of the following Maven errors when you build or deploy your project: Missing artifact ARTIFACT_NAME [ERROR] Failed to execute goal on project PROJECT_NAME ; Could not resolve dependencies for PROJECT_NAME To resolve the issue, delete the cached local repository - the ~/.m2/repository/ directory on Linux or the %SystemDrive% \Users\ USERNAME \.m2\repository\ directory on Windows. This will force Maven to download correct versions of required artifacts during the build. | [
"<settings xmlns=\"http://maven.apache.org/SETTINGS/1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd\"> <profiles> <!-- Profile with online repositories required by Data Virtualization --> <profile> <id>dv-online-profile</id> <repositories> <repository> <id>jboss-ga-repository</id> <url>http://maven.repository.redhat.com/techpreview/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-ga-plugin-repository</id> <url>http://maven.repository.redhat.com/techpreview/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <!-- Activation of the Data Virtualization profile --> <activeProfile>dv-online-profile</activeProfile> </activeProfiles> </settings>"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/configure_maven_to_use_the_online_repositories |
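A quick way to apply the cached-repository fix described above on Linux is sketched below, assuming the default local repository location:

# remove the stale local cache, then rebuild to force fresh downloads
rm -rf ~/.m2/repository
mvn clean install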
Chapter 1. Installing an on-premise cluster using the Assisted Installer | Chapter 1. Installing an on-premise cluster using the Assisted Installer You can install OpenShift Container Platform on on-premise hardware or on-premise VMs by using the Assisted Installer. Installing OpenShift Container Platform by using the Assisted Installer supports x86_64 , AArch64 , ppc64le , and s390x CPU architectures. 1.1. Using the Assisted Installer The Assisted Installer is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console . The Assisted Installer supports the various deployment platforms with a focus on bare metal, Nutanix, and vSphere infrastructures. The Assisted Installer provides installation functionality as a service. This software-as-a-service (SaaS) approach has the following advantages: Web user interface: The web user interface performs cluster installation without the user having to create the installation configuration files manually. No bootstrap node: A bootstrap node is not required when installing with the Assisted Installer. The bootstrapping process executes on a node within the cluster. Hosting: The Assisted Installer hosts: Ignition files The installation configuration A discovery ISO The installer Streamlined installation workflow: Deployment does not require in-depth knowledge of OpenShift Container Platform. The Assisted Installer provides reasonable defaults and provides the installer as a service, which: Eliminates the need to install and run the OpenShift Container Platform installer locally. Ensures the latest version of the installer up to the latest tested z-stream releases. Older versions remain available, if needed. Enables building automation by using the API without the need to run the OpenShift Container Platform installer locally. Advanced networking: The Assisted Installer supports IPv4 and IPv6 networking, as well as dual-stack networking with the OVN-Kubernetes network plugin, NMState-based static IP addressing, and an HTTP/S proxy. OVN-Kubernetes is the default Container Network Interface (CNI) for OpenShift Container Platform 4.12 and later releases. OpenShift SDN is supported up to OpenShift Container Platform 4.14, but is not supported for OpenShift Container Platform 4.15 and later releases. Preinstallation validation: The Assisted Installer validates the configuration before installation to ensure a high probability of success. The validation process includes the following checks: Ensuring network connectivity Ensuring sufficient network bandwidth Ensuring connectivity to the registry Ensuring time synchronization between cluster nodes Verifying that the cluster nodes meet the minimum hardware requirements Validating the installation configuration parameters REST API: The Assisted Installer has a REST API, enabling automation. The Assisted Installer supports installing OpenShift Container Platform on premises in a connected environment, including with an optional HTTP/S proxy. It can install the following: Highly available OpenShift Container Platform or single-node OpenShift (SNO) OpenShift Container Platform on bare metal, Nutanix, or vSphere with full platform integration, or other virtualization platforms without integration Optional: OpenShift Virtualization, multicluster engine, Logical Volume Manager (LVM) Storage, and OpenShift Data Foundation Note Currently, OpenShift Virtualization and LVM Storage are not supported on IBM Z(R) ( s390x ) architecture. 
The user interface provides an intuitive interactive workflow for cases where automation does not exist or is not required. Users may also automate installations using the REST API. See the Assisted Installer for OpenShift Container Platform documentation for details. 1.2. API support for the Assisted Installer Supported APIs for the Assisted Installer are stable for a minimum of three months from the announcement of deprecation. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on-premise_with_assisted_installer/installing-on-prem-assisted
Chapter 30. Preventing resource overuse by using mutex | Chapter 30. Preventing resource overuse by using mutex Mutual exclusion (mutex) algorithms are used to prevent overuse of common resources. 30.1. Mutex options Mutual exclusion (mutex) algorithms are used to prevent processes from simultaneously using a common resource. A fast user-space mutex (futex) is a tool that allows a user-space thread to claim a mutex without requiring a context switch to kernel space, provided the mutex is not already held by another thread. When you initialize a pthread_mutex_t object with the standard attributes, a private, non-recursive, non-robust, and non-priority inheritance-capable mutex is created. This object does not provide any of the benefits provided by the pthreads API and the RHEL for Real Time kernel. To benefit from the pthreads API and the RHEL for Real Time kernel, create a pthread_mutexattr_t object. This object stores the attributes defined for the futex. Note The terms futex and mutex are used to describe POSIX thread ( pthread ) mutex constructs. 30.2. Creating a mutex attribute object To define any additional capabilities for the mutex , create a pthread_mutexattr_t object. This object stores the defined attributes for the futex. This is a basic safety procedure that you must always perform. Procedure Create the mutex attribute object using one of the following: pthread_mutex_t( my_mutex ) ; pthread_mutexattr_t( &my_mutex_attr ) ; pthread_mutexattr_init( &my_mutex_attr ) ; For more information about advanced mutex attributes, see Advanced mutex attributes . 30.3. Creating a mutex with standard attributes When you initialize a pthread_mutex_t object with the standard attributes, a private, non-recursive, non-robust, and non-priority inheritance-capable mutex is created. Procedure Create a mutex object under pthreads using one of the following: pthread_mutex_t( my_mutex ); pthread_mutex_init( &my_mutex , &my_mutex_attr ); where &my_mutex_attr is a mutex attribute object. 30.4. Advanced mutex attributes The following advanced mutex attributes can be stored in a mutex attribute object: Mutex attributes Shared and private mutexes Shared mutexes can be used between processes; however, they can create a lot more overhead. pthread_mutexattr_setpshared(&my_mutex_attr, PTHREAD_PROCESS_SHARED); Real-time priority inheritance You can avoid priority inversion problems by using priority inheritance. pthread_mutexattr_setprotocol(&my_mutex_attr, PTHREAD_PRIO_INHERIT); Robust mutexes When a pthread dies, robust mutexes under the pthread are released. However, this comes with a high overhead cost. _NP in this string indicates that this option is non-POSIX or not portable. pthread_mutexattr_setrobust_np(&my_mutex_attr, PTHREAD_MUTEX_ROBUST_NP); Mutex initialization Initialize the mutex with the attribute object; note that the mutex is the first argument and the attribute object the second. pthread_mutex_init(&my_mutex, &my_mutex_attr); 30.5. Cleaning up a mutex attribute object After the mutex has been created using the mutex attribute object, you can keep the attribute object to initialize more mutexes of the same type, or you can clean it up. The mutex is not affected in either case. Procedure Clean up the attribute object using the pthread_mutexattr_destroy() function: The mutex now operates as a regular pthread_mutex and can be locked, unlocked, and destroyed as normal. 30.6. Additional resources futex(7) , pthread_mutex_destroy(P) , pthread_mutexattr_setprotocol(3p) , and pthread_mutexattr_setprioceiling(3p) man pages on your system
"pthread_mutexattr_destroy( &my_mutex_attr );"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_preventing-resource-overuse-by-using-mutex_optimizing-RHEL9-for-real-time-for-low-latency-operation |
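Pulling the steps of this chapter together, the following is a minimal sketch in C of creating a priority-inheritance mutex and then discarding the attribute object. Error checking is omitted and the function name is illustrative; link with -lpthread:

#include <pthread.h>

static pthread_mutex_t my_mutex;

void init_pi_mutex(void)
{
    pthread_mutexattr_t my_mutex_attr;

    pthread_mutexattr_init(&my_mutex_attr);
    /* avoid priority inversion: the mutex holder inherits a waiter's priority */
    pthread_mutexattr_setprotocol(&my_mutex_attr, PTHREAD_PRIO_INHERIT);

    /* mutex first, attribute object second */
    pthread_mutex_init(&my_mutex, &my_mutex_attr);

    /* the attribute object is no longer needed once the mutex exists */
    pthread_mutexattr_destroy(&my_mutex_attr);
}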
Chapter 113. XQuery | Chapter 113. XQuery Camel supports XQuery to allow an Expression or Predicate to be used in the DSL . For example, you could use XQuery to create a predicate in a Message Filter or as an expression for a Recipient List . 113.1. Dependencies To use XQuery in your Camel routes, you need to add a dependency on camel-saxon , which implements the XQuery language. When using xquery with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-saxon-starter</artifactId> </dependency> 113.2. XQuery Language options The XQuery language supports 4 options, which are listed below. Name Default Java Type Description type String Sets the class name of the result type (type from output) The default result type is NodeSet. headerName String Name of header to use as input, instead of the message body. configurationRef String Reference to a saxon configuration instance in the registry to use for xquery (requires camel-saxon). This may be needed to add custom functions to a saxon configuration, so these custom functions can be used in xquery expressions. trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 113.3. Variables The message body will be set as the contextItem . And the following variables are available as well: Variable Type Description exchange Exchange The current Exchange in.body Object The message body out.body Object deprecated The OUT message body (if any) in.headers.* Object You can access the value of exchange.in.headers with key foo by using the variable which name is in.headers.foo out.headers.* Object deprecated You can access the value of exchange.out.headers with key foo by using the variable which name is out.headers.foo variable key name Object Any exchange.properties and exchange.in.headers and any additional parameters set using setParameters(Map) . These parameters are added with their own key name; for instance, if there is an IN header with the key name foo then it is added as foo . 113.4. Example from("queue:foo") .filter().xquery("//foo") .to("queue:bar") You can also use functions inside your query, in which case you need an explicit type conversion (or you will get an org.w3c.dom.DOMException: HIERARCHY_REQUEST_ERR ). You need to pass in the expected output type of the function. For example, the concat function returns a String , which is done as shown: from("direct:start") .recipientList().xquery("concat('mock:foo.', /person/@city)", String.class); And in XML DSL: <route> <from uri="direct:start"/> <recipientList> <xquery type="java.lang.String">concat('mock:foo.', /person/@city)</xquery> </recipientList> </route> 113.4.1. Using namespaces If you have a standard set of namespaces you wish to work with and wish to share them across many XQuery expressions, you can use the org.apache.camel.support.builder.Namespaces when using Java DSL as shown: Namespaces ns = new Namespaces("c", "http://acme.com/cheese"); from("direct:start") .filter().xquery("/c:person[@name='James']", ns) .to("mock:result"); Notice how the namespaces are provided to xquery with the ns variable, which is passed in as the second parameter. Each namespace is a key=value pair, where the prefix is the key.
In the XQuery expression, the namespace is then referenced by its prefix, for example: /c:person[@name='James'] The namespace builder supports adding multiple namespaces as shown: Namespaces ns = new Namespaces("c", "http://acme.com/cheese") .add("w", "http://acme.com/wine") .add("b", "http://acme.com/beer"); When using namespaces in XML DSL, it is different, as you set up the namespaces in the XML root tag (or one of the camelContext , routes , route tags). In the XML example below we use Spring XML where the namespace is declared in the root tag beans , in the line with xmlns:foo="http://example.com/person" : <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:foo="http://example.com/person" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <camelContext id="camel" xmlns="http://activemq.apache.org/camel/schema/spring"> <route> <from uri="activemq:MyQueue"/> <filter> <xquery>/foo:person[@name='James']</xquery> <to uri="mqseries:SomeOtherQueue"/> </filter> </route> </camelContext> </beans> This namespace uses foo as prefix, so the <xquery> expression uses /foo: to use this namespace. 113.5. Using XQuery as transformation We can do a message translation using transform or setBody in the route, as shown below: from("direct:start"). transform().xquery("/people/person"); Notice that xquery will use DOMResult by default, so if we want to grab the value of the person node using text() , we need to tell XQuery to use String as the result type, as shown: from("direct:start"). transform().xquery("/people/person/text()", String.class); If you want to use Camel variables like headers, you have to explicitly declare them in the XQuery expression. <transform> <xquery> declare variable $in.headers.foo external; element item {$in.headers.foo} </xquery> </transform> 113.6. Loading script from external resource You can externalize the script and have Camel load it from a resource such as "classpath:" , "file:" , or "http:" . This is done using the following syntax: "resource:scheme:location" , e.g. to refer to a file on the classpath you can do: .setHeader("myHeader").xquery("resource:classpath:myxquery.txt", String.class) 113.7. Learning XQuery XQuery is a very powerful language for querying, searching, sorting and returning XML. For help learning XQuery, try these tutorials: Mike Kay's XQuery Primer The W3Schools XQuery Tutorial 113.8. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.xquery.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.xquery.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occurred while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler.
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.xquery.configuration To use a custom Saxon configuration. The option is a net.sf.saxon.Configuration type. Configuration camel.component.xquery.configuration-properties To set custom Saxon configuration properties. Map camel.component.xquery.enabled Whether to enable auto configuration of the xquery component. This is enabled by default. Boolean camel.component.xquery.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.xquery.module-u-r-i-resolver To use the custom ModuleURIResolver. The option is a net.sf.saxon.lib.ModuleURIResolver type. ModuleURIResolver camel.language.xquery.configuration-ref Reference to a saxon configuration instance in the registry to use for xquery (requires camel-saxon). This may be needed to add custom functions to a saxon configuration, so these custom functions can be used in xquery expressions. String camel.language.xquery.enabled Whether to enable auto configuration of the xquery language. This is enabled by default. Boolean camel.language.xquery.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.xquery.type Sets the class name of the result type (type from output) The default result type is NodeSet. String | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-saxon-starter</artifactId> </dependency>",
"from(\"queue:foo\") .filter().xquery(\"//foo\") .to(\"queue:bar\")",
"from(\"direct:start\") .recipientList().xquery(\"concat('mock:foo.', /person/@city)\", String.class);",
"<route> <from uri=\"direct:start\"/> <recipientList> <xquery type=\"java.lang.String\">concat('mock:foo.', /person/@city</xquery> </recipientList> </route>",
"Namespaces ns = new Namespaces(\"c\", \"http://acme.com/cheese\"); from(\"direct:start\") .filter().xquery(\"/c:person[@name='James']\", ns) .to(\"mock:result\");",
"/c:person[@name='James']",
"Namespaces ns = new Namespaces(\"c\", \"http://acme.com/cheese\") .add(\"w\", \"http://acme.com/wine\") .add(\"b\", \"http://acme.com/beer\");",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:foo=\"http://example.com/person\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext id=\"camel\" xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <route> <from uri=\"activemq:MyQueue\"/> <filter> <xquery>/foo:person[@name='James']</xquery> <to uri=\"mqseries:SomeOtherQueue\"/> </filter> </route> </camelContext> </beans>",
"from(\"direct:start\"). transform().xquery(\"/people/person\");",
"from(\"direct:start\"). transform().xquery(\"/people/person/text()\", String.class);",
"<transform> <xquery> declare variable USDin.headers.foo external; element item {USDin.headers.foo} </xquery> </transform>",
".setHeader(\"myHeader\").xquery(\"resource:classpath:myxquery.txt\", String.class)"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-saxon-language-starter |
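The header-variable example in section 113.5 is shown only in XML DSL. A Java DSL equivalent is sketched below; it assumes camel-saxon is on the classpath as described in the Dependencies section, and the endpoint URIs are placeholders:

from("direct:start")
    // declare the Camel header as an external XQuery variable, then use it
    .transform().xquery(
        "declare variable $in.headers.foo external; element item {$in.headers.foo}",
        String.class)
    .to("mock:result");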
9.4. Configuration | 9.4. Configuration Now that you have developed a custom handler class, package the implementation in a JAR file, copy this JAR file into the modules directory, and edit the module.xml file in the same directory to add the JAR as a resource. Then edit the standalone.xml or domain.xml file, locate the "logging" subsystem, and add the following entries. Change the above configuration accordingly for the AuditHandler if you are working with Audit Messages.
"<resource-root path=\"{your-jar-name}.jar\" />",
"<custom-handler name=\"COMMAND\" class=\"org.teiid.logging.CommandHandler\" module=\"org.jboss.teiid\"> </custom-handler> ..other entries <logger category=\"org.teiid.COMMAND_LOG\"> <level name=\"DEBUG\"/> <handlers> <handler name=\"COMMAND\"/> </handlers> </logger>"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/configuration11 |
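For orientation, the resource-root entry above goes inside the <resources> element of the Teiid module's module.xml. A sketch follows in which the JAR name is a placeholder and the surrounding module declaration is assumed rather than quoted from the product files:

<module xmlns="urn:jboss:module:1.0" name="org.jboss.teiid">
    <resources>
        <!-- existing resource-root entries remain unchanged -->
        <resource-root path="my-command-logger.jar" />
    </resources>
    <!-- dependencies and other elements unchanged -->
</module>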
Chapter 4. Deploying the central controllers | Chapter 4. Deploying the central controllers Deploy the central controller cluster in a similar way to a typical overcloud deployment. This cluster does not require any Compute nodes, so you can set the Compute count to 0 to override the default of 1 . The central controller has particular storage and Oslo configuration requirements. Use the following procedure to address these requirements. Procedure Create a file called central/overrides.yaml with settings similar to the following: ComputeCount: 0 is an optional parameter to prevent Compute nodes from being deployed with the central Controller nodes. GlanceBackend: swift uses Object Storage (swift) as the Image Service (glance) back end. Red Hat recommends that the Image service does not use Ceph in this configuration until multi-backend glance support is available. The resulting configuration interacts with the distributed compute nodes (DCNs) in the following ways: The Image service on the DCN creates a cached copy of the image it receives from the central Object Storage back end. The Image service uses HTTP to copy the image from Object Storage to the local disk cache. Each DCN has its own Block Storage (cinder) volume service. This means that users can schedule Block Storage volumes from the central node into different availability zones, because the Ceph volume service on the DCN uses the local Ceph cluster. Note The central Controller node must be able to connect to the distributed compute node (DCN) site. The central Controller node can use a routed layer 3 connection. Deploy the central Controller node. For example, you can use a deploy.sh file with the following contents:
"parameter_defaults: NtpServer: - 0.pool.ntp.org - 1.pool.ntp.org ControllerCount: 3 ComputeCount: 0 OvercloudControlFlavor: baremetal OvercloudComputeFlavor: baremetal ControllerSchedulerHints: 'capabilities:node': '0-controller-%index%' GlanceBackend: swift",
"#!/bin/bash STACK=central source ~/stackrc time openstack overcloud deploy --stack USDSTACK --templates /usr/share/openstack-tripleo-heat-templates/ -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml -e ~/containers-env-file.yaml -e ~/central/overrides.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_distributed_compute_nodes_with_separate_heat_stacks/proc_deploying-central-controllers |
Planning Identity Management | Planning Identity Management Red Hat Enterprise Linux 8 Planning the infrastructure and service integration of an IdM environment Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/planning_identity_management/index |
11.3. Removing Swap Space | 11.3. Removing Swap Space Sometimes it can be prudent to reduce swap space after installation. For example, say you downgraded the amount of RAM in your system from 1 GB to 512 MB, but there is 2 GB of swap space still assigned. It might be advantageous to reduce the amount of swap space to 1 GB, since the larger 2 GB could be wasting disk space. You have three options: remove an entire LVM2 logical volume used for swap, remove a swap file, or reduce swap space on an existing LVM2 logical volume. 11.3.1. Reducing Swap on an LVM2 Logical Volume To reduce an LVM2 swap logical volume (assuming /dev/VolGroup00/LogVol01 is the volume you want to reduce): Disable swapping for the associated logical volume: Reduce the LVM2 logical volume by 512 MB: Format the new swap space: Enable the reduced logical volume: Test that the logical volume has been reduced properly:
"swapoff -v /dev/VolGroup00/LogVol01",
"lvm lvreduce /dev/VolGroup00/LogVol01 -L -512M",
"mkswap /dev/VolGroup00/LogVol01",
"swapon -va",
"cat /proc/swaps # free"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Swap_Space-Removing_Swap_Space |
Chapter 4. Storage classes and storage pools | Chapter 4. Storage classes and storage pools The OpenShift Container Storage operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create a custom storage class if you want the storage class to have a different behavior. You can create multiple storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. Note Multiple storage classes and multiple pools are not supported for external mode OpenShift Container Storage clusters. Note With a minimal cluster of a single device set, only two new storage classes can be created. Every storage cluster expansion allows two new additional storage classes. 4.1. Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and OpenShift Container Storage cluster is in Ready state. Procedure Click Storage Storage Classes . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy. By default, Delete is selected. Select RBD Provisioner which is the plugin used for provisioning the persistent volumes. Select an existing Storage Pool from the list or create a new pool. Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. (Optional) Select Enable Encryption checkbox. Click Create to create the storage class. 4.2. Creating a storage class for persistent volume encryption Use the following procedure to create an encryption enabled storage class using an external key management system (KMS) for persistent volume encryption. Persistent volume encryption is only available for RBD PVs. Prerequisites The OpenShift Container Storage cluster is in Ready state. On the external key management system (KMS), Ensure that a policy with a token exists and the key value backend path in Vault is enabled. See Enabling key value and policy in Vault . Ensure that you are using signed certificates on your Vault servers. Create a secret in the tenant's namespace as follows: On the OpenShift Container Platform web console, navigate to Workloads Secrets . Click Create Key/value secret . Enter Secret Name as ceph-csi-kms-token . Enter Key as token . Enter Value . It is the token from Vault. You can either click Browse to select and upload the file containing the token or enter the token directly in the text box. Click Create . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. Procedure Navigate to Storage Storage Classes . Click Create Storage Class . Enter the storage class Name and Description . 
Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com , which is the plugin used for provisioning the persistent volumes. Select Storage Pool where the volume data will be stored from the list or create a new pool. Select the Enable Encryption checkbox. Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https://<hostname or ip>'), and Port number . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration. Enter the key value secret path in Backend Path that is dedicated and unique to OpenShift Container Storage. (Optional) Enter TLS Server Name and Vault Enterprise Namespace . Provide CA Certificate , Client Certificate and Client Private Key by uploading the respective PEM encoded certificate file. Click Save . Click Connect . Review external key management service Connection details. To modify the information, click Change connection details and edit the fields. Click Create . Edit the configmap to add the VAULT_BACKEND parameter if the Hashicorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note VAULT_BACKEND is an optional parameter that is added to the configmap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage Storage Classes . Click the Storage class name YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the configmap. Click Action menu (...) Edit ConfigMap . Add the VAULT_BACKEND parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2 as the VAULT_BACKEND parameter. Example: Click Save . Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . Next steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims .
"encryptionKMSID: 1-vault",
"kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: >- { \"KMS_PROVIDER\": \"vaulttokens\", \"KMS_SERVICE_NAME\": \"vault\", [...] \"VAULT_BACKEND\": \"kv-v2\" }"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_and_managing_openshift_container_storage_using_red_hat_openstack_platform/storage-classes-and-storage-pools_osp |
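For reference, the storage class produced by the wizard carries the encryption settings as parameters. The sketch below is illustrative only: the class name and pool name are placeholders, the encryptionKMSID matches the example above, and the secret and clusterID parameters that a real ceph-csi storage class also carries are omitted for brevity:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-rbd            # illustrative name
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
parameters:
  encrypted: "true"
  encryptionKMSID: 1-vault       # must match the entry in csi-kms-connection-details
  pool: my-storage-pool          # pool selected in the wizard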
14.5. Reusing Virtual Databases | 14.5. Reusing Virtual Databases You can treat your deployed VDB as just another database where the database category is your VDB name and each visible model in your VDB is treated as a schema. This is accomplished via an import-vdb XML element in the vdb.xml definition. By allowing VDBs to reference other VDBs, users can create reusable database components and reduce the amount of modeling required to create complex transformations. This sample vdb.xml file highlights the import-vdb element and the corresponding import-vdb-reference within the view model's model element: Teiid Designer exposes this capability by allowing users to import metadata from deployed VDBs via the JDBC Import option. Through this import, relational VDB source models are created which structurally represent the Catalog (VDB), Schema (Model) and Tables in the Virtual Database. When dealing with these VDB source models there are some limitations or rules, namely: VDB source models are read-only VDB source model name is determined by the deployed model name (schema) from the VDB it was imported from Model names have to be unique within a model project VDB source models have to be imported/created in a project different from the project used to create and deploy the Reuse VDB The JDBC Import Wizard will restrict your options to comply with these rules.
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <vdb version=\"1\" name=\"PartssupplierViewsVDB\"> <property value=\"false\" name=\"preview\"/> <import-vdb import-data-policies=\"false\" version=\"1\" name=\"PartssupplierSourcesVDB\"/> <model visible=\"true\" type=\"VIRTUAL\" name=\"PartsViewModel\" path=\"/PartssupplierProject/PartsViewModel.xmi\"> <property value=\"1623826484\" name=\"checksum\"/> <property value=\"Relational\" name=\"modelClass\"/> <property value=\"false\" name=\"builtIn\"/> <property value=\"655076658.INDEX\" name=\"indexName\"/> <property value=\"PartssupplierSourcesVDB\" name=\"import-vdb-reference\"/> </model> </vdb>"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/reusing_virtual_databases |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.2/making-open-source-more-inclusive |
Chapter 4. Configuring Static Routes and the Default Gateway | Chapter 4. Configuring Static Routes and the Default Gateway This chapter covers the configuration of static routes and the default gateway. 4.1. Introduction to Understanding Routing and Gateway Routing is a mechanism that allows a system to find the network path to another system. Routing is often handled by devices on the network dedicated to routing (although any device can be configured to perform routing). Therefore, it is often not necessary to configure static routes on Red Hat Enterprise Linux servers or clients. Exceptions include traffic that must pass through an encrypted VPN tunnel or traffic that should take a specific route for reasons of cost or security. A host's routing table will be automatically populated with routes to directly connected networks. The routes are added when the network interfaces are " up " . In order to reach a remote network or host, the system is given the address of a gateway to which traffic should be sent. When a host's interface is configured by DHCP , an address of a gateway that leads to an upstream network or the Internet is usually assigned. This gateway is usually referred to as the default gateway as it is the gateway to use if no better route is known to the system (and present in the routing table). Network administrators often use the first or last host IP address in the network as the gateway address; for example, 192.168.10.1 or 192.168.10.254 . This should not be confused with the address that represents the network itself, in this example 192.168.10.0 , or the subnet's broadcast address, in this example 192.168.10.255 . The default gateway is traditionally a network router. The default gateway is for any and all traffic which is not destined for the local network and for which no preferred route is specified in the routing table. Note To expand your expertise, you might also be interested in the Red Hat System Administration I (RH124) training course. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/ch-configuring_static_routes_and_the_default_gateway
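To see the automatically populated routes and the default gateway described above on a running host, inspect the routing table. The command is standard; the output lines shown are illustrative, not captured from a real system:

ip route show
# default via 192.168.10.254 dev eth0
# 192.168.10.0/24 dev eth0 proto kernel scope link src 192.168.10.15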
10.5. Managing Translator Settings Using Management CLI | 10.5. Managing Translator Settings Using Management CLI To manage JBoss Data Virtualization translator settings, you can use the same commands as those used for the base JBoss Data Virtualization settings, specifying a particular translator in the command. For example: Available translator names are listed under translator when you run the following command to output current JBoss Data Virtualization settings: | [
"/subsystem=teiid/translator= TRANSLATOR_NAME :read-resource",
"/subsystem=teiid:read-resource"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/managing_translator_settings_using_management_cli |
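As a concrete instance of the first command above, assuming a translator named oracle appears in the server's settings output:

/subsystem=teiid/translator=oracle:read-resource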
Chapter 8. Common Deployment Scenarios | Chapter 8. Common Deployment Scenarios This section provides a brief overview of common deployment scenarios for Red Hat Satellite. Note that many variations and combinations of the following layouts are possible. 8.1. Single Location An integrated Capsule is a virtual Capsule Server that is created by default in Satellite Server during the installation process. This means Satellite Server can be used to provision directly connected hosts for Satellite deployment in a single geographical location; therefore, only one physical server is needed. The base systems of isolated Capsules can be directly managed by Satellite Server; however, it is not recommended to use this layout to manage other hosts in remote locations. 8.2. Single Location with Segregated Subnets Your infrastructure might require multiple isolated subnets even if Red Hat Satellite is deployed in a single geographic location. This can be achieved, for example, by deploying multiple Capsule Servers with DHCP and DNS services, but the recommended way is to create segregated subnets using a single Capsule. This Capsule is then used to manage hosts and compute resources in those segregated networks to ensure they only have to access the Capsule for provisioning, configuration, errata, and general management. For more information on configuring subnets, see Managing Hosts . 8.3. Multiple Locations It is recommended to create at least one Capsule Server per geographic location. This practice can save bandwidth since hosts obtain content from a local Capsule Server. Synchronization of content from remote repositories is done only by the Capsule, not by each host in a location. In addition, this layout makes the provisioning infrastructure more reliable and easier to configure. See Figure 1.1, "Red Hat Satellite System Architecture" for an illustration of this approach. 8.4. Disconnected Satellite In high security environments where hosts are required to function in a closed network disconnected from the Internet, Red Hat Satellite can provision systems with the latest security updates, errata, packages and other content. In such a case, Satellite Server does not have direct access to the Internet, but the layout of other infrastructure components is not affected. For information about installing Satellite Server from a disconnected network, see Installing Satellite Server in a Disconnected Network Environment . For information about upgrading a disconnected Satellite, see Upgrading a Disconnected Satellite Server in Upgrading and Updating Red Hat Satellite . There are two options for importing content to a disconnected Satellite Server: Disconnected Satellite with Content ISO - in this setup, you download ISO images with content from the Red Hat Customer Portal and extract them to Satellite Server or a local web server. The content on Satellite Server is then synchronized locally. This allows for complete network isolation of Satellite Server; however, the release frequency of content ISO images is around six weeks and not all product content is included. To see the products in your subscription for which content ISO images are available, log on to the Red Hat Customer Portal at https://access.redhat.com , navigate to Downloads > Red Hat Satellite , and click Content ISOs .
Note that Content ISOs previously hosted at redhat.com for import into Satellite Server have been deprecated and will be removed in the next Satellite version. Disconnected Satellite with Inter-Satellite Synchronization - in this setup, you install a connected Satellite Server and export content from it to populate a disconnected Satellite using some storage device. This allows for exporting both Red Hat provided and custom content at the frequency you choose, but requires deploying an additional server with a separate subscription. For instructions on how to configure Inter-Satellite Synchronization in Satellite, see Synchronizing Content Between Satellite Servers in Managing Content . The above methods for importing content to a disconnected Satellite Server can also be used to speed up the initial population of a connected Satellite. 8.5. Capsule with External Services You can configure a Capsule Server (integrated or standalone) to use external DNS, DHCP, or TFTP services. If you already have a server that provides these services in your environment, you can integrate it with your Satellite deployment. For information about how to configure a Capsule with external services, see Configuring Capsule Server with External Services in Installing Capsule Server . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/satellite_overview_concepts_and_deployment_considerations/chap-Architecture_Guide-Deployment_Scenarios
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/troubleshooting_openshift_data_foundation/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 3. Deprecated Functionality | Chapter 3. Deprecated Functionality systemtap component The systemtap-grapher package has been removed from Red Hat Enterprise Linux 6. For more information, see https://access.redhat.com/solutions/757983 . matahari component The Matahari agent framework ( matahari-* ) packages have been removed from Red Hat Enterprise Linux 6. Focus for remote systems management has shifted towards the use of the CIM infrastructure. This infrastructure relies on an already existing standard which provides a greater degree of interoperability for all users. distribution component The following packages have been deprecated and are subject to removal in a future release of Red Hat Enterprise Linux 6. These packages will not be updated in the Red Hat Enterprise Linux 6 repositories and customers who do not use the MRG-Messaging product are advised to uninstall them from their system. mingw-gcc mingw-boost mingw32-qpid-cpp python-qmf python-qpid qpid-cpp qpid-qmf qpid-tests qpid-tools ruby-qpid saslwrapper Red Hat MRG-Messaging customers will continue to receive updated functionality as part of their regular updates to the product. fence-virt component The libvirt-qpid package is no longer part of the fence-virt package. openscap component The openscap-perl subpackage has been removed from openscap . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/deprecated_functionality
Appendix G. Revision History | Appendix G. Revision History Note that revision numbers relate to the edition of this manual, not to version numbers of Red Hat Enterprise Linux. Revision History Revision 7.0-53 Tue Feb 16 2021 Florian Delehaye Various clarifications, in particular for IdM services and configuration files, and other minor corrections. Revision 7.0-52 Tue Sep 29 2020 Florian Delehaye Document version for 7.9 GA publication. Revision 7.0-51 Tue Mar 31 2020 Florian Delehaye Document version for 7.8 GA publication. Revision 7.0-50 Wed Aug 28 2019 Marc Muehlfeld Added the Notable Changes in IdM appendix. Several minor updates. Revision 7.0-49 Tue Aug 06 2019 Marc Muehlfeld Document version for 7.7 GA publication. Revision 7.0-48 Fri Jun 21 2019 Marc Muehlfeld Added section Renewing Expired System Certificates When IdM is Offline . Revision 7.0-47 Thu Jun 13 2019 Marc Muehlfeld Added content about configuring hidden replicas. Revision 7.0-46 Wed Jun 04 2019 Marc Muehlfeld Added section Enabling Tracking of Last Successful Kerberos Authentication . Several minor edits. Revision 7.0-45 Tue Apr 09 2019 Marc Muehlfeld Added Web UI Session Length , added two sections about authentication indicators, and several minor edits. Revision 7.0-44 Thu Nov 22 2018 Filip Hanzelka Added Identity Management components and associated services and minor edits in the Installing and Uninstalling an IdM server chapter. Revision 7.0-43 Mon Oct 29 2018 Lucie Manaskova Preparing document for 7.6 GA publication. Revision 7.0-42 Tue Jun 26 2018 Lucie Manaskova Updated Managing Certificates with the Integrated IdM CAs . Other updates. Revision 7.0-41 Fri Apr 23 2018 Filip Hanzelka Added Determining the lifetime of a Kerberos Ticket . Other minor fixes. Revision 7.0-40 Fri Apr 6 2018 Lucie Manaskova Preparing document for 7.5 GA publication. Revision 7.0-39 Wed Mar 14 2018 Filip Hanzelka Minor updates. Revision 7.0-38 Wed Feb 28 2018 Lucie Manaskova Minor updates. Revision 7.0-37 Mon Feb 12 2018 Aneta Steflova Petrova Added Users Cannot Access Their Vault Due To Insufficient 'add' Privilege . Other minor fixes. Revision 7.0-36 Mon Jan 29 2018 Aneta Steflova Petrova Updated Defining SELinux User Maps . Other minor fixes. Revision 7.0-35 Fri Dec 15 2017 Aneta Steflova Petrova Updated Managing Hosts . Other minor fixes. Revision 7.0-34 Mon Dec 4 2017 Aneta Steflova Petrova Added Kerberos PKINIT Authentication in IdM . Updated Defining Access Control for IdM Users . Other minor fixes. Revision 7.0-33 Mon Nov 20 2017 Aneta Steflova Petrova Updated chapters User and Group Schema and Defining Password Policies . Revision 7.0-32 Mon Oct 9 2017 Aneta Steflova Petrova Minor fixes. Revision 7.0-31 Tue Sep 12 2017 Aneta Steflova Petrova Updated a few web UI screenshots and procedures. Minor updates to Smart-card Authentication in Identity Management . Revision 7.0-30 Mon Aug 28 2017 Aneta Steflova Petrova Updated Smart-card Authentication in Identity Management and Identity Management Configuration Files and Directories . Revision 7.0-29 Tue Jul 18 2017 Aneta Steflova Petrova Document version for 7.4 GA publication. Revision 7.0-28 Mon Apr 24 2017 Aneta Steflova Petrova Updated and merged managing user groups, host groups, and automember. Other minor updates. Revision 7.0-27 Mon Apr 10 2017 Aneta Steflova Petrova Added Configuring TLS for Identity Management. Various minor fixes and updates. 
Revision 7.0-26 Mon Mar 27 2017 Aneta Steflova Petrova Added Post-installation Considerations for Clients and Enabling Password Reset. Other minor updates. Revision 7.0-25 Mon Feb 27 2017 Aneta Steflova Petrova Updated chapters on managing the Kerberos domain, upgrading, and HBAC. Other updates in various chapters. Revision 7.0-24 Wed Dec 7 2016 Aneta Steflova Petrova Updated automember and password policies chapters. Added description for NIS support plug-ins. Other minor updates. Revision 7.0-23 Tue Oct 18 2016 Aneta Steflova Petrova Version for 7.3 GA publication. Revision 7.0-22 Fri Jul 29 2016 Aneta Petrova Added a chapter on using vaults. Revision 7.0-21 Thu Jul 28 2016 Marc Muehlfeld Updated introduction, other minor fixes. Revision 7.0-19 Tue Jun 28 2016 Aneta Petrova Updated diagrams. Added a section on benefits of using IdM to the intro chapter. Other minor fixes and tweaks. Revision 7.0-18 Fri Jun 10 2016 Aneta Petrova Updated introduction, server installation, and troubleshooting chapters. Other fixes. Revision 7.0-17 Fri May 27 2016 Aneta Petrova Added a diagram for user lifecycle. Revision 7.0-16 Thu Mar 24 2016 Aneta Petrova Added user lifecycle. Updated the User Accounts, User Authentication, and Managing Replicas chapters. Revision 7.0-15 Thu Mar 03 2016 Aneta Petrova Updated several DNS sections. Moved restricting domains for PAM services to the System-Level Authentication Guide. Revision 7.0-14 Tue Feb 09 2016 Aneta Petrova Added smart cards, ID views, and OTP. Moved uninstallation procedures into installation chapters. Other minor updates. Revision 7.0-13 Thu Nov 19 2015 Aneta Petrova Minor updates to certificate profile management and promoting a replica to master. Revision 7.0-12 Fri Nov 13 2015 Aneta Petrova Version for 7.2 GA release with updates to DNS and other sections. Revision 7.0-11 Thu Nov 12 2015 Aneta Petrova Version for 7.2 GA release. Revision 7.0-10 Fri Mar 13 2015 Tomas Capek Async update with last-minute edits for 7.1. Revision 7.0-8 Wed Feb 25 2015 Tomas Capek Version for 7.1 GA release. Revision 7.0-6 Fri Dec 05 2014 Tomas Capek Rebuild to update the sort order on the splash page. Revision 7.0-4 Wed Jun 11 2014 Ella Deon Ballard Initial release. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/doc-history |
Chapter 41. Using Raw XML Messages | Chapter 41. Using Raw XML Messages Abstract The high-level JAX-WS APIs shield the developer from using native XML messages by marshaling the data into JAXB objects. However, there are cases when it is better to have direct access to the raw XML message data that is passing on the wire. The JAX-WS APIs provide two interfaces that provide access to the raw XML: the Dispatch interface is the client-side interface, and the Provider interface is the server-side interface. 41.1. Using XML in a Consumer Abstract The Dispatch interface is a low-level JAX-WS API that allows you to work directly with raw messages. It accepts and returns messages, or payloads, of a number of types including DOM objects, SOAP messages, and JAXB objects. Because it is a low-level API, the Dispatch interface does not perform any of the message preparation that the higher-level JAX-WS APIs perform. You must ensure that the messages, or payloads, that you pass to the Dispatch object are properly constructed, and make sense for the remote operation being invoked. 41.1.1. Usage Modes Overview Dispatch objects have two usage modes : Message mode Message Payload mode (Payload mode) The usage mode you specify for a Dispatch object determines the amount of detail that is passed to the user level code. Message mode In message mode , a Dispatch object works with complete messages. A complete message includes any binding specific headers and wrappers. For example, a consumer interacting with a service that requires SOAP messages must provide the Dispatch object's invoke() method a fully specified SOAP message. The invoke() method also returns a fully specified SOAP message. The consumer code is responsible for completing and reading the SOAP message's headers and the SOAP message's envelope information. Message mode is not ideal when working with JAXB objects. To specify that a Dispatch object uses message mode, provide the value javax.xml.ws.Service.Mode.MESSAGE when creating the Dispatch object. For more information about creating a Dispatch object, see the section called "Creating a Dispatch object" . Payload mode In payload mode , also called message payload mode, a Dispatch object works with only the payload of a message. For example, a Dispatch object working in payload mode works only with the body of a SOAP message. The binding layer processes any binding level wrappers and headers. When a result is returned from the invoke() method, the binding level wrappers and headers are already stripped away, and only the body of the message is left. When working with a binding that does not use special wrappers, such as the Apache CXF XML binding, payload mode and message mode provide the same results. To specify that a Dispatch object uses payload mode, provide the value javax.xml.ws.Service.Mode.PAYLOAD when creating the Dispatch object. For more information about creating a Dispatch object, see the section called "Creating a Dispatch object" . 41.1.2. Data Types Overview Because Dispatch objects are low-level objects, they are not optimized for using the same JAXB generated types as the higher level consumer APIs. Dispatch objects work with the following types of objects: javax.xml.transform.Source javax.xml.soap.SOAPMessage javax.activation.DataSource the section called "Using JAXB objects" Using Source objects A Dispatch object accepts and returns objects that are derived from the javax.xml.transform.Source interface.
Source objects are supported by any binding, and in either message mode or payload mode. Source objects are low level objects that hold XML documents. Each Source implementation provides methods that access the stored XML documents and then manipulate their contents. The following objects implement the Source interface: DOMSource Holds XML messages as a Document Object Model (DOM) tree. The XML message is stored as a set of Node objects that are accessed using the getNode() method. Nodes can be either updated or added to the DOM tree using the setNode() method. SAXSource Holds XML messages as a Simple API for XML (SAX) object. SAX objects contain an InputSource object that holds the raw data and an XMLReader object that parses the raw data. StreamSource Holds XML messages as a data stream. The data stream can be manipulated the same as any other data stream. If you create your Dispatch object so that it uses generic Source objects, Apache CXF returns the messages as SAXSource objects. This behavior can be changed using the endpoint's source-preferred-format property. See Part IV, "Configuring Web Service Endpoints" for information about configuring the Apache CXF runtime. Using SOAPMessage objects Dispatch objects can use javax.xml.soap.SOAPMessage objects when the following conditions are true: The Dispatch object is using the SOAP binding The Dispatch object is using message mode A SOAPMessage object holds a SOAP message. They contain one SOAPPart object and zero or more AttachmentPart objects. The SOAPPart object contains the SOAP specific portions of the SOAP message including the SOAP envelope, any SOAP headers, and the SOAP message body. The AttachmentPart objects contain binary data that is passed as an attachment. Using DataSource objects Dispatch objects can use objects that implement the javax.activation.DataSource interface when the following conditions are true: The Dispatch object is using the HTTP binding The Dispatch object is using message mode DataSource objects provide a mechanism for working with MIME typed data from a variety of sources, including URLs, files, and byte arrays. Using JAXB objects While Dispatch objects are intended to be low level APIs that allow you to work with raw messages, they also allow you to work with JAXB objects. To work with JAXB objects a Dispatch object must be passed a JAXBContext that can marshal and unmarshal the JAXB objects in use. The JAXBContext is passed when the Dispatch object is created. You can pass any JAXB object understood by the JAXBContext object as the parameter to the invoke() method. You can also cast the returned message into any JAXB object understood by the JAXBContext object. For information on creating a JAXBContext object see Chapter 39, Using A JAXBContext Object . 41.1.3. Working with Dispatch Objects Procedure To use a Dispatch object to invoke a remote service, the following sequence should be followed: Create a Dispatch object. Construct a request message. Call the proper invoke() method. Parse the response message. Creating a Dispatch object To create a Dispatch object, do the following: Create a Service object to represent the wsdl:service element that defines the service on which the Dispatch object will make invocations. See Section 25.2, "Creating a Service Object" . Create the Dispatch object using the Service object's createDispatch() method, shown in Example 41.1, "The createDispatch() Method" . Example 41.1.
The createDispatch() Method public Dispatch<T> createDispatch(QName portName, java.lang.Class<T> type, Service.Mode mode) throws WebServiceException Note If you are using JAXB objects, the method signature for createDispatch() is: public Dispatch<T> createDispatch(QName portName, javax.xml.bind.JAXBContext context, Service.Mode mode) throws WebServiceException Table 41.1, "Parameters for createDispatch() " describes the parameters for the createDispatch() method. Table 41.1. Parameters for createDispatch() Parameter Description portName Specifies the QName of the wsdl:port element that represents the service provider where the Dispatch object will make invocations. type Specifies the data type of the objects used by the Dispatch object. See Section 41.1.2, "Data Types" . When working with JAXB objects, this parameter specifies the JAXBContext object used to marshal and unmarshal the JAXB objects. mode Specifies the usage mode for the Dispatch object. See Section 41.1.1, "Usage Modes" . Example 41.2, "Creating a Dispatch Object" shows the code for creating a Dispatch object that works with DOMSource objects in payload mode. Example 41.2. Creating a Dispatch Object Constructing request messages When working with Dispatch objects, requests must be built from scratch. The developer is responsible for ensuring that the messages passed to a Dispatch object match a request that the targeted service provider can process. This requires precise knowledge about the messages used by the service provider and what, if any, header information it requires. This information can be provided by a WSDL document or an XML Schema document that defines the messages. While service providers vary greatly, there are a few guidelines to be followed: The root element of the request is based on the value of the name attribute of the wsdl:operation element corresponding to the operation being invoked. Warning If the service being invoked uses doc/literal bare messages, the root element of the request is based on the value of the name attribute of the wsdl:part element referred to by the wsdl:operation element. The root element of the request is namespace qualified. If the service being invoked uses rpc/literal messages, the top-level elements in the request will not be namespace qualified. Important The children of top-level elements may be namespace qualified. To be certain you must check their schema definitions. If the service being invoked uses rpc/literal messages, none of the top-level elements can be null. If the service being invoked uses doc/literal messages, the schema definition of the message determines if any of the elements are namespace qualified. For more information about how services use XML messages, see the WS-I Basic Profile . Synchronous invocation For consumers that make synchronous invocations that generate a response, use the Dispatch object's invoke() method shown in Example 41.3, "The Dispatch.invoke() Method" . Example 41.3. The Dispatch.invoke() Method T invoke(T msg) throws WebServiceException The types of both the response and the request passed to the invoke() method are determined when the Dispatch object is created. For example, if you create a Dispatch object using createDispatch(portName, SOAPMessage.class, Service.Mode.MESSAGE) , both the response and the request are SOAPMessage objects. Note When using JAXB objects, both the response and the request can be of any type the provided JAXBContext object can marshal and unmarshal. Also, the response and the request can be different JAXB objects.
Example 41.4, "Making a Synchronous Invocation Using a Dispatch Object" shows code for making a synchronous invocation on a remote service using a DOMSource object. Example 41.4. Making a Synchronous Invocation Using a Dispatch Object Asynchronous invocation Dispatch objects also support asynchronous invocations. As with the higher level asynchronous APIs discussed in Chapter 40, Developing Asynchronous Applications , Dispatch objects can use both the polling approach and the callback approach. When using the polling approach, the invokeAsync() method returns a Response<t> object that can be polled to see if the response has arrived. Example 41.5, "The Dispatch.invokeAsync() Method for Polling" shows the signature of the method used to make an asynchronous invocation using the polling approach. Example 41.5. The Dispatch.invokeAsync() Method for Polling Response <T> invokeAsync T msg WebServiceException For detailed information on using the polling approach for asynchronous invocations see Section 40.4, "Implementing an Asynchronous Client with the Polling Approach" . When using the callback approach, the invokeAsync() method takes an AsyncHandler implementation that processes the response when it is returned. Example 41.6, "The Dispatch.invokeAsync() Method Using a Callback" shows the signature of the method used to make an asynchronous invocation using the callback approach. Example 41.6. The Dispatch.invokeAsync() Method Using a Callback Future<?> invokeAsync T msg AsyncHandler<T> handler WebServiceException For detailed information on using the callback approach for asynchronous invocations see Section 40.5, "Implementing an Asynchronous Client with the Callback Approach" . Note As with the synchronous invoke() method, the type of the response and the type of the request are determined when you create the Dispatch object. Oneway invocation When a request does not generate a response, make remote invocations using the Dispatch object's invokeOneWay() . Example 41.7, "The Dispatch.invokeOneWay() Method" shows the signature for this method. Example 41.7. The Dispatch.invokeOneWay() Method invokeOneWay T msg WebServiceException The type of object used to package the request is determined when the Dispatch object is created. For example if the Dispatch object is created using createDispatch(portName, DOMSource.class, Service.Mode.PAYLOAD) , then the request is packaged into a DOMSource object. Note When using JAXB objects, the response and the request can be of any type the provided JAXBContext object can marshal and unmarshal. Example 41.8, "Making a One Way Invocation Using a Dispatch Object" shows code for making a oneway invocation on a remote service using a JAXB object. Example 41.8. Making a One Way Invocation Using a Dispatch Object 41.2. Using XML in a Service Provider Abstract The Provider interface is a low-level JAX-WS API that allows you to implement a service provider that works directly with messages as raw XML. The messages are not packaged into JAXB objects before being passed to an object that implements the Provider interface. 41.2.1. Messaging Modes Overview Objects that implement the Provider interface have two messaging modes : Message mode Payload mode The messaging mode you specify determines the level of messaging detail that is passed to your implementation. Message mode When using message mode , a Provider implementation works with complete messages. A complete message includes any binding specific headers and wrappers. 
For example, a Provider implementation that uses a SOAP binding receives requests as fully specified SOAP messages. Any response returned from the implementation must be a fully specified SOAP message. To specify that a Provider implementation uses message mode, provide the value javax.xml.ws.Service.Mode.MESSAGE as the value to the javax.xml.ws.ServiceMode annotation, as shown in Example 41.9, "Specifying that a Provider Implementation Uses Message Mode" . Example 41.9. Specifying that a Provider Implementation Uses Message Mode Payload mode In payload mode, a Provider implementation works with only the payload of a message. For example, a Provider implementation working in payload mode works only with the body of a SOAP message. The binding layer processes any binding level wrappers and headers. When working with a binding that does not use special wrappers, such as the Apache CXF XML binding, payload mode and message mode provide the same results. To specify that a Provider implementation uses payload mode, provide the value javax.xml.ws.Service.Mode.PAYLOAD as the value to the javax.xml.ws.ServiceMode annotation, as shown in Example 41.10, "Specifying that a Provider Implementation Uses Payload Mode" . Example 41.10. Specifying that a Provider Implementation Uses Payload Mode If you do not provide a value for the @ServiceMode annotation, the Provider implementation uses payload mode. 41.2.2. Data Types Overview Because they are low-level objects, Provider implementations cannot use the same JAXB generated types as the higher level consumer APIs. Provider implementations work with the following types of objects: javax.xml.transform.Source javax.xml.soap.SOAPMessage javax.activation.DataSource Using Source objects A Provider implementation can accept and return objects that are derived from the javax.xml.transform.Source interface. Source objects are low level objects that hold XML documents. Each Source implementation provides methods that access the stored XML documents and manipulate their contents. The following objects implement the Source interface: DOMSource Holds XML messages as a Document Object Model (DOM) tree. The XML message is stored as a set of Node objects that are accessed using the getNode() method. Nodes can be either updated or added to the DOM tree using the setNode() method. SAXSource Holds XML messages as a Simple API for XML (SAX) object. SAX objects contain an InputSource object that holds the raw data and an XMLReader object that parses the raw data. StreamSource Holds XML messages as a data stream. The data stream can be manipulated the same as any other data stream. If you create your Provider object so that it uses generic Source objects, Apache CXF returns the messages as SAXSource objects. This behavior can be changed using the endpoint's source-preferred-format property. See Part IV, "Configuring Web Service Endpoints" for information about configuring the Apache CXF runtime. Important When using Source objects, the developer is responsible for ensuring that all required binding specific wrappers are added to the message. For example, when interacting with a service expecting SOAP messages, the developer must ensure that the required SOAP envelope is added to the outgoing request and that the SOAP envelope's contents are correct.
Using SOAPMessage objects Provider implementations can use javax.xml.soap.SOAPMessage objects when the following conditions are true: The Provider implementation is using the SOAP binding The Provider implementation is using message mode A SOAPMessage object holds a SOAP message. They contain one SOAPPart object and zero or more AttachmentPart objects. The SOAPPart object contains the SOAP specific portions of the SOAP message including the SOAP envelope, any SOAP headers, and the SOAP message body. The AttachmentPart objects contain binary data that is passed as an attachment. Using DataSource objects Provider implementations can use objects that implement the javax.activation.DataSource interface when the following conditions are true: The implementation is using the HTTP binding The implementation is using message mode DataSource objects provide a mechanism for working with MIME typed data from a variety of sources, including URLs, files, and byte arrays. 41.2.3. Implementing a Provider Object Overview The Provider interface is relatively easy to implement. It only has one method, invoke() , that must be implemented. In addition, it has three simple requirements: An implementation must have the @WebServiceProvider annotation. An implementation must have a default public constructor. An implementation must implement a typed version of the Provider interface. In other words, you cannot implement a Provider<T> interface. You must implement a version of the interface that uses a concrete data type as listed in Section 41.2.2, "Data Types" . For example, you can implement an instance of a Provider<SAXSource>. The complexity of implementing the Provider interface is in the logic handling the request messages and building the proper responses. Working with messages Unlike the higher-level SEI based service implementations, Provider implementations receive requests as raw XML data, and must send responses as raw XML data. This requires that the developer has intimate knowledge of the messages used by the service being implemented. These details can typically be found in the WSDL document describing the service. WS-I Basic Profile provides guidelines about the messages used by services, including: The root element of a request is based on the value of the name attribute of the wsdl:operation element that corresponds to the operation that is invoked. Warning If the service uses doc/literal bare messages, the root element of the request is based on the value of the name attribute of the wsdl:part element referred to by the wsdl:operation element. The root element of all messages is namespace qualified. If the service uses rpc/literal messages, the top-level elements in the messages are not namespace qualified. Important The children of top-level elements might be namespace qualified, but to be certain you must check their schema definitions. If the service uses rpc/literal messages, none of the top-level elements can be null. If the service uses doc/literal messages, then the schema definition of the message determines if any of the elements are namespace qualified. The @WebServiceProvider annotation To be recognized by JAX-WS as a service implementation, a Provider implementation must be decorated with the @WebServiceProvider annotation. Table 41.2, " @WebServiceProvider Properties" describes the properties that can be set for the @WebServiceProvider annotation. Table 41.2.
@WebServiceProvider Properties Property Description portName Specifies the value of the name attribute of the wsdl:port element that defines the service's endpoint. serviceName Specifies the value of the name attribute of the wsdl:service element that contains the service's endpoint. targetNamespace Specifies the target namespace of the service's WSDL definition. wsdlLocation Specifies the URI for the WSDL document defining the service. All of these properties are optional, and are empty by default. If you leave them empty, Apache CXF creates values using information from the implementation class. Implementing the invoke() method The Provider interface has only one method, invoke() , that must be implemented. The invoke() method receives the incoming request packaged into the type of object declared by the type of Provider interface being implemented, and returns the response message packaged into the same type of object. For example, an implementation of a Provider<SOAPMessage> interface receives the request as a SOAPMessage object and returns the response as a SOAPMessage object. The messaging mode used by the Provider implementation determines the amount of binding specific information the request and the response messages contain. Implementations using message mode receive all of the binding specific wrappers and headers along with the request. They must also add all of the binding specific wrappers and headers to the response message. Implementations using payload mode only receive the body of the request. The XML document returned by an implementation using payload mode is placed into the body of the response message. Examples Example 41.11, "Provider<SOAPMessage> Implementation" shows a Provider implementation that works with SOAPMessage objects in message mode. Example 41.11. Provider<SOAPMessage> Implementation The code in Example 41.11, "Provider<SOAPMessage> Implementation" does the following: Specifies that the following class implements a Provider object that implements the service whose wsdl:service element is named stockQuoteReporter , and whose wsdl:port element is named stockQuoteReporterPort . Specifies that this Provider implementation uses message mode. Provides the required default public constructor. Provides an implementation of the invoke() method that takes a SOAPMessage object and returns a SOAPMessage object. Extracts the request message from the body of the incoming SOAP message. Checks the root element of the request message to determine how to process the request. Creates the factories required for building the response. Builds the SOAP message for the response. Returns the response as a SOAPMessage object. Example 41.12, "Provider<DOMSource> Implementation" shows an example of a Provider implementation using DOMSource objects in payload mode. Example 41.12. Provider<DOMSource> Implementation The code in Example 41.12, "Provider<DOMSource> Implementation" does the following: Specifies that the class implements a Provider object that implements the service whose wsdl:service element is named stockQuoteReporter , and whose wsdl:port element is named stockQuoteReporterPort . Specifies that this Provider implementation uses payload mode. Provides the required default public constructor. Provides an implementation of the invoke() method that takes a DOMSource object and returns a DOMSource object.
"package com.fusesource.demo; import javax.xml.namespace.QName; import javax.xml.ws.Service; public class Client { public static void main(String args[]) { QName serviceName = new QName(\"http://org.apache.cxf\", \"stockQuoteReporter\"); Service s = Service.create(serviceName); QName portName = new QName(\"http://org.apache.cxf\", \"stockQuoteReporterPort\"); Dispatch<DOMSource> dispatch = s.createDispatch(portName, DOMSource.class, Service.Mode.PAYLOAD);",
"// Creating a DOMSource Object for the request DocumentBuilder db = DocumentBuilderFactory.newDocumentBuilder(); Document requestDoc = db.newDocument(); Element root = requestDoc.createElementNS(\"http://org.apache.cxf/stockExample\", \"getStockPrice\"); root.setNodeValue(\"DOW\"); DOMSource request = new DOMSource(requestDoc); // Dispatch disp created previously DOMSource response = disp.invoke(request);",
"// Creating a JAXBContext and an Unmarshaller for the request JAXBContext jbc = JAXBContext.newInstance(\"org.apache.cxf.StockExample\"); Unmarshaller u = jbc.createUnmarshaller(); // Read the request from disk File rf = new File(\"request.xml\"); GetStockPrice request = (GetStockPrice)u.unmarshal(rf); // Dispatch disp created previously disp.invokeOneWay(request);",
"@WebServiceProvider @ServiceMode(value=Service.Mode.MESSAGE) public class stockQuoteProvider implements Provider<SOAPMessage> { }",
"@WebServiceProvider @ServiceMode(value=Service.Mode.PAYLOAD) public class stockQuoteProvider implements Provider<DOMSource> { }",
"import javax.xml.ws.Provider; import javax.xml.ws.Service; import javax.xml.ws.ServiceMode; import javax.xml.ws.WebServiceProvider; @WebServiceProvider(portName=\"stockQuoteReporterPort\" serviceName=\"stockQuoteReporter\") @ServiceMode(value=\"Service.Mode.MESSAGE\") public class stockQuoteReporterProvider implements Provider<SOAPMessage> { public stockQuoteReporterProvider() { } public SOAPMessage invoke(SOAPMessage request) { SOAPBody requestBody = request.getSOAPBody(); if(requestBody.getElementName.getLocalName.equals(\"getStockPrice\")) { MessageFactory mf = MessageFactory.newInstance(); SOAPFactory sf = SOAPFactory.newInstance(); SOAPMessage response = mf.createMessage(); SOAPBody respBody = response.getSOAPBody(); Name bodyName = sf.createName(\"getStockPriceResponse\"); respBody.addBodyElement(bodyName); SOAPElement respContent = respBody.addChildElement(\"price\"); respContent.setValue(\"123.00\"); response.saveChanges(); return response; } } }",
"import javax.xml.ws.Provider; import javax.xml.ws.Service; import javax.xml.ws.ServiceMode; import javax.xml.ws.WebServiceProvider; @WebServiceProvider(portName=\"stockQuoteReporterPort\" serviceName=\"stockQuoteReporter\") @ServiceMode(value=\"Service.Mode.PAYLOAD\") public class stockQuoteReporterProvider implements Provider<DOMSource> public stockQuoteReporterProvider() { } public DOMSource invoke(DOMSource request) { DOMSource response = new DOMSource(); return response; } }"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/JAXWSRAWXMLMessages |
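The Dispatch walkthrough above leaves the individual examples scattered, so here is a minimal end-to-end sketch of a message-mode client. It is not taken from the chapter: the endpoint address, the port registration through addPort(), and the getStockPrice payload element are assumptions chosen to match the stockQuoteReporter examples.

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPMessage;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;
import javax.xml.ws.soap.SOAPBinding;

public class DispatchMessageModeClient {
    public static void main(String[] args) throws Exception {
        QName serviceName = new QName("http://org.apache.cxf", "stockQuoteReporter");
        QName portName = new QName("http://org.apache.cxf", "stockQuoteReporterPort");

        Service service = Service.create(serviceName);
        // Assumed endpoint address; register it so the Dispatch knows where to send requests.
        service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING,
                        "http://localhost:8080/stockQuoteReporter");

        // Message mode: the client is responsible for the complete SOAP message.
        Dispatch<SOAPMessage> dispatch =
            service.createDispatch(portName, SOAPMessage.class, Service.Mode.MESSAGE);

        // Build a fully specified SOAP request by hand.
        SOAPMessage request = MessageFactory.newInstance().createMessage();
        SOAPBody body = request.getSOAPBody();
        body.addChildElement("getStockPrice", "ns", "http://org.apache.cxf/stockExample")
            .setValue("DOW");
        request.saveChanges();

        // Synchronous invocation; the response is also a complete SOAP message.
        SOAPMessage response = dispatch.invoke(request);
        response.writeTo(System.out);
    }
}

For payload mode, the same client would instead use Service.Mode.PAYLOAD with a Source type, and the binding layer would add and strip the SOAP envelope for it.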
Chapter 10. Fixed issues in Red Hat Process Automation Manager 7.13.1 | Chapter 10. Fixed issues in Red Hat Process Automation Manager 7.13.1 Red Hat Process Automation Manager 7.13.1 provides increased stability and fixed issues listed in this section. 10.1. Business Central The test scenario returns an error when it is executed in the mvn test command [ RHDM-1539 ] 10.2. KIE Server A ClassCastException occurs if you submit a form in a kie-server that contains the date process variable [ RHPAM-4326 ] The EJB service saveContentFromUser method does not work with the custom usercallback and you receive an error message [ RHPAM-4234 ] The custom query response does not return the SLAdueDate with the UserTasksWithCustomVariables mapper [ RHPAM-4232 ] The EventEmitter returns wrong task statuses [ RHPAM-4091 ] The properties on custom settings are not available on the KIE Server deployments [ RHPAM-3976 ] Class retention by JSONMashaller ObjectMapper._typeFactory._typeCache causes an OutOfMemoryError: Metaspace error [ RHDM-1933 ] 10.3. Process engine Process instance creation fails with the org.xmlpull.v1.XmlPullParserException error in VariableScope.validateVariable [ RHPAM-4482 ] Updating the task description with a long string of more than 255 characters fails with an exception [ RHPAM-4445 ] The task operations such as claiming a task using the REST API with container alias work with Red Hat Process Automation Manager version 7.11 but not with Red Hat Process Automation Manager version 7.12 [ RHPAM-4453 ] Selecting from PROCESSINSTANCELOG takes too long to execute [ RHPAM-4425 ] The kafka-clients misalignment with any supported AMQ Streams version [ RHPAM-4417 ] Orphan sessions in memory due to an exception on PerRequestRuntimeManager [ RHPAM-4386 ] The timer is not deleted at the process instance abort [ RHPAM-4380 ] The event emitter generates a TaskInstanceView object when a task event is produced, but the description field in that object contains the same value that the task has in the subject field even when the description field is empty [ RHPAM-4371 ] Non-existent timer with session id=0 is displayed when you are using the REST API to list all the available timers in a migrated process instance [ RHPAM-4312 ] Abort fails with SessionNotFoundException for process instances with multiple REST WorkItemHandlers and RETRY strategy [ RHPAM-4296 ] When you abort the workItem through the kie-server REST API, it does not execute the WorkItemHandler's abortWorkItem method.
The engine must call the abortWorkItem method from WorkItemHandler after performing the workItem abort operation [ RHPAM-4282 ] The UserGroupCallback implementation is not getting injected into KIE Server when using Spring Boot [ RHPAM-4281 ] The current index settings might cause DeadLocks in the SQL server [ RHPAM-4253 ] An aborted stage remains active in the process engine [ RHPAM-4252 ] When you are trying to update the process instance description through a script task inside the process definition, the updated value is not getting reflected immediately [ RHPAM-4251 ] The task operations fail intermittently when using LDAPUserGroupCallback and you receive an error message [ RHPAM-4247 ] The transaction timeout is reported even if the RecordsPerTransaction parameter is used in LogCleanupCommand [ RHPAM-4184 ] Incorrect response for REST service when org.kie.server.bypass.auth.user is used with Spring Boot runtime [ RHPAM-4151 ] Incorrect groups are returned when org.kie.server.bypass.auth.user is set and JAASUserGroupCallbackImpl is used [ RHPAM-4136 ] The ClusteredJobFailOverListener fails to remove the data from cache memory [ RHPAM-4070 ] 10.4. Process Designer The field with LocalDateTime is forcing you to enter a value even though the field is not marked as Required [ RHPAM-4310 ] The task form with the LocalDateTime datatype displays the time format even when the option is unflagged [ RHPAM-4189 ] If the form contains an org.jbpm.document.Document object and you are uploading a file greater than 2 MB, you receive an angular page hanging error [ RHPAM-3995 ] 10.5. Red Hat build of Kogito Kogito aligned with an unsupported Spring Boot version [ RHPAM-4419 ] 10.6. DMN designer In the DMN designer, a text annotation is not saved correctly if it is created by copying and pasting [ RHDM-1890 ] Unable to include the DMN model [ RHDM-1850 ] 10.7. Configuration Wrong managed version of Spring Boot dependencies [ RHPAM-4413 ] 10.8. Red Hat OpenShift Container Platform Upgrade Red Hat JBoss EAP version to 7.4.6 on RHPAM image [ RHPAM-4481 ] Sensitive information such as user names and passwords are exposed in environment variables and pod logs [ RHPAM-4438 ] The Kie Server OpenShift startup strategy watcher is closed and the DeploymentConfig is not updated [ RHPAM-3333 ] 10.9.
Decision engine When you are using generics in accumulate inline code, you receive an error with the ClassNotFoundException exception [ RHPAM-4444 ] The metrics of rule execution must include the rules fired from a BPMN process [ RHPAM-4248 ] The kie-server-client fails to unmarshall a response suddenly with the NumberFormatException [ RHDM-1942 ] The build fails with a "_this cannot be resolved" message during the compilation of a generated executable model [ RHDM-1940 ] In an executable model, you receive a NullPointerException error in LambdaConsequence with a global variable [ RHDM-1920 ] Fails to parse a constraint connected with OR with a bind variable on the right side in an executable model [ RHDM-1910 ] In an executable model, when a BigDecimal literal is set to a variable with an MVEL dialect, you receive the ClassCastException exception [ RHDM-1908 ] In an executable model, a prop with method invocation is not recognized in a modify block [ RHDM-1907 ] In an executable model, an arithmetic operation with a String coercion in a constraint fails to execute [ RHDM-1905 ] In an executable model, an arithmetic operation with a BigDecimal in a constraint fails [ RHDM-1904 ] | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/release_notes_for_red_hat_process_automation_manager_7.13/rn-7.13.1-fixed-issues-ref
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.19/proc-providing-feedback-on-redhat-documentation |
Chapter 2. Preparing Capsule Servers for load balancing | Chapter 2. Preparing Capsule Servers for load balancing Satellite does not support configuring existing Capsule Servers for load balancing. You must create a new Capsule Server for this purpose. 2.1. Registering to Satellite Server Use this procedure to register the base operating system on which you want to install Capsule Server to Satellite Server. Red Hat subscription manifest prerequisites On Satellite Server, a manifest must be installed and it must contain the appropriate repositories for the organization you want Capsule to belong to. The manifest must contain repositories for the base operating system on which you want to install Capsule, as well as any clients that you want to connect to Capsule. The repositories must be synchronized. For more information on manifests and repositories, see Managing Red Hat Subscriptions in Managing content . Proxy and network prerequisites The Satellite Server base operating system must be able to resolve the host name of the Capsule base operating system and vice versa. Ensure HTTPS connection using client certificate authentication is possible between Capsule Server and Satellite Server. HTTP proxies between Capsule Server and Satellite Server are not supported. You must configure the host and network-based firewalls accordingly. For more information, see Port and firewall requirements in Installing Capsule Server . You can register hosts with Satellite using the host registration feature in the Satellite web UI, Hammer CLI, or the Satellite API. For more information, see Registering hosts and setting up host integration in Managing hosts . Prerequisites You have set the load balancer for host registration. For more information, see Chapter 8, Setting the load balancer for host registration . Procedure In the Satellite web UI, navigate to Hosts > Register Host . From the Capsule dropdown list, select the load balancer. Select Force to register a host that has been previously registered to a Capsule Server. From the Activation Keys list, select the activation keys to assign to your host. Click Generate to create the registration command. Click on the files icon to copy the command to your clipboard. Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. CLI procedure Generate the host registration command using the Hammer CLI: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Include the --smart-proxy-id My_Capsule_ID option. You can use the ID of any Capsule Server that you configured for host registration load balancing. Satellite will apply the load balancer to the registration command automatically. Include the --force option to register a host that has been previously registered to a Capsule Server. Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. API procedure Generate the host registration command using the Satellite API: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Use an activation key to simplify specifying the environments. For more information, see Managing Activation Keys in Managing content . 
Include { "smart_proxy_id": My_Capsule_ID } . You can use the ID of any Capsule Server that you configured for host registration load balancing. Satellite will apply the load balancer to the registration command automatically. Include { "force": true } to register a host that has been previously registered to a Capsule Server. To enter a password as a command line argument, use username:password syntax. Keep in mind this can save the password in the shell history. Alternatively, you can use a temporary personal access token instead of a password. To generate a token in the Satellite web UI, navigate to My Account > Personal Access Tokens . Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. 2.2. Configuring repositories Prerequisite If you are installing Capsule Server as a virtual machine hosted on Red Hat Virtualization, you must also enable the Red Hat Common repository and then install Red Hat Virtualization guest agents and drivers. For more information, see Installing the Guest Agents and Drivers on Red Hat Enterprise Linux in the Virtual Machine Management Guide . Procedure Select the operating system and version you are installing on: Red Hat Enterprise Linux 9 Red Hat Enterprise Linux 8 2.2.1. Red Hat Enterprise Linux 9 Disable all repositories: Enable the following repositories: Verification Verify that the required repositories are enabled: 2.2.2. Red Hat Enterprise Linux 8 Disable all repositories: Enable the following repositories: Enable the module: Verification Verify that the required repositories are enabled: Additional Resources If there is any warning about conflicts with Ruby or PostgreSQL while enabling satellite-capsule:el8 module, see Troubleshooting DNF modules . For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux 8 Application Streams Lifecycle . 2.3. Installing Capsule Server packages Before installing Capsule Server packages, you must update all packages that are installed on the base operating system. Procedure To install Capsule Server, complete the following steps: Update all packages: Install the Satellite Server packages: 2.4. Additional resources For more information about installing Capsule Servers, see Installing Capsule Server . | [
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \"",
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \" --insecure true",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"] }}'",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"], \"insecure\": true }}'",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms --enable=satellite-capsule-6.16-for-rhel-9-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-9-x86_64-rpms",
"dnf repolist enabled",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-capsule-6.16-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-8-x86_64-rpms",
"dnf module enable satellite-capsule:el8",
"dnf repolist enabled",
"dnf upgrade",
"dnf install satellite-capsule"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_capsules_with_a_load_balancer/preparing-capsule-servers-for-load-balancing_load-balancing |
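As a rough illustration of the API procedure above, the following Java sketch issues the same POST request as the documented curl call, using the JDK 11 java.net.http client. The host name, credentials, activation key, and smart proxy ID are placeholders, and the sketch assumes the Satellite CA certificate is already trusted by the JVM.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class RegistrationCommandClient {
    public static void main(String[] args) throws Exception {
        // Basic authentication; a personal access token can replace the password.
        String auth = Base64.getEncoder()
            .encodeToString("My_User_Name:My_Token".getBytes());

        // Mirrors the documented JSON body, including the load-balanced Capsule ID.
        String body = "{ \"registration_command\": {"
            + " \"activation_keys\": [\"My_Activation_Key\"],"
            + " \"smart_proxy_id\": 2 } }";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://satellite.example.com/api/registration_commands"))
            .header("Content-Type", "application/json")
            .header("Authorization", "Basic " + auth)
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        // The response body contains the generated registration command.
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}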
Chapter 2. Setting up an instance using the command line | Chapter 2. Setting up an instance using the command line On the command line, you can use either a .inf file or the interactive installer to set up a new instance. Additionally, you can set up a new instance as a non-root user. 2.1. Prerequisites The server meets the requirements of the latest Red Hat Directory Server version as described in the Red Hat Directory Server 12 Release Notes 2.2. Setting up a new instance on the command line using a .inf file When you set up Directory Server using a .inf file on the command line, you can customize advanced settings. For example, you can customize the following settings in the .inf file: The user and group the ns-slapd Directory Server process uses after the service has started. Note that, if you use a different user and group, you must manually create the user and group before you start the installation. Paths, such as the configuration, backup, and data directory. Certificate validity. 2.2.1. Installing the Directory Server packages Use the following procedure to install the Directory Server packages. Prerequisites You enabled RHEL and Directory Server repositories as described in Enabling Directory Server repositories . Procedure Enable the redhat-ds:12 module and install Directory Server packages: 2.2.2. Creating a .inf file for a Directory Server instance installation Create a .inf file for the dscreate utility, and adjust the file to your environment. In a later step, you will use this file to create the new Directory Server instance. Prerequisites You installed the redhat-ds:12 module. Procedure Use the dscreate create-template command to create a template .inf file. For example, to store the template in the /root/instance_name.inf file, enter: # dscreate create-template /root/instance_name.inf The created file contains all available parameters including descriptions. Edit the file that you created in the previous step: Uncomment the parameters that you want to set to customize the installation. All parameters have defaults. However, Red Hat recommends that you customize certain parameters for a production environment. For example, set at least the following parameters in the [slapd] section: To install an instance with the LMDB backend, set the following parameters: Note that mdb_max_size must be an integer value that depends on your directory size. For more details, see the nsslapd-mdb-max-size attribute description. To automatically create a suffix during instance creation, set the following parameters in the [backend-userroot] section: Important If you do not create a suffix during instance creation, you must create it later manually before you can store data in this instance. Optional: Uncomment other parameters and set them to appropriate values for your environment. For example, use these parameters to specify replication options, such as authentication credentials and changelog trimming, or set different ports for the LDAP and LDAPS protocols. Note By default, new instances that you create include a self-signed certificate and TLS enabled. For increased security, Red Hat recommends that you do not disable this feature. Note that you can replace the self-signed certificate with a certificate issued by a Certificate Authority (CA) at a later date. Additional resources Enabling TLS-encrypted connections to Directory Server 2.2.3.
Using a .inf file to set up a new Directory Server instance This section describes how to use a .inf file to set up a new Directory Server instance using the command line. Prerequisites You created a .inf file for the Directory Server instance. Procedure Pass the .inf file to the dscreate from-file command to create the new instance: # dscreate from-file /root/instance_name.inf Starting installation ... Validate installation settings ... Create file system structures ... Create self-signed certificate database ... Perform SELinux labeling ... Perform post-installation tasks ... Completed installation for instance: slapd-instance_name The dscreate utility automatically starts the instance and configures RHEL to start the service when the system boots. Open the required ports in the firewall: # firewall-cmd --permanent --add-port={389/tcp,636/tcp} Reload the firewall configuration: # firewall-cmd --reload 2.3. Setting up a new instance on the command line using the interactive installer Administrators can use the Directory Server interactive installer to set up a new instance by answering questions about the configuration for the new instance. If you want to customize additional settings during the installation, use a .inf file instead of the interactive installer. For details, see Setting up a new instance on the command line using a .inf file . 2.3.1. Prerequisites The server meets the requirements of the latest Red Hat Directory Server version as described in the Red Hat Directory Server 12 Release Notes . 2.3.2. Installing the Directory Server packages Use the following procedure to install the Directory Server packages. Prerequisites You enabled RHEL and Directory Server repositories as described in Enabling Directory Server repositories . Procedure Enable the redhat-ds:12 module and install Directory Server packages: 2.3.3. Creating an instance using the interactive installer This section explains how to use the interactive installer to create a new Directory Server instance. Procedure Start the interactive installer: # dscreate interactive Answer the questions of the interactive installer. To use the default values displayed in square brackets behind most questions in the installer, press Enter without entering a value. Install Directory Server (interactive mode) =========================================== Enter system's hostname [server.example.com]: Enter the instance name [server]: instance_name Enter port number [389]: Create self-signed certificate database [yes]: Enter secure port number [636]: Enter Directory Manager DN [cn=Directory Manager]: Enter the Directory Manager password: password Confirm the Directory Manager Password: password Choose whether mdb or bdb is used. [bdb]: mdb Enter the lmdb database size [15154167808.0]: database_size_in_bytes Enter the database suffix (or enter "none" to skip) [dc=server,dc=example,dc=com]: dc=example,dc=com Create sample entries in the suffix [no]: Create just the top suffix entry [no]: yes Do you want to start the instance after the installation? [yes]: Are you ready to install? [no]: yes Note Instead of setting a password in clear text you can set a { algorithm } hash string generated by the pwdhash utility. Open the required ports in the firewall: # firewall-cmd --permanent --add-port={389/tcp,636/tcp} Reload the firewall configuration: # firewall-cmd --reload 2.4. Setting up a new instance as a non-root user If you do not have root permissions, you can perform the Directory Server installation as a user. 
Use this method to test Directory Server and develop LDAP applications. However, note that instances run by a non-root user have limitations, such as: They do not support Simple Network Management Protocol (SNMP). They can use only ports higher than or equal to 1024. 2.4.1. Preparing the environment to install Directory Server as a user Without root permissions, before you can create and administer Directory Server instances, you need to prepare a proper environment using the dscreate ds-root command. Prerequisites You installed the Directory Server packages as a root user. Procedure Ensure you have $HOME/bin in your PATH variable. If not: Append the following to the ~/.bash_profile file: PATH="$HOME/bin:$PATH" Re-read the ~/.bash_profile file: $ source ~/.bash_profile Configure the environment for instance creation to use the custom location: $ dscreate ds-root $HOME/dsroot $HOME/bin This command replaces the standard installation paths with $HOME/dsroot/ and creates a copy of the standard Directory Server administration utilities in the $HOME/bin/ directory. To make the shell use new paths: Clear the cache: $ hash -r dscreate Verify that the shell uses the correct path to the command: $ which dscreate ~/bin/dscreate For the dscreate command, the shell now uses the $HOME/bin/dscreate instead of /usr/bin/dscreate . 2.4.2. Installing a new instance as a non-root user To install Directory Server without root permissions, you can use the interactive installer. After the installation, Directory Server creates an instance in the custom location and a user can run the dscreate , dsctl , and dsconf utilities as usual. Prerequisites You prepared the environment for non-root installation. You have sudo permissions to use the firewall-cmd utility if you want to make the Directory Server instance available from the outside. Procedure Create an instance using the interactive installer Start the interactive installer: $ dscreate interactive Answer the questions of the interactive installer. To use the default values displayed in square brackets behind most questions in the installer, press Enter without entering a value. Note During the installation, you must choose the instance port and secure port number higher than 1024 (for example, 1389 and 1636). Otherwise, a user does not have permissions to bind to a privileged port (1-1023).
Optional: If you want to make the Directory Server instance available from the outside: Open the ports in the firewall: # sudo firewall-cmd --permanent --add-port={1389/tcp,1636/tcp} Reload the firewall configuration: # sudo firewall-cmd --reload Verification Run the ldapsearch command to test that a user can connect to the instance: $ ldapsearch -D "cn=Directory Manager" -W -H ldap://server.example.com:1389 -b "dc=example,dc=com" -s sub -x "(objectclass=*)" Additional resources Preparing the environment for non-root user installation How to bind ports below 1024 with non-root privilege | [
"dnf module enable redhat-ds:12 dnf install 389-ds-base cockpit-389-ds",
"dscreate create-template /root/instance_name.inf",
"instance_name = instance_name root_password = password",
"db_lib = mdb mdb_max_size = 21474836480",
"create_suffix_entry = True suffix = dc=example,dc=com",
"dscreate from-file /root/instance_name.inf Starting installation Validate installation settings Create file system structures Create self-signed certificate database Perform SELinux labeling Perform post-installation tasks Completed installation for instance: slapd-instance_name",
"firewall-cmd --permanent --add-port={389/tcp,636/tcp}",
"firewall-cmd --reload",
"dnf module enable redhat-ds:12 dnf install 389-ds-base cockpit-389-ds",
"dscreate interactive",
"Install Directory Server (interactive mode) =========================================== Enter system's hostname [server.example.com]: Enter the instance name [server]: instance_name Enter port number [389]: Create self-signed certificate database [yes]: Enter secure port number [636]: Enter Directory Manager DN [cn=Directory Manager]: Enter the Directory Manager password: password Confirm the Directory Manager Password: password Choose whether mdb or bdb is used. [bdb]: mdb Enter the lmdb database size [15154167808.0]: database_size_in_bytes Enter the database suffix (or enter \"none\" to skip) [dc=server,dc=example,dc=com]: dc=example,dc=com Create sample entries in the suffix [no]: Create just the top suffix entry [no]: yes Do you want to start the instance after the installation? [yes]: Are you ready to install? [no]: yes",
"firewall-cmd --permanent --add-port={389/tcp,636/tcp}",
"firewall-cmd --reload",
"PATH=\"USDHOME/bin:USDPATH\"",
"source ~/.bash_profile",
"dscreate ds-root USDHOME/dsroot USDHOME/bin",
"hash -r dscreate",
"which dscreate ~/bin/dscreate",
"dscreate interactive",
"Install Directory Server (interactive mode) =========================================== Non privileged user cannot use semanage, will not relabel ports or files. Selinux support will be disabled, continue? [yes]: yes Enter system's hostname [server.example.com]: Enter the instance name [server]: instance_name Enter port number [389]: 1389 Create self-signed certificate database [yes]: Enter secure port number [636]: 1636 Enter Directory Manager DN [cn=Directory Manager]: Enter the Directory Manager password: password Confirm the Directory Manager Password: password Enter the database suffix (or enter \"none\" to skip) [dc=server,dc=example,dc=com]: dc=example,dc=com Create sample entries in the suffix [no]: Create just the top suffix entry [no]: yes Do you want to start the instance after the installation? [yes]: Are you ready to install? [no]: yes",
"sudo firewall-cmd --permanent --add-port={1389/tcp,1636/tcp}",
"sudo firewall-cmd --reload",
"ldapsearch -D \"cn=Directory Manager\" -W -H ldap://server.example.com:1389 -b \"dc=example,dc=com\" -s sub -x \"(objectclass=*)\""
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/installing_red_hat_directory_server/setting-up-an-instance-using-the-command-line_installing-rhds |
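The ldapsearch verification above can also be scripted. This sketch uses the third-party ldap3 Python library to perform the same simple bind and subtree search; the host, port, password, and suffix are the placeholders from the example, not values your instance necessarily uses.

```python
from ldap3 import Server, Connection, SUBTREE  # pip install ldap3

# Placeholder connection details taken from the verification example above.
server = Server("server.example.com", port=1389)

# Simple bind as Directory Manager, like `ldapsearch -D ... -W`.
conn = Connection(server, user="cn=Directory Manager",
                  password="password", auto_bind=True)

# Subtree search under the suffix, like `-b "dc=example,dc=com" -s sub`.
conn.search(search_base="dc=example,dc=com",
            search_filter="(objectclass=*)",
            search_scope=SUBTREE)

print(f"bound OK; {len(conn.entries)} entries under the suffix")
conn.unbind()
```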
Chapter 50. Virtualization | Chapter 50. Virtualization USB 3.0 support for KVM guests USB 3.0 host adapter (xHCI) emulation for KVM guests remains a Technology Preview in Red Hat Enterprise Linux 7. (BZ#1103193) Select Intel network adapters now support SR-IOV as a guest on Hyper-V In this update for Red Hat Enterprise Linux guest virtual machines running on Hyper-V, a new PCI passthrough driver adds the ability to use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the ixgbevf driver. This ability is enabled when the following conditions are met: SR-IOV support is enabled for the network interface controller (NIC) SR-IOV support is enabled for the virtual NIC SR-IOV support is enabled for the virtual switch The virtual function (VF) from the NIC is attached to the virtual machine. The feature is currently supported with Microsoft Windows Server 2016. (BZ#1348508) No-IOMMU mode for VFIO drivers As a Technology Preview, this update adds No-IOMMU mode for virtual function I/O (VFIO) drivers. The No-IOMMU mode provides the user with full user-space I/O (UIO) access to a direct memory access (DMA)-capable device without a I/O memory management unit (IOMMU). Note that in addition to not being supported, using this mode is not secure due to the lack of I/O management provided by IOMMU. (BZ# 1299662 ) virt-v2v can now use vmx configuration files to convert VMware guests As a Technology Preview, the virt-v2v utility now includes the vmx input mode, which enables the user to convert a guest virtual machine from a VMware vmx configuration file. Note that to do this, you also need access to the corresponding VMware storage, for example by mounting the storage using NFS. It is also possible to access the storage using SSH, by adding the -it ssh parameter. (BZ# 1441197 , BZ# 1523767 ) virt-v2v can convert Debian and Ubuntu guests As a technology preview, the virt-v2v utility can now convert Debian and Ubuntu guest virtual machines. Note that the following problems currently occur when performing this conversion: virt-v2v cannot change the default kernel in the GRUB2 configuration, and the kernel configured in the guest is not changed during the conversion, even if a more optimal version of the kernel is available on the guest. After converting a Debian or Ubuntu VMware guest to KVM, the name of the guest's network interface may change, and thus requires manual configuration. (BZ# 1387213 ) Virtio devices can now use vIOMMU As a Technology Preview, this update enables virtio devices to use virtual Input/Output Memory Management Unit (vIOMMU). This guarantees the security of Direct Memory Access (DMA) by allowing the device to DMA only to permitted addresses. However, note that only guest virtual machines using Red Hat Enterprise Linux 7.4 or later are able to use this feature. (BZ# 1283251 , BZ#1464891) virt-v2v converts VMWare guests faster and more reliably As a Technology Preview, the virt-v2v utility can now use the VMWare Virtual Disk Development Kit (VDDK) to import a VMWare guest virtual machine to a KVM guest. This enables virt-v2v to connect directly to the VMWare ESXi hypervisor, which improves the speed and reliability of the conversion. Note that this conversion import method requires the external nbdkit utility and its VDDK plug-in. (BZ#1477912) Open Virtual Machine Firmware The Open Virtual Machine Firmware (OVMF) is available as a Technology Preview in Red Hat Enterprise Linux 7. 
OVMF is a UEFI secure boot environment for AMD64 and Intel 64 guests. However, OVMF is not bootable with virtualization components available in RHEL 7. Note that OVMF is fully supported in RHEL 8. (BZ#653382) GPU-based mediated devices now support the VNC console As a Technology Preview, the Virtual Network Computing (VNC) console is now available for use with GPU-based mediated devices, such as the NVIDIA vGPU technology. As a result, it is now possible to use these mediated devices for real-time rendering of a virtual machine's graphical output. (BZ# 1475770 , BZ#1470154, BZ#1555246) Azure M416v2 as a host for RHEL 7 guests As a Technology Preview, the Azure M416v2 instance type can now be used as a host for virtual machines that use RHEL 7.6 and later as the guest operating systems. (BZ#1661654) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/technology_previews_virtualization |
Chapter 7. Updating an instance | Chapter 7. Updating an instance You can add and remove additional resources from running instances, such as persistent volume storage, a network interface, or a public IP address. You can also update instance metadata and the security groups that the instance belongs to. Note To execute openstack client commands on the cloud, you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods: Use the --os-cloud option with each command, for example: Use this option if you access more than one cloud. Create an environment variable for the cloud name in your bashrc file: 7.1. Attaching a network to an instance You can attach a network to a running instance. When you attach a network to the instance, the Compute service creates the port on the network for the instance. Use a network to attach the network interface to an instance when you want to use the default security group and there is only one subnet on the network. Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Identify the available networks and note the name or ID of the network that you want to attach to your instance: If the network that you need is not available, create a new network: Attach the network to your instance: Optional: Include the --tag option and replace <tag_name> with the name of a tag for your virtual NIC device. Replace <instance> with the name or ID of the instance that you want to attach the network to. Replace <network> with the name or ID of the network that you want to attach to the instance. Tip To tag a virtual device at server creation time, see Tagging virtual devices . Additional resources openstack network create command in the Command line interface reference . Creating a network in the Configuring Red Hat OpenStack Platform networking guide. 7.2. Detaching a network from an instance You can detach a network from an instance. Note Detaching the network detaches all network ports. If the instance has multiple ports on a network and you want to detach only one of those ports, follow the Detaching a port from an instance procedure to detach the port. Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Identify the network that is attached to the instance: Detach the network from the instance: Replace <instance> with the name or ID of the instance that you want to remove the network from. Replace <network> with the name or ID of the network that you want to remove from the instance. 7.3. Attaching a port to an instance You can attach a network interface to a running instance by using a port. You can attach a port to only one instance at a time. Use a port to attach the network interface to an instance when you want to use a custom security group, or when there are multiple subnets on the network. Tip If you attach the network interface by using a network, the port is created automatically. For more information, see Attaching a network to an instance . Note Red Hat OpenStack Platform (RHOSP) provides up to 24 interfaces for each instance. By default, you can add up to 16 PCIe devices to an instance before you must reboot the instance to add more. 
The RHOSP administrator can use the NovaLibvirtNumPciePorts parameter to configure the number of PCIe devices that can be added to an instance, before a reboot of the instance is required to add more devices. Prerequisites If attaching a port with an SR-IOV vNIC to an instance, there must be a free SR-IOV device on the host on the appropriate physical network, and the instance must have a free PCIe slot. The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Create the port that you want to attach to your instance: Replace <network> with the name or ID of the network to create the port on. Optional: To create an SR-IOV port, replace <vnic-type> with one of the following values: direct : Creates a direct mode SR-IOV virtual function (VF) port. direct-physical : Creates a direct mode SR-IOV physical function (PF) port. macvtap : Creates an SR-IOV port that is attached to the instance through a MacVTap device. Replace <port> with the name or ID of the port that you want to attach to the instance. Attach the port to your instance: Replace <instance> with the name or ID of the instance that you want to attach the port to. Replace <port> with the name or ID of the port that you want to attach to the instance. Verify that the port is attached to your instance: Replace <instance_UUID> with the UUID of the instance that you attached the port to. Additional resources openstack port create command in the Command line interface reference . 7.4. Detaching a port from an instance You can detach a port from an instance. Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Identify the port that is attached to the instance: Detach the port from the instance: Replace <instance> with the name or ID of the instance that you want to remove the port from. Replace <port> with the name or ID of the port that you want to remove from the instance. 7.5. Attaching a volume to an instance You can attach a volume to an instance for persistent storage. You can attach a volume to only one instance at a time, unless the volume has been configured as a multi-attach volume. For more information about creating multi-attach volumes, see Volumes that can be attached to multiple instances . Prerequisites To attach a multi-attach volume, the environment variable OS_COMPUTE_API_VERSION is set to 2.60 or later. The instance is fully operational, or fully stopped. You cannot attach a volume to an instance when the instance is in the process of booting up or shutting down. To attach more than 26 volumes to your instance, the image you used to create the instance must have the following properties: hw_scsi_model=virtio-scsi hw_disk_bus=scsi The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Identify the available volumes and note the name or ID of the volume that you want to attach to your instance: Attach the volume to your instance: Optional: Include the --tag option and replace <tag_name> with the name of a tag for your virtual storage device. Replace <instance> with the name or ID of the instance that you want to attach the volume to. 
Replace <volume> with the name or ID of the volume that you want to attach to the instance. Tip To tag a virtual device at server creation time, see Tagging virtual devices. Note If the command returns the following error, the volume you chose to attach to the instance is a multi-attach volume, so you must use Compute API version 2.60 or later: You can either set the environment variable OS_COMPUTE_API_VERSION=2.72, or include the --os-compute-api-version argument when adding the volume to the instance: Tip Specify --os-compute-api-version 2.20 or higher to add a volume to an instance with status SHELVED or SHELVED_OFFLOADED. Confirm that the volume is attached to the instance or instances: Replace <volume> with the name or ID of the volume to display. Example output: 7.6. Viewing the volumes attached to an instance You can view the volumes attached to a particular instance. Prerequisites You are using python-openstackclient 5.5.0. The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure List the volumes attached to an instance: 7.7. Detaching a volume from an instance You can detach a volume from an instance. Prerequisites The instance is fully operational, or fully stopped. You cannot detach a volume from an instance when the instance is in the process of booting up or shutting down. The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Identify the volume that is attached to the instance: Detach the volume from the instance: Replace <instance> with the name or ID of the instance that you want to remove the volume from. Replace <volume> with the name or ID of the volume that you want to remove from the instance. Note Specify --os-compute-api-version 2.20 or higher to remove a volume from an instance with status SHELVED or SHELVED_OFFLOADED. A scripted variant of the attach calls in this chapter appears after this entry's command listing. | [
"openstack flavor list --os-cloud <cloud_name>",
"`export OS_CLOUD=<cloud_name>`",
"openstack network list",
"openstack network create <network>",
"openstack server add network [--tag <tag_name>] <instance> <network>",
"openstack server show <instance>",
"openstack server remove network <instance> <network>",
"openstack port create --network <network> [--vnic-type <vnic-type>] <port>",
"openstack server add port <instance> <port>",
"openstack port list --device-id <instance_UUID>",
"openstack server show <instance>",
"openstack server remove port <instance> <port>",
"openstack volume list",
"openstack server add volume [--tag <tag_name>] <instance> <volume>",
"Multiattach volumes are only supported starting with compute API version 2.60. (HTTP 400) (Request-ID: req-3a969c31-e360-4c79-a403-75cc6053c9e5)",
"openstack --os-compute-api-version 2.72 server add volume <instance> <volume>",
"openstack volume show <volume>",
"+-----------------------------------------------------+----------------------+---------+-----+-----------------------------------------------------------------------------------------------+ | ID | Name | Status | Size| Attached to +-----------------------------------------------------+---------------------+---------+------+---------------------------------------------------------------------------------------------+ | f3fb92f6-c77b-429f-871d-65b1e3afa750 | volMultiattach | in-use | 50 | Attached to instance1 on /dev/vdb Attached to instance2 on /dev/vdb | +-----------------------------------------------------+----------------------+---------+-----+-----------------------------------------------------------------------------------------------+",
"openstack server volume list <instance> +---------------------+----------+---------------------+-----------------------+ | ID | Device | Server ID | Volume ID | +---------------------+----------+---------------------+-----------------------+ | 1f9dcb02-9a20-4a4b- | /dev/vda | ab96b635-1e63-4487- | 1f9dcb02-9a20-4a4b-9f | | 9f25-c7846a1ce9e8 | | a85c-854197cd537b | 25-c7846a1ce9e8 | +---------------------+----------+---------------------+-----------------------+",
"openstack server show <instance>",
"openstack server remove volume <instance> <volume>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/creating_and_managing_instances/assembly_updating-an-instance_osp |
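The attach steps in this chapter can also be scripted with openstacksdk, the Python SDK behind the openstack client. The sketch below strings together the network, port, and volume attach calls from the procedures above; the cloud, instance, network, and volume names are placeholders, and the microversion remark is an assumption about how your clouds.yaml is configured.

```python
import openstack  # pip install openstacksdk

# Reads clouds.yaml, equivalent to the --os-cloud option / OS_CLOUD variable.
conn = openstack.connect(cloud="<cloud_name>")

server = conn.compute.find_server("<instance>", ignore_missing=False)
network = conn.network.find_network("<network>", ignore_missing=False)
volume = conn.block_storage.find_volume("<volume>", ignore_missing=False)

# Like `openstack server add network`: the Compute service creates the port.
conn.compute.create_server_interface(server, net_id=network.id)

# Like `openstack server add port`, but with a pre-created port (for custom
# security groups or a specific subnet).
port = conn.network.create_port(network_id=network.id)
conn.compute.create_server_interface(server, port_id=port.id)

# Like `openstack server add volume`. For multi-attach volumes the request
# must use compute API microversion 2.60 or later (assumption: set
# compute_api_version in clouds.yaml, as with --os-compute-api-version).
attachment = conn.compute.create_volume_attachment(server, volume_id=volume.id)
print(f"volume attached at {attachment.device}")
```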
Chapter 6. Downloading files from your bucket | Chapter 6. Downloading files from your bucket To download a file from your bucket to your workbench, use the download_file() method. Prerequisites You have cloned the odh-doc-examples repository to your workbench. You have opened the s3client_examples.ipynb file in your workbench. You have installed Boto3 and configured an S3 client. Procedure In the notebook, locate the following instructions to download files from a bucket: Modify the code sample: Replace <bucket_name> with the name of the bucket that the file is located in. Replace <object_name> with the name of the file that you want to download. Replace <file_name> with the name and path that you want the file to be downloaded to, as shown in the example. Run the code cell. Verification The file that you downloaded appears in the path that you specified on your workbench. A variant of the download call with basic error handling appears after this entry's code listing. | [
"#Download file from bucket #Replace the following values with your own: #<bucket_name>: The name of the bucket. #<object_name>: The name of the file to download. Must include full path to the file on the bucket. #<file_name>: The name of the file when downloaded. s3_client.download_file('<bucket_name>','<object_name>','<file_name>')",
"s3_client.download_file('aqs086-image-registry', 'series35-image36-086.csv', '\\tmp\\series35-image36-086.csv_old')"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_data_in_an_s3-compatible_object_store/downloading-files-from-available-amazon-s3-buckets-using-notebook-cells_s3 |
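A slightly more defensive variant of the download call documented above: the same boto3 download_file() API, with the basic error handling the minimal example omits. The helper name and the commented-out call are hypothetical; reuse your existing s3_client.

```python
from botocore.exceptions import ClientError


def download(s3_client, bucket, key, dest):
    """Wrap download_file() so a missing object or bad credentials fail loudly."""
    try:
        s3_client.download_file(bucket, key, dest)
    except ClientError as err:
        # Raised, for example, as a 404 when <object_name> is not in <bucket_name>.
        print(f"download of s3://{bucket}/{key} failed: {err}")
        raise


# download(s3_client, '<bucket_name>', '<object_name>', '/tmp/<file_name>')
```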
function::inode_path | function::inode_path Name function::inode_path - get the path to an inode Synopsis Arguments inode Pointer to inode. Description Returns the full path associated with the given inode. | [
"inode_path:string(inode:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-inode-path |
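The reference entry above gives only the signature, so here is a hedged usage sketch: a Python wrapper that launches a one-line SystemTap script calling inode_path on the inode of a file read through the VFS. The vfs.read probe point and the f_inode field are assumptions — tapset variable names and struct layouts vary across kernel versions — so treat this as illustrative rather than verified.

```python
import subprocess

# Assumption: the vfs.read tapset probe exposes the target variable $file
# (struct file *) and the running kernel provides file->f_inode.
SCRIPT = r"""
probe vfs.read {
    printf("%s\n", inode_path($file->f_inode))
    exit()
}
"""

# `stap -e` runs a script supplied on the command line; requires stap privileges.
subprocess.run(["stap", "-e", SCRIPT], check=True)
```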
Chapter 2. Installing the Red Hat build of OpenTelemetry | Chapter 2. Installing the Red Hat build of OpenTelemetry Installing the Red Hat build of OpenTelemetry involves the following steps: Installing the Red Hat build of OpenTelemetry Operator. Creating a namespace for an OpenTelemetry Collector instance. Creating an OpenTelemetryCollector custom resource to deploy the OpenTelemetry Collector instance. 2.1. Installing the Red Hat build of OpenTelemetry from the web console You can install the Red Hat build of OpenTelemetry from the Administrator view of the web console. Prerequisites You are logged in to the web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Install the Red Hat build of OpenTelemetry Operator: Go to Operators OperatorHub and search for Red Hat build of OpenTelemetry Operator. Select the Red Hat build of OpenTelemetry Operator that is provided by Red Hat Install Install View Operator. Important This installs the Operator with the default presets: Update channel stable Installation mode All namespaces on the cluster Installed Namespace openshift-operators Update approval Automatic In the Details tab of the installed Operator page, under ClusterServiceVersion details, verify that the installation Status is Succeeded. Create a project of your choice for the OpenTelemetry Collector instance that you will create in the next step by going to Home Projects Create Project. Create an OpenTelemetry Collector instance. Go to Operators Installed Operators. Select OpenTelemetry Collector Create OpenTelemetry Collector YAML view. In the YAML view, customize the OpenTelemetryCollector custom resource (CR) with the OTLP, Jaeger, and Zipkin receivers and the debug exporter. apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: | receivers: otlp: protocols: grpc: http: jaeger: protocols: grpc: thrift_binary: thrift_compact: thrift_http: zipkin: processors: batch: memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: debug: service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug] Select Create. Verification Use the Project: dropdown list to select the project of the OpenTelemetry Collector instance. Go to Operators Installed Operators to verify that the Status of the OpenTelemetry Collector instance is Condition: Ready. Go to Workloads Pods to verify that all the component pods of the OpenTelemetry Collector instance are running. 2.2. Installing the Red Hat build of OpenTelemetry by using the CLI You can install the Red Hat build of OpenTelemetry from the command line. Prerequisites An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
Run oc login: $ oc login --username=<your_username> Procedure Install the Red Hat build of OpenTelemetry Operator: Create a project for the Red Hat build of OpenTelemetry Operator by running the following command: $ oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-opentelemetry-operator openshift.io/cluster-monitoring: "true" name: openshift-opentelemetry-operator EOF Create an Operator group by running the following command: $ oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-opentelemetry-operator namespace: openshift-opentelemetry-operator spec: upgradeStrategy: Default EOF Create a subscription by running the following command: $ oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: opentelemetry-product namespace: openshift-opentelemetry-operator spec: channel: stable installPlanApproval: Automatic name: opentelemetry-product source: redhat-operators sourceNamespace: openshift-marketplace EOF Check the Operator status by running the following command: $ oc get csv -n openshift-opentelemetry-operator Create a project of your choice for the OpenTelemetry Collector instance that you will create in a subsequent step: To create a project without metadata, run the following command: $ oc new-project <project_of_opentelemetry_collector_instance> To create a project with metadata, run the following command: $ oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_opentelemetry_collector_instance> EOF Create an OpenTelemetry Collector instance in the project that you created for it. Note You can create multiple OpenTelemetry Collector instances in separate projects on the same cluster. Customize the OpenTelemetry Collector custom resource (CR) with the OTLP, Jaeger, and Zipkin receivers and the debug exporter: apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: | receivers: otlp: protocols: grpc: http: jaeger: protocols: grpc: thrift_binary: thrift_compact: thrift_http: zipkin: processors: batch: memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: debug: service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug] Apply the customized CR by running the following command: $ oc apply -f - << EOF <OpenTelemetryCollector_custom_resource> EOF Verification Verify that the status.phase of the OpenTelemetry Collector pod is Running and the conditions are type: Ready by running the following command: $ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml Get the OpenTelemetry Collector service by running the following command: $ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> 2.3. Additional resources Creating a cluster admin OperatorHub.io Accessing the web console Installing from OperatorHub using the web console Creating applications from installed Operators Getting started with the OpenShift CLI | [
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: | receivers: otlp: protocols: grpc: http: jaeger: protocols: grpc: thrift_binary: thrift_compact: thrift_http: zipkin: processors: batch: memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: debug: service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-opentelemetry-operator openshift.io/cluster-monitoring: \"true\" name: openshift-opentelemetry-operator EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-opentelemetry-operator namespace: openshift-opentelemetry-operator spec: upgradeStrategy: Default EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: opentelemetry-product namespace: openshift-opentelemetry-operator spec: channel: stable installPlanApproval: Automatic name: opentelemetry-product source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-opentelemetry-operator",
"oc new-project <project_of_opentelemetry_collector_instance>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_opentelemetry_collector_instance> EOF",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: <project_of_opentelemetry_collector_instance> spec: mode: deployment config: | receivers: otlp: protocols: grpc: http: jaeger: protocols: grpc: thrift_binary: thrift_compact: thrift_http: zipkin: processors: batch: memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 exporters: debug: service: pipelines: traces: receivers: [otlp,jaeger,zipkin] processors: [memory_limiter,batch] exporters: [debug]",
"oc apply -f - << EOF <OpenTelemetryCollector_custom_resource> EOF",
"oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml",
"oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/red_hat_build_of_opentelemetry/install-otel |
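To exercise the collector deployed above end to end, you can send one test span with the OpenTelemetry Python SDK and watch for it in the debug exporter's output. This is a sketch: the endpoint assumes the Operator exposes the CR named otel as a service called otel-collector in its project and that the gRPC port 4317 is reachable from where you run the script — both are assumptions to adjust for your environment.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Assumed service name/namespace; substitute your collector's endpoint.
exporter = OTLPSpanExporter(
    endpoint="otel-collector.<project_of_opentelemetry_collector_instance>.svc:4317",
    insecure=True,  # the example CR does not configure TLS on the receiver
)

provider = TracerProvider(resource=Resource.create({"service.name": "otel-smoke-test"}))
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

with trace.get_tracer(__name__).start_as_current_span("installation-smoke-test"):
    pass  # the span itself is the payload

provider.shutdown()  # flush; the span should show up in the debug exporter's logs
```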