title | content | commands | url
---|---|---|---|
probe::tty.open | probe::tty.open Name probe::tty.open - Called when a tty is opened Synopsis tty.open Values inode_state the inode state file_mode the file mode inode_number the inode number file_flags the file flags file_name the file name inode_flags the inode flags | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tty-open |
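A minimal, hedged sketch of using the probe::tty.open probe described above from the command line (it assumes stap is installed and that you have the required SystemTap privileges; the output format is illustrative):

```bash
# Print the process name, file name, and file mode each time a tty is opened.
# Run as root (or a member of stapdev/stapusr); stop with Ctrl+C.
stap -e 'probe tty.open {
  printf("%s opened %s (mode 0x%x)\n", execname(), file_name, file_mode)
}'
```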
9.5. Import From Flat File Source | 9.5. Import From Flat File Source You can import metadata from your flat file data sources and create the metamodels required to query your data in minutes. Using the steps below you will define your flat file data source, configure your parsing parameters for the flat file, generate a source model containing the standard Teiid flat file procedure and create view tables containing the SQL defining the column data in your flat file. JBoss Data Virtualization supports Flat Files as data sources. Teiid Designer provides an Import wizard designed to assist in creating the metadata models required to access the data in your flat files. As with Designer's JDBC, Salesforce and WSDL importers, the Flat File importer is based on utilizing a specific Data Tools Connection Profile. The results of the importer will include a source model containing the getTextFiles() procedures supported by JBoss Data Virtualization . The importer will also create a new view model containing a view table for your selected flat file source file. Within the view table will be generated SQL transformation containing the getTextFiles() procedure from your source model as well as the column definitions and parameters required for the Teiid TEXTTABLE() function used to query the data file. You can also choose to update an existing view model instead of creating a new view model. The TEXTTABLE function processes character input to produce tabular output. It supports both fixed and delimited file format parsing. The function itself defines what columns it projects. The TEXTTABLE function is implicitly a nested table and may be correlated to preceding FROM clause entries. TEXTTABLE(expression COLUMNS <COLUMN>, ... [DELIMITER char] [(QUOTE|ESCAPE) char] [HEADER [integer]] [SKIP integer]) AS name Teiid Designer will construct the full SQL statement for each view table in the form: SELECT A.Name, A.Sport, A.Position, A.Team, A.City, A.StateCode, A.AnnualSalary FROM (EXEC PlayerDataSource.getTextFiles('PlayerData.txt')) AS f, TEXTTABLE(f.file COLUMNS Name string, Sport string, Position string, Team string, City string, StateCode string, AnnualSalary string HEADER 2 SKIP 3) AS A To import from your flat file source follow the steps below. In Model Explorer , click File > Import action in the toolbar or select a project, folder or model in the tree and click Import... Select the import option Teiid Designer > File Source (Flat) >> Source and View Model and click > . Figure 9.10. Import from Flat File Source Select Flat File Import Mode and then select either Flat file on local file system or Flat file via remote URL and click > . Select existing or connection profile from the drop-down selector or click New... button to launch the New Connection Profile dialog or Edit... to modify or change an existing connection profile prior to selection. Note that the Flat File Source selection list will be populated with only Flat File connection profiles. After selecting a Connection Profile, the file contents of the folder defined in the connection profile will be displayed in the Available Data Files panel. Select the data file you wish to process. The data from this file, along with your custom import options, will be used to construct a view table containing the required SQL transformation for retrieving your data and returning a result set. Note The path to the data file you select must not contain spaces. 
Lastly, enter a unique source model name in the Source Model Definition section at the bottom of the page, or select an existing source model using the Browse button. Note the Model Status section, which indicates the validity of the model name, whether the model already exists, and whether the model already contains the getTextFiles() procedure. In that case, neither the source model nor the procedure will be generated. When finished with this page, click > . Figure 9.11. Data File Source Selection Page On the next page, enter the JNDI name and click > . The next page, titled Flat File Column Format Definition, requires defining the format of your column data in the file. The options are Character delimited and Fixed width . This page contains a preview of the contents of your file to aid in determining the format. The wizard defaults to displaying the first 20 lines, but you can change that value if you wish. When finished with this page, click > . Figure 9.12. Data File Source Selection Page Character Delimited Option - The primary purpose of this importer is to help you create a view table containing the transformation required to query the user-defined data file. This page presents a number of options you can use to customize the Generated SQL Statement, shown in the bottom panel, for the character delimited option. Specify header options (column names in header, header line number, and first data line number), parse a selected row, and change the character delimiter. If column names are not defined in a file header, or if you wish to modify or create custom columns, you can use the ADD , DELETE , UP , and DOWN buttons to manage the column information in your SQL. When finished with this page, click > . Figure 9.13. Flat File Delimited Columns Options Page To aid in determining whether your parser settings are correct, you can select a data row in the File Contents Preview section and click the Parse Selected Row button. A dialog will be displayed showing the list of columns and the resulting column data. If your column data is not what you expected, adjust your settings accordingly. Figure 9.14. Parse Column Data Dialog Fixed Column Width Option - The primary purpose of this importer is to help you create a view table containing the transformation required to query the user-defined data file. This page presents a number of options you can use to customize the Generated SQL Statement, shown in the bottom panel, for the fixed column width option. Specify header options such as the first data line number, and change the character delimiter. If column names are not defined in a file header, or if you wish to modify or create custom columns, you can use the ADD , DELETE , UP , and DOWN buttons to manage the column information in your SQL. You can also use the cursor position and text length values in the upper left panel to determine the column widths in your data file. When finished with this page, click > . Figure 9.15. Flat File Fixed Columns Width Options Page On the View Model Definition page, select the target folder location where your new view model will be created. You can also select an existing model for your new view tables. Note the Model Status section, which indicates the validity of the model name and whether the model already exists. Lastly, enter a unique, valid view table name. Click Finish to generate your models and finish the wizard. Figure 9.16. View Model Definition Page When your import is finished, your source model will be opened in an editor and display a diagram containing your getTextFiles() procedure. 
Figure 9.17. Generated Flat File Procedures In addition, the view model will be opened in an editor and will show the generated view tables containing the completed SQL required to access the data in your flat file using the getTextFiles procedure above and the Teiid TEXTTABLE() function. The following figure is an example of a generated view table. Figure 9.18. Generated Flat File View Table | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/import_from_flat_file_source |
Preface | Preface The Red Hat build of Cryostat is a container-native implementation of JDK Flight Recorder (JFR) that you can use to securely monitor the Java Virtual Machine (JVM) performance in workloads that run on an OpenShift Container Platform cluster. You can use Cryostat 3.0 to start, stop, retrieve, archive, import, and export JFR data for JVMs inside your containerized applications by using a web console or an HTTP API. Depending on your use case, you can store and analyze your recordings directly on your Red Hat OpenShift cluster by using the built-in tools that Cryostat provides or you can export recordings to an external monitoring application to perform a more in-depth analysis of your recorded data. Important Red Hat build of Cryostat is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/configuring_client-side_notifications_for_cryostat/preface-cryostat |
probe::socket.write_iter | probe::socket.write_iter Name probe::socket.write_iter - Message send via sock_write_iter Synopsis socket.write_iter Values state Socket state value family Protocol family value protocol Protocol value name Name of this probe type Socket type value size Message size in bytes flags Socket flags value Context The message sender Description Fires at the beginning of sending a message on a socket via the sock_write_iter function | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-socket-write-iter |
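A hedged one-probe sketch of watching outgoing socket writes with the probe described above (run with root or SystemTap privileges; the variable names follow the values listed for this probe):

```bash
# Report each message sent through sock_write_iter, with sender, size, and protocol details.
stap -e 'probe socket.write_iter {
  printf("%s (pid %d) wrote %d bytes, family=%d protocol=%d\n",
         execname(), pid(), size, family, protocol)
}'
```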
Chapter 4. Distribution of content | Chapter 4. Distribution of content RHEL 9 for SAP Solutions is installed using ISO images. For more information, see Installing RHEL 9 for SAP Solutions . For information on RHEL for SAP Solutions offerings on Certified Cloud Providers, see SAP Offerings on Certified Cloud Providers . If you need help installing your product, contact Red Hat Customer Service or Technical Support . SAP specific content is available on separate SAP repositories and ISOs and only for SAP-supported architectures (Intel x86_64, IBM Power LE). See How to subscribe SAP HANA systems to the Update Services for SAP Solutions . Performing a standard RHEL 9 installation Package manifest Considerations in adopting RHEL 9 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/9.x_release_notes/distribution-of-content_9.x_release_notes |
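As a hedged illustration of attaching a system to the SAP-specific repositories mentioned above, the commands below enable E4S repositories on Intel x86_64. The repository IDs and the release value are illustrative placeholders; confirm the correct values for your subscription and architecture against the referenced article.

```bash
# Pin the minor release used by Update Services for SAP Solutions (example value).
subscription-manager release --set=9.4

# Enable the SAP repositories (repository IDs shown are illustrative for x86_64).
subscription-manager repos \
  --enable=rhel-9-for-x86_64-sap-solutions-e4s-rpms \
  --enable=rhel-9-for-x86_64-sap-netweaver-e4s-rpms
```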
Introduction to the Migration Toolkit for Applications | Introduction to the Migration Toolkit for Applications Migration Toolkit for Applications 7.1 Introduction to the Migration Toolkit for Applications for managing applications during their migration to OpenShift Container Platform. Red Hat Customer Content Services | [
"when(condition) message(message) tag(tags)"
] | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html-single/introduction_to_the_migration_toolkit_for_applications/index |
Chapter 2. Configuring an Azure Stack Hub account | Chapter 2. Configuring an Azure Stack Hub account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage . 2.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. 
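For example, the DNS zone can be created ahead of installation with the Azure CLI. This is a hedged sketch; the resource group and domain are placeholders, and it assumes the az network dns zone commands are available in your Azure Stack Hub API profile.

```bash
# Create a DNS zone for the cluster's base domain in an existing resource group.
az network dns zone create \
  --resource-group example-openshift-rg \
  --name example.com
```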
To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . 2.3. Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 2.4. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: USD az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. Set the active environment: USD az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: USD az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: USD az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. 
Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 2.5. Next steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure Stack Hub . Install an OpenShift Container Platform cluster on Azure Stack Hub with user-provisioned infrastructure by following Installing a cluster on Azure Stack Hub using ARM templates . | [
"az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1",
"az cloud set -n AzureStackCloud",
"az cloud update --profile 2019-03-01-hybrid",
"az login",
"az account list --refresh",
"[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_azure_stack_hub/installing-azure-stack-hub-account |
4.4. Identifying Contended User-Space Locks | 4.4. Identifying Contended User-Space Locks This section describes how to identify contended user-space locks throughout the system within a specific time period. The ability to identify contended user-space locks can help you investigate hangs that you suspect may be caused by futex contentions. Simply put, a futex contention occurs when multiple processes are trying to access the same region of memory. In some cases, this can result in a deadlock between the processes in contention, thereby appearing as an application hang. To do this, Example 4.34, "futexes.stp" probes the futex system call. Example 4.34. futexes.stp #! /usr/bin/env stap # This script tries to identify contended user-space locks by hooking # into the futex system call. global thread_thislock # short global thread_blocktime # global FUTEX_WAIT = 0 /*, FUTEX_WAKE = 1 */ global lock_waits # long-lived stats on (tid,lock) blockage elapsed time global process_names # long-lived pid-to-execname mapping probe syscall.futex { if (op != FUTEX_WAIT) next # don't care about WAKE event originator t = tid () process_names[pid()] = execname() thread_thislock[t] = $uaddr thread_blocktime[t] = gettimeofday_us() } probe syscall.futex.return { t = tid() ts = thread_blocktime[t] if (ts) { elapsed = gettimeofday_us() - ts lock_waits[pid(), thread_thislock[t]] <<< elapsed delete thread_blocktime[t] delete thread_thislock[t] } } probe end { foreach ([pid+, lock] in lock_waits) printf ("%s[%d] lock %p contended %d times, %d avg us\n", process_names[pid], pid, lock, @count(lock_waits[pid,lock]), @avg(lock_waits[pid,lock])) } Example 4.34, "futexes.stp" needs to be manually stopped; upon exit, it prints the following information: Name and ID of the process responsible for a contention The region of memory it contested How many times the region of memory was contended Average time of contention throughout the probe Example 4.35, "Example 4.34, "futexes.stp" Sample Output" contains an excerpt from the output of Example 4.34, "futexes.stp" upon exiting the script (after approximately 20 seconds). Example 4.35. Example 4.34, "futexes.stp" Sample Output | [
"#! /usr/bin/env stap This script tries to identify contended user-space locks by hooking into the futex system call. global thread_thislock # short global thread_blocktime # global FUTEX_WAIT = 0 /*, FUTEX_WAKE = 1 */ global lock_waits # long-lived stats on (tid,lock) blockage elapsed time global process_names # long-lived pid-to-execname mapping probe syscall.futex { if (op != FUTEX_WAIT) next # don't care about WAKE event originator t = tid () process_names[pid()] = execname() thread_thislock[t] = USDuaddr thread_blocktime[t] = gettimeofday_us() } probe syscall.futex.return { t = tid() ts = thread_blocktime[t] if (ts) { elapsed = gettimeofday_us() - ts lock_waits[pid(), thread_thislock[t]] <<< elapsed delete thread_blocktime[t] delete thread_thislock[t] } } probe end { foreach ([pid+, lock] in lock_waits) printf (\"%s[%d] lock %p contended %d times, %d avg us\\n\", process_names[pid], pid, lock, @count(lock_waits[pid,lock]), @avg(lock_waits[pid,lock])) }",
"[...] automount[2825] lock 0x00bc7784 contended 18 times, 999931 avg us synergyc[3686] lock 0x0861e96c contended 192 times, 101991 avg us synergyc[3758] lock 0x08d98744 contended 192 times, 101990 avg us synergyc[3938] lock 0x0982a8b4 contended 192 times, 101997 avg us [...]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/futexcontentionsect |
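A hedged example of running the futexes.stp script shown above for a bounded period rather than stopping it manually; the -c option makes stap exit (and fire the end probe that prints the report) when the given command finishes:

```bash
# Collect futex contention data system-wide for roughly 20 seconds, then report.
stap futexes.stp -c "sleep 20"
```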
Chapter 3. Deploying OpenShift sandboxed containers on AWS | Chapter 3. Deploying OpenShift sandboxed containers on AWS You can deploy OpenShift sandboxed containers on AWS Cloud Computing Services by using the OpenShift Container Platform web console or the command line interface (CLI). OpenShift sandboxed containers deploys peer pods. The peer pod design circumvents the need for nested virtualization. For more information, see peer pod and Peer pods technical deep dive . Cluster requirements You have installed Red Hat OpenShift Container Platform 4.14 or later on the cluster where you are installing the OpenShift sandboxed containers Operator. Your cluster has at least one worker node. 3.1. Peer pod resource requirements You must ensure that your cluster has sufficient resources. Peer pod virtual machines (VMs) require resources in two locations: The worker node. The worker node stores metadata, Kata shim resources ( containerd-shim-kata-v2 ), remote-hypervisor resources ( cloud-api-adaptor ), and the tunnel setup between the worker nodes and the peer pod VM. The cloud instance. This is the actual peer pod VM running in the cloud. The CPU and memory resources used in the Kubernetes worker node are handled by the pod overhead included in the RuntimeClass ( kata-remote ) definition used for creating peer pods. The total number of peer pod VMs running in the cloud is defined as Kubernetes Node extended resources. This limit is per node and is set by the limit attribute in the peerpodConfig custom resource (CR). The peerpodConfig CR, named peerpodconfig-openshift , is created when you create the kataConfig CR and enable peer pods, and is located in the openshift-sandboxed-containers-operator namespace. The following peerpodConfig CR example displays the default spec values: apiVersion: confidentialcontainers.org/v1alpha1 kind: PeerPodConfig metadata: name: peerpodconfig-openshift namespace: openshift-sandboxed-containers-operator spec: cloudSecretName: peer-pods-secret configMapName: peer-pods-cm limit: "10" 1 nodeSelector: node-role.kubernetes.io/kata-oc: "" 1 The default limit is 10 VMs per node. The extended resource is named kata.peerpods.io/vm , and enables the Kubernetes scheduler to handle capacity tracking and accounting. You can edit the limit per node based on the requirements for your environment after you install the OpenShift sandboxed containers Operator. A mutating webhook adds the extended resource kata.peerpods.io/vm to the pod specification. It also removes any resource-specific entries from the pod specification, if present. This enables the Kubernetes scheduler to account for these extended resources, ensuring the peer pod is only scheduled when resources are available. The mutating webhook modifies a Kubernetes pod as follows: The mutating webhook checks the pod for the expected RuntimeClassName value, specified in the TARGET_RUNTIME_CLASS environment variable. If the value in the pod specification does not match the value in the TARGET_RUNTIME_CLASS , the webhook exits without modifying the pod. If the RuntimeClassName values match, the webhook makes the following changes to the pod spec: The webhook removes every resource specification from the resources field of all containers and init containers in the pod. The webhook adds the extended resource ( kata.peerpods.io/vm ) to the spec by modifying the resources field of the first container in the pod. The extended resource kata.peerpods.io/vm is used by the Kubernetes scheduler for accounting purposes. 
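A hedged way to confirm the per-node peer pod capacity that the limit attribute produces is to read the extended resource directly from the node object; the node name is a placeholder:

```bash
# Show the kata.peerpods.io/vm extended resource advertised by a worker node.
oc get node <worker_node_name> \
  -o jsonpath='{.status.allocatable.kata\.peerpods\.io/vm}{"\n"}'
```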
Note The mutating webhook excludes specific system namespaces in OpenShift Container Platform from mutation. If a peer pod is created in those system namespaces, then resource accounting using Kubernetes extended resources does not work unless the pod spec includes the extended resource. As a best practice, define a cluster-wide policy to only allow peer pod creation in specific namespaces. 3.2. Deploying OpenShift sandboxed containers by using the web console You can deploy OpenShift sandboxed containers on AWS by using the OpenShift Container Platform web console to perform the following tasks: Install the OpenShift sandboxed containers Operator. Enable ports 15150 and 9000 to allow internal communication with peer pods. Create the peer pods secret. Create the peer pods config map. Create the KataConfig custom resource. Configure the OpenShift sandboxed containers workload objects. 3.2.1. Installing the OpenShift sandboxed containers Operator You can install the OpenShift sandboxed containers Operator by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the web console, navigate to Operators OperatorHub . In the Filter by keyword field, type OpenShift sandboxed containers . Select the OpenShift sandboxed containers Operator tile and click Install . On the Install Operator page, select stable from the list of available Update Channel options. Verify that Operator recommended Namespace is selected for Installed Namespace . This installs the Operator in the mandatory openshift-sandboxed-containers-operator namespace. If this namespace does not yet exist, it is automatically created. Note Attempting to install the OpenShift sandboxed containers Operator in a namespace other than openshift-sandboxed-containers-operator causes the installation to fail. Verify that Automatic is selected for Approval Strategy . Automatic is the default value, and enables automatic updates to OpenShift sandboxed containers when a new z-stream release is available. Click Install . Navigate to Operators Installed Operators to verify that the Operator is installed. Additional resources Using Operator Lifecycle Manager on restricted networks . Configuring proxy support in Operator Lifecycle Manager for disconnected environments. 3.2.2. Enabling ports for AWS You must enable ports 15150 and 9000 to allow internal communication with peer pods running on AWS. Prerequisites You have installed the OpenShift sandboxed containers Operator. You have installed the AWS command line tool. You have access to the cluster as a user with the cluster-admin role. 
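Before starting the procedure below, you can optionally confirm from a terminal that the Operator installed in the previous section reached the Succeeded phase; a hedged check:

```bash
# List the ClusterServiceVersion created by the web-console installation.
oc get csv -n openshift-sandboxed-containers-operator
```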
Procedure Log in to your OpenShift Container Platform cluster and retrieve the instance ID: USD INSTANCE_ID=USD(oc get nodes -l 'node-role.kubernetes.io/worker' \ -o jsonpath='{.items[0].spec.providerID}' | sed 's#[^ ]*/##g') Retrieve the AWS region: USD AWS_REGION=USD(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.aws.region}') Retrieve the security group IDs and store them in an array: USD AWS_SG_IDS=(USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} \ --query 'Reservations[*].Instances[*].SecurityGroups[*].GroupId' \ --output text --region USDAWS_REGION)) For each security group ID, authorize the peer pods shim to access kata-agent communication, and set up the peer pods tunnel: USD for AWS_SG_ID in "USD{AWS_SG_IDS[@]}"; do \ aws ec2 authorize-security-group-ingress --group-id USDAWS_SG_ID --protocol tcp --port 15150 --source-group USDAWS_SG_ID --region USDAWS_REGION \ aws ec2 authorize-security-group-ingress --group-id USDAWS_SG_ID --protocol tcp --port 9000 --source-group USDAWS_SG_ID --region USDAWS_REGION \ done The ports are now enabled. 3.2.3. Creating the peer pods secret You must create the peer pods secret for OpenShift sandboxed containers. The secret stores credentials for creating the pod virtual machine (VM) image and peer pod instances. By default, the OpenShift sandboxed containers Operator creates the secret based on the credentials used to create the cluster. However, you can manually create a secret that uses different credentials. Prerequisites You have the following values generated by using the AWS console: AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY Procedure In the OpenShift Container Platform web console, navigate to Operators Installed Operators . Click the OpenShift sandboxed containers Operator tile. Click the Import icon ( + ) on the top right corner. In the Import YAML window, paste the following YAML manifest: apiVersion: v1 kind: Secret metadata: name: peer-pods-secret namespace: openshift-sandboxed-containers-operator type: Opaque stringData: AWS_ACCESS_KEY_ID: "<aws_access_key>" 1 AWS_SECRET_ACCESS_KEY: "<aws_secret_access_key>" 2 1 Specify the AWS_ACCESS_KEY_ID value. 2 Specify the AWS_SECRET_ACCESS_KEY value. Click Save to apply the changes. Navigate to Workloads Secrets to verify the peer pods secret. 3.2.4. Creating the peer pods config map You must create the peer pods config map for OpenShift sandboxed containers. Prerequisites You have your Amazon Machine Image (AMI) ID if you are not using the default AMI ID based on your cluster credentials. Procedure Obtain the following values from your AWS instance: Retrieve and record the instance ID: USD INSTANCE_ID=USD(oc get nodes -l 'node-role.kubernetes.io/worker' -o jsonpath='{.items[0].spec.providerID}' | sed 's#[^ ]*/##g') This is used to retrieve other values for the secret object. 
Retrieve and record the AWS region: USD AWS_REGION=USD(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.aws.region}') && echo "AWS_REGION: \"USDAWS_REGION\"" Retrieve and record the AWS subnet ID: USD AWS_SUBNET_ID=USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].SubnetId' --region USD{AWS_REGION} --output text) && echo "AWS_SUBNET_ID: \"USDAWS_SUBNET_ID\"" Retrieve and record the AWS VPC ID: USD AWS_VPC_ID=USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].VpcId' --region USD{AWS_REGION} --output text) && echo "AWS_VPC_ID: \"USDAWS_VPC_ID\"" Retrieve and record the AWS security group IDs: USD AWS_SG_IDS=USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].SecurityGroups[*].GroupId' --region USDAWS_REGION --output json | jq -r '.[][][]' | paste -sd ",") && echo "AWS_SG_IDS: \"USDAWS_SG_IDS\"" In the OpenShift Container Platform web console, navigate to Operators Installed Operators . Select the OpenShift sandboxed containers Operator from the list of operators. Click the Import icon ( + ) in the top right corner. In the Import YAML window, paste the following YAML manifest: apiVersion: v1 kind: ConfigMap metadata: name: peer-pods-cm namespace: openshift-sandboxed-containers-operator data: CLOUD_PROVIDER: "aws" VXLAN_PORT: "9000" PODVM_INSTANCE_TYPE: "t3.medium" 1 PODVM_INSTANCE_TYPES: "t2.small,t2.medium,t3.large" 2 PROXY_TIMEOUT: "5m" PODVM_AMI_ID: "<podvm_ami_id>" 3 AWS_REGION: "<aws_region>" 4 AWS_SUBNET_ID: "<aws_subnet_id>" 5 AWS_VPC_ID: "<aws_vpc_id>" 6 AWS_SG_IDS: "<aws_sg_ids>" 7 DISABLECVM: "true" 1 Defines the default instance type that is used when a type is not defined in the workload. 2 Lists all of the instance types you can specify when creating the pod. This allows you to define smaller instance types for workloads that need less memory and fewer CPUs or larger instance types for larger workloads. 3 Optional: By default, this value is populated when you run the KataConfig CR, using an AMI ID based on your cluster credentials. If you create your own AMI, specify the correct AMI ID. 4 Specify the AWS_REGION value you retrieved. 5 Specify the AWS_SUBNET_ID value you retrieved. 6 Specify the AWS_VPC_ID value you retrieved. 7 Specify the AWS_SG_IDS value you retrieved. Click Save to apply the changes. Navigate to Workloads ConfigMaps to view the new config map. 3.2.5. Creating the KataConfig custom resource You must create the KataConfig custom resource (CR) to install kata-remote as a RuntimeClass on your worker nodes. The kata-remote runtime class is installed on all worker nodes by default. If you want to install kata-remote on specific nodes, you can add labels to those nodes and then define the label in the KataConfig CR. OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime. Important Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors might increase the reboot time: A larger OpenShift Container Platform deployment with a greater number of worker nodes. Activation of the BIOS and Diagnostics utility. Deployment on a hard disk drive rather than an SSD. Deployment on physical nodes such as bare metal, rather than on virtual nodes. A slow CPU and network. Prerequisites You have access to the cluster as a user with the cluster-admin role. 
Optional: You have installed the Node Feature Discovery Operator if you want to enable node eligibility checks. Procedure In the OpenShift Container Platform web console, navigate to Operators Installed Operators . Select the OpenShift sandboxed containers Operator. On the KataConfig tab, click Create KataConfig . Enter the following details: Name : Optional: The default name is example-kataconfig . Labels : Optional: Enter any relevant, identifying attributes to the KataConfig resource. Each label represents a key-value pair. enablePeerPods : Select for public cloud, IBM Z(R), and IBM(R) LinuxONE deployments. kataConfigPoolSelector . Optional: To install kata-remote on selected nodes, add a match expression for the labels on the selected nodes: Expand the kataConfigPoolSelector area. In the kataConfigPoolSelector area, expand matchExpressions . This is a list of label selector requirements. Click Add matchExpressions . In the Key field, enter the label key the selector applies to. In the Operator field, enter the key's relationship to the label values. Valid operators are In , NotIn , Exists , and DoesNotExist . Expand the Values area and then click Add value . In the Value field, enter true or false for key label value. logLevel : Define the level of log data retrieved for nodes with the kata-remote runtime class. Click Create . The KataConfig CR is created and installs the kata-remote runtime class on the worker nodes. Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation. Verification On the KataConfig tab, click the KataConfig CR to view its details. Click the YAML tab to view the status stanza. The status stanza contains the conditions and kataNodes keys. The value of status.kataNodes is an array of nodes, each of which lists nodes in a particular state of kata-remote installation. A message appears each time there is an update. Click Reload to refresh the YAML. When all workers in the status.kataNodes array display the values installed and conditions.InProgress: False with no specified reason, the kata-remote is installed on the cluster. Additional resources KataConfig status messages Verifying the pod VM image After kata-remote is installed on your cluster, the OpenShift sandboxed containers Operator creates a pod VM image, which is used to create peer pods. This process can take a long time because the image is created on the cloud instance. You can verify that the pod VM image was created successfully by checking the config map that you created for the cloud provider. Procedure Navigate to Workloads ConfigMaps . Click the provider config map to view its details. Click the YAML tab. Check the status stanza of the YAML file. If the PODVM_AMI_ID parameter is populated, the pod VM image was created successfully. Troubleshooting Retrieve the events log by running the following command: USD oc get events -n openshift-sandboxed-containers-operator --field-selector involvedObject.name=osc-podvm-image-creation Retrieve the job log by running the following command: USD oc logs -n openshift-sandboxed-containers-operator jobs/osc-podvm-image-creation If you cannot resolve the issue, submit a Red Hat Support case and attach the output of both logs. 3.2.6. 
Configuring workload objects You must configure OpenShift sandboxed containers workload objects by setting kata-remote as the runtime class for the following pod-templated objects: Pod objects ReplicaSet objects ReplicationController objects StatefulSet objects Deployment objects DeploymentConfig objects Important Do not deploy workloads in an Operator namespace. Create a dedicated namespace for these resources. You can define whether the workload should be deployed using the default instance type, which you defined in the config map, by adding an annotation to the YAML file. If you do not want to define the instance type manually, you can add an annotation to use an automatic instance type, based on the memory available. Prerequisites You have created the KataConfig custom resource (CR). Procedure In the OpenShift Container Platform web console, navigate to Workloads workload type, for example, Pods . On the workload type page, click an object to view its details. Click the YAML tab. Add spec.runtimeClassName: kata-remote to the manifest of each pod-templated workload object as in the following example: apiVersion: v1 kind: <object> # ... spec: runtimeClassName: kata-remote # ... Add an annotation to the pod-templated object to use a manually defined instance type or an automatic instance type: To use a manually defined instance type, add the following annotation: apiVersion: v1 kind: <object> metadata: annotations: io.katacontainers.config.hypervisor.machine_type: "t3.medium" 1 # ... 1 Specify the instance type that you defined in the config map. To use an automatic instance type, add the following annotations: apiVersion: v1 kind: <Pod> metadata: annotations: io.katacontainers.config.hypervisor.default_vcpus: <vcpus> io.katacontainers.config.hypervisor.default_memory: <memory> # ... Define the amount of memory available for the workload to use. The workload will run on an automatic instance type based on the amount of memory available. Click Save to apply the changes. OpenShift Container Platform creates the workload object and begins scheduling it. Verification Inspect the spec.runtimeClassName field of a pod-templated object. If the value is kata-remote , then the workload is running on OpenShift sandboxed containers, using peer pods. 3.3. Deploying OpenShift sandboxed containers by using the command line You can deploy OpenShift sandboxed containers on AWS by using the command line interface (CLI) to perform the following tasks: Install the OpenShift sandboxed containers Operator. Optional: Change the number of virtual machines running on each worker node. Enable ports 15150 and 9000 to allow internal communication with peer pods. Create the peer pods secret. Create the peer pods config map. Create the KataConfig custom resource. Configure the OpenShift sandboxed containers workload objects. 3.3.1. Installing the OpenShift sandboxed containers Operator You can install the OpenShift sandboxed containers Operator by using the CLI. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. 
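Before moving on to the CLI-based procedure, here is a hedged end-to-end sketch of the workload configuration covered above: it creates a dedicated namespace, as the important note recommends, and applies a minimal pod that requests the kata-remote runtime class. The namespace, pod name, and container image are placeholders.

```bash
# Create a dedicated namespace for sandboxed workloads (not the Operator namespace).
oc new-project sandboxed-workloads

# Apply a minimal pod that runs as a peer pod via the kata-remote runtime class.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-kata-remote
  namespace: sandboxed-workloads
spec:
  runtimeClassName: kata-remote
  containers:
  - name: hello
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
EOF
```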
Procedure Create an osc-namespace.yaml manifest file: apiVersion: v1 kind: Namespace metadata: name: openshift-sandboxed-containers-operator Create the namespace by running the following command: USD oc apply -f osc-namespace.yaml Create an osc-operatorgroup.yaml manifest file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sandboxed-containers-operator-group namespace: openshift-sandboxed-containers-operator spec: targetNamespaces: - openshift-sandboxed-containers-operator Create the operator group by running the following command: USD oc apply -f osc-operatorgroup.yaml Create an osc-subscription.yaml manifest file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sandboxed-containers-operator namespace: openshift-sandboxed-containers-operator spec: channel: stable installPlanApproval: Automatic name: sandboxed-containers-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: sandboxed-containers-operator.v1.8.1 Create the subscription by running the following command: USD oc apply -f osc-subscription.yaml Verify that the Operator is correctly installed by running the following command: USD oc get csv -n openshift-sandboxed-containers-operator This command can take several minutes to complete. Watch the process by running the following command: USD watch oc get csv -n openshift-sandboxed-containers-operator Example output NAME DISPLAY VERSION REPLACES PHASE openshift-sandboxed-containers openshift-sandboxed-containers-operator 1.8.1 1.7.0 Succeeded Additional resources Using Operator Lifecycle Manager on restricted networks . Configuring proxy support in Operator Lifecycle Manager for disconnected environments. 3.3.2. Modifying the number of peer pod VMs per node You can change the limit of peer pod virtual machines (VMs) per node by editing the peerpodConfig custom resource (CR). Procedure Check the current limit by running the following command: USD oc get peerpodconfig peerpodconfig-openshift -n openshift-sandboxed-containers-operator \ -o jsonpath='{.spec.limit}{"\n"}' Modify the limit attribute of the peerpodConfig CR by running the following command: USD oc patch peerpodconfig peerpodconfig-openshift -n openshift-sandboxed-containers-operator \ --type merge --patch '{"spec":{"limit":"<value>"}}' 1 1 Replace <value> with the limit you want to define. 3.3.3. Enabling ports for AWS You must enable ports 15150 and 9000 to allow internal communication with peer pods running on AWS. Prerequisites You have installed the OpenShift sandboxed containers Operator. You have installed the AWS command line tool. You have access to the cluster as a user with the cluster-admin role. 
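After changing the limit with the oc patch command above, a hedged way to confirm that a worker node advertises the new capacity before continuing (the node name is a placeholder):

```bash
# Capacity and allocatable values for the peer pod extended resource on one node.
oc describe node <worker_node_name> | grep 'kata.peerpods.io/vm'
```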
Procedure Log in to your OpenShift Container Platform cluster and retrieve the instance ID: USD INSTANCE_ID=USD(oc get nodes -l 'node-role.kubernetes.io/worker' \ -o jsonpath='{.items[0].spec.providerID}' | sed 's#[^ ]*/##g') Retrieve the AWS region: USD AWS_REGION=USD(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.aws.region}') Retrieve the security group IDs and store them in an array: USD AWS_SG_IDS=(USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} \ --query 'Reservations[*].Instances[*].SecurityGroups[*].GroupId' \ --output text --region USDAWS_REGION)) For each security group ID, authorize the peer pods shim to access kata-agent communication, and set up the peer pods tunnel: USD for AWS_SG_ID in "USD{AWS_SG_IDS[@]}"; do \ aws ec2 authorize-security-group-ingress --group-id USDAWS_SG_ID --protocol tcp --port 15150 --source-group USDAWS_SG_ID --region USDAWS_REGION \ aws ec2 authorize-security-group-ingress --group-id USDAWS_SG_ID --protocol tcp --port 9000 --source-group USDAWS_SG_ID --region USDAWS_REGION \ done The ports are now enabled. 3.3.4. Creating the peer pods secret You must create the peer pods secret for OpenShift sandboxed containers. The secret stores credentials for creating the pod virtual machine (VM) image and peer pod instances. By default, the OpenShift sandboxed containers Operator creates the secret based on the credentials used to create the cluster. However, you can manually create a secret that uses different credentials. Prerequisites You have the following values generated by using the AWS console: AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY Procedure Create a peer-pods-secret.yaml manifest file according to the following example: apiVersion: v1 kind: Secret metadata: name: peer-pods-secret namespace: openshift-sandboxed-containers-operator type: Opaque stringData: AWS_ACCESS_KEY_ID: "<aws_access_key>" 1 AWS_SECRET_ACCESS_KEY: "<aws_secret_access_key>" 2 1 Specify the AWS_ACCESS_KEY_ID value. 2 Specify the AWS_SECRET_ACCESS_KEY value. Create the secret by running the following command: USD oc apply -f peer-pods-secret.yaml 3.3.5. Creating the peer pods config map You must create the peer pods config map for OpenShift sandboxed containers. Prerequisites You have your Amazon Machine Image (AMI) ID if you are not using the default AMI ID based on your cluster credentials. Procedure Obtain the following values from your AWS instance: Retrieve and record the instance ID: USD INSTANCE_ID=USD(oc get nodes -l 'node-role.kubernetes.io/worker' -o jsonpath='{.items[0].spec.providerID}' | sed 's#[^ ]*/##g') This is used to retrieve other values for the secret object. 
Retrieve and record the AWS region: USD AWS_REGION=USD(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.aws.region}') && echo "AWS_REGION: \"USDAWS_REGION\"" Retrieve and record the AWS subnet ID: USD AWS_SUBNET_ID=USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].SubnetId' --region USD{AWS_REGION} --output text) && echo "AWS_SUBNET_ID: \"USDAWS_SUBNET_ID\"" Retrieve and record the AWS VPC ID: USD AWS_VPC_ID=USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].VpcId' --region USD{AWS_REGION} --output text) && echo "AWS_VPC_ID: \"USDAWS_VPC_ID\"" Retrieve and record the AWS security group IDs: USD AWS_SG_IDS=USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].SecurityGroups[*].GroupId' --region USDAWS_REGION --output json | jq -r '.[][][]' | paste -sd ",") && echo "AWS_SG_IDS: \"USDAWS_SG_IDS\"" Create a peer-pods-cm.yaml manifest file according to the following example: apiVersion: v1 kind: ConfigMap metadata: name: peer-pods-cm namespace: openshift-sandboxed-containers-operator data: CLOUD_PROVIDER: "aws" VXLAN_PORT: "9000" PODVM_INSTANCE_TYPE: "t3.medium" 1 PODVM_INSTANCE_TYPES: "t2.small,t2.medium,t3.large" 2 PROXY_TIMEOUT: "5m" PODVM_AMI_ID: "<podvm_ami_id>" 3 AWS_REGION: "<aws_region>" 4 AWS_SUBNET_ID: "<aws_subnet_id>" 5 AWS_VPC_ID: "<aws_vpc_id>" 6 AWS_SG_IDS: "<aws_sg_ids>" 7 DISABLECVM: "true" 1 Defines the default instance type that is used when a type is not defined in the workload. 2 Lists all of the instance types you can specify when creating the pod. This allows you to define smaller instance types for workloads that need less memory and fewer CPUs or larger instance types for larger workloads. 3 Optional: By default, this value is populated when you run the KataConfig CR, using an AMI ID based on your cluster credentials. If you create your own AMI, specify the correct AMI ID. 4 Specify the AWS_REGION value you retrieved. 5 Specify the AWS_SUBNET_ID value you retrieved. 6 Specify the AWS_VPC_ID value you retrieved. 7 Specify the AWS_SG_IDS value you retrieved. Create the config map by running the following command: USD oc apply -f peer-pods-cm.yaml 3.3.6. Creating the KataConfig custom resource You must create the KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes. Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to do the following: Create a RuntimeClass CR named kata-remote with a default configuration. This enables users to configure workloads to use kata-remote as the runtime by referencing the CR in the RuntimeClassName field. This CR also specifies the resource overhead for the runtime. OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime. Important Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. Factors that impede reboot time are as follows: A larger OpenShift Container Platform deployment with a greater number of worker nodes. Activation of the BIOS and Diagnostics utility. Deployment on a hard disk drive rather than an SSD. Deployment on physical nodes such as bare metal, rather than on virtual nodes. A slow CPU and network. Prerequisites You have access to the cluster as a user with the cluster-admin role. 
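Before creating the KataConfig CR in the procedure below, it can help to sanity-check a few of the values written to the peer pods config map; a hedged example that reads selected keys back:

```bash
# Read selected keys back from the peer pods config map.
for key in CLOUD_PROVIDER AWS_REGION AWS_VPC_ID AWS_SG_IDS; do
  printf '%s=%s\n' "$key" "$(oc get configmap peer-pods-cm \
    -n openshift-sandboxed-containers-operator \
    -o jsonpath="{.data.$key}")"
done
```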
Procedure Create an example-kataconfig.yaml manifest file according to the following example: apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: example-kataconfig spec: enablePeerPods: true logLevel: info # kataConfigPoolSelector: # matchLabels: # <label_key>: '<label_value>' 1 1 Optional: If you have applied node labels to install kata-remote on specific nodes, specify the key and value, for example, osc: 'true' . Create the KataConfig CR by running the following command: USD oc apply -f example-kataconfig.yaml The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes. Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation. Monitor the installation progress by running the following command: USD watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p" When the status of all workers under kataNodes is installed and the condition InProgress is False without specifying a reason, the kata-remote is installed on the cluster. Verify the daemon set by running the following command: USD oc get -n openshift-sandboxed-containers-operator ds/peerpodconfig-ctrl-caa-daemon Verify the runtime classes by running the following command: USD oc get runtimeclass Example output NAME HANDLER AGE kata kata 152m kata-remote kata-remote 152m Verifying the pod VM image After kata-remote is installed on your cluster, the OpenShift sandboxed containers Operator creates a pod VM image, which is used to create peer pods. This process can take a long time because the image is created on the cloud instance. You can verify that the pod VM image was created successfully by checking the config map that you created for the cloud provider. Procedure Obtain the config map you created for the peer pods: USD oc get configmap peer-pods-cm -n openshift-sandboxed-containers-operator -o yaml Check the status stanza of the YAML file. If the PODVM_AMI_ID parameter is populated, the pod VM image was created successfully. Troubleshooting Retrieve the events log by running the following command: USD oc get events -n openshift-sandboxed-containers-operator --field-selector involvedObject.name=osc-podvm-image-creation Retrieve the job log by running the following command: USD oc logs -n openshift-sandboxed-containers-operator jobs/osc-podvm-image-creation If you cannot resolve the issue, submit a Red Hat Support case and attach the output of both logs. 3.3.7. Configuring workload objects You must configure OpenShift sandboxed containers workload objects by setting kata-remote as the runtime class for the following pod-templated objects: Pod objects ReplicaSet objects ReplicationController objects StatefulSet objects Deployment objects DeploymentConfig objects Important Do not deploy workloads in an Operator namespace. Create a dedicated namespace for these resources. You can define whether the workload should be deployed using the default instance type, which you defined in the config map, by adding an annotation to the YAML file. If you do not want to define the instance type manually, you can add an annotation to use an automatic instance type, based on the memory available. Prerequisites You have created the KataConfig custom resource (CR). Procedure Add spec.runtimeClassName: kata-remote to the manifest of each pod-templated workload object as in the following example: apiVersion: v1 kind: <object> # ... spec: runtimeClassName: kata-remote # ... 
Add an annotation to the pod-templated object to use a manually defined instance type or an automatic instance type: To use a manually defined instance type, add the following annotation: apiVersion: v1 kind: <object> metadata: annotations: io.katacontainers.config.hypervisor.machine_type: "t3.medium" 1 # ... 1 Specify the instance type that you defined in the config map. To use an automatic instance type, add the following annotations: apiVersion: v1 kind: <Pod> metadata: annotations: io.katacontainers.config.hypervisor.default_vcpus: <vcpus> io.katacontainers.config.hypervisor.default_memory: <memory> # ... Define the amount of memory available for the workload to use. The workload will run on an automatic instance type based on the amount of memory available. Apply the changes to the workload object by running the following command: USD oc apply -f <object.yaml> OpenShift Container Platform creates the workload object and begins scheduling it. Verification Inspect the spec.runtimeClassName field of a pod-templated object. If the value is kata-remote , then the workload is running on OpenShift sandboxed containers, using peer pods. | [
"apiVersion: confidentialcontainers.org/v1alpha1 kind: PeerPodConfig metadata: name: peerpodconfig-openshift namespace: openshift-sandboxed-containers-operator spec: cloudSecretName: peer-pods-secret configMapName: peer-pods-cm limit: \"10\" 1 nodeSelector: node-role.kubernetes.io/kata-oc: \"\"",
"INSTANCE_ID=USD(oc get nodes -l 'node-role.kubernetes.io/worker' -o jsonpath='{.items[0].spec.providerID}' | sed 's#[^ ]*/##g')",
"AWS_REGION=USD(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.aws.region}')",
"AWS_SG_IDS=(USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].SecurityGroups[*].GroupId' --output text --region USDAWS_REGION))",
"for AWS_SG_ID in \"USD{AWS_SG_IDS[@]}\"; do aws ec2 authorize-security-group-ingress --group-id USDAWS_SG_ID --protocol tcp --port 15150 --source-group USDAWS_SG_ID --region USDAWS_REGION aws ec2 authorize-security-group-ingress --group-id USDAWS_SG_ID --protocol tcp --port 9000 --source-group USDAWS_SG_ID --region USDAWS_REGION done",
"apiVersion: v1 kind: Secret metadata: name: peer-pods-secret namespace: openshift-sandboxed-containers-operator type: Opaque stringData: AWS_ACCESS_KEY_ID: \"<aws_access_key>\" 1 AWS_SECRET_ACCESS_KEY: \"<aws_secret_access_key>\" 2",
"INSTANCE_ID=USD(oc get nodes -l 'node-role.kubernetes.io/worker' -o jsonpath='{.items[0].spec.providerID}' | sed 's#[^ ]*/##g')",
"AWS_REGION=USD(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.aws.region}') && echo \"AWS_REGION: \\\"USDAWS_REGION\\\"\"",
"AWS_SUBNET_ID=USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].SubnetId' --region USD{AWS_REGION} --output text) && echo \"AWS_SUBNET_ID: \\\"USDAWS_SUBNET_ID\\\"\"",
"AWS_VPC_ID=USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].VpcId' --region USD{AWS_REGION} --output text) && echo \"AWS_VPC_ID: \\\"USDAWS_VPC_ID\\\"\"",
"AWS_SG_IDS=USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].SecurityGroups[*].GroupId' --region USDAWS_REGION --output json | jq -r '.[][][]' | paste -sd \",\") && echo \"AWS_SG_IDS: \\\"USDAWS_SG_IDS\\\"\"",
"apiVersion: v1 kind: ConfigMap metadata: name: peer-pods-cm namespace: openshift-sandboxed-containers-operator data: CLOUD_PROVIDER: \"aws\" VXLAN_PORT: \"9000\" PODVM_INSTANCE_TYPE: \"t3.medium\" 1 PODVM_INSTANCE_TYPES: \"t2.small,t2.medium,t3.large\" 2 PROXY_TIMEOUT: \"5m\" PODVM_AMI_ID: \"<podvm_ami_id>\" 3 AWS_REGION: \"<aws_region>\" 4 AWS_SUBNET_ID: \"<aws_subnet_id>\" 5 AWS_VPC_ID: \"<aws_vpc_id>\" 6 AWS_SG_IDS: \"<aws_sg_ids>\" 7 DISABLECVM: \"true\"",
"oc get events -n openshift-sandboxed-containers-operator --field-selector involvedObject.name=osc-podvm-image-creation",
"oc logs -n openshift-sandboxed-containers-operator jobs/osc-podvm-image-creation",
"apiVersion: v1 kind: <object> spec: runtimeClassName: kata-remote",
"apiVersion: v1 kind: <object> metadata: annotations: io.katacontainers.config.hypervisor.machine_type: \"t3.medium\" 1",
"apiVersion: v1 kind: <Pod> metadata: annotations: io.katacontainers.config.hypervisor.default_vcpus: <vcpus> io.katacontainers.config.hypervisor.default_memory: <memory>",
"apiVersion: v1 kind: Namespace metadata: name: openshift-sandboxed-containers-operator",
"oc apply -f osc-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sandboxed-containers-operator-group namespace: openshift-sandboxed-containers-operator spec: targetNamespaces: - openshift-sandboxed-containers-operator",
"oc apply -f osc-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sandboxed-containers-operator namespace: openshift-sandboxed-containers-operator spec: channel: stable installPlanApproval: Automatic name: sandboxed-containers-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: sandboxed-containers-operator.v1.8.1",
"oc apply -f osc-subscription.yaml",
"oc get csv -n openshift-sandboxed-containers-operator",
"watch oc get csv -n openshift-sandboxed-containers-operator",
"NAME DISPLAY VERSION REPLACES PHASE openshift-sandboxed-containers openshift-sandboxed-containers-operator 1.8.1 1.7.0 Succeeded",
"oc get peerpodconfig peerpodconfig-openshift -n openshift-sandboxed-containers-operator -o jsonpath='{.spec.limit}{\"\\n\"}'",
"oc patch peerpodconfig peerpodconfig-openshift -n openshift-sandboxed-containers-operator --type merge --patch '{\"spec\":{\"limit\":\"<value>\"}}' 1",
"INSTANCE_ID=USD(oc get nodes -l 'node-role.kubernetes.io/worker' -o jsonpath='{.items[0].spec.providerID}' | sed 's#[^ ]*/##g')",
"AWS_REGION=USD(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.aws.region}')",
"AWS_SG_IDS=(USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].SecurityGroups[*].GroupId' --output text --region USDAWS_REGION))",
"for AWS_SG_ID in \"USD{AWS_SG_IDS[@]}\"; do aws ec2 authorize-security-group-ingress --group-id USDAWS_SG_ID --protocol tcp --port 15150 --source-group USDAWS_SG_ID --region USDAWS_REGION aws ec2 authorize-security-group-ingress --group-id USDAWS_SG_ID --protocol tcp --port 9000 --source-group USDAWS_SG_ID --region USDAWS_REGION done",
"apiVersion: v1 kind: Secret metadata: name: peer-pods-secret namespace: openshift-sandboxed-containers-operator type: Opaque stringData: AWS_ACCESS_KEY_ID: \"<aws_access_key>\" 1 AWS_SECRET_ACCESS_KEY: \"<aws_secret_access_key>\" 2",
"oc apply -f peer-pods-secret.yaml",
"INSTANCE_ID=USD(oc get nodes -l 'node-role.kubernetes.io/worker' -o jsonpath='{.items[0].spec.providerID}' | sed 's#[^ ]*/##g')",
"AWS_REGION=USD(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.aws.region}') && echo \"AWS_REGION: \\\"USDAWS_REGION\\\"\"",
"AWS_SUBNET_ID=USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].SubnetId' --region USD{AWS_REGION} --output text) && echo \"AWS_SUBNET_ID: \\\"USDAWS_SUBNET_ID\\\"\"",
"AWS_VPC_ID=USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].VpcId' --region USD{AWS_REGION} --output text) && echo \"AWS_VPC_ID: \\\"USDAWS_VPC_ID\\\"\"",
"AWS_SG_IDS=USD(aws ec2 describe-instances --instance-ids USD{INSTANCE_ID} --query 'Reservations[*].Instances[*].SecurityGroups[*].GroupId' --region USDAWS_REGION --output json | jq -r '.[][][]' | paste -sd \",\") && echo \"AWS_SG_IDS: \\\"USDAWS_SG_IDS\\\"\"",
"apiVersion: v1 kind: ConfigMap metadata: name: peer-pods-cm namespace: openshift-sandboxed-containers-operator data: CLOUD_PROVIDER: \"aws\" VXLAN_PORT: \"9000\" PODVM_INSTANCE_TYPE: \"t3.medium\" 1 PODVM_INSTANCE_TYPES: \"t2.small,t2.medium,t3.large\" 2 PROXY_TIMEOUT: \"5m\" PODVM_AMI_ID: \"<podvm_ami_id>\" 3 AWS_REGION: \"<aws_region>\" 4 AWS_SUBNET_ID: \"<aws_subnet_id>\" 5 AWS_VPC_ID: \"<aws_vpc_id>\" 6 AWS_SG_IDS: \"<aws_sg_ids>\" 7 DISABLECVM: \"true\"",
"oc apply -f peer-pods-cm.yaml",
"apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: example-kataconfig spec: enablePeerPods: true logLevel: info kataConfigPoolSelector: matchLabels: <label_key>: '<label_value>' 1",
"oc apply -f example-kataconfig.yaml",
"watch \"oc describe kataconfig | sed -n /^Status:/,/^Events/p\"",
"oc get -n openshift-sandboxed-containers-operator ds/peerpodconfig-ctrl-caa-daemon",
"oc get runtimeclass",
"NAME HANDLER AGE kata kata 152m kata-remote kata-remote 152m",
"oc get configmap peer-pods-cm -n openshift-sandboxed-containers-operator -o yaml",
"oc get events -n openshift-sandboxed-containers-operator --field-selector involvedObject.name=osc-podvm-image-creation",
"oc logs -n openshift-sandboxed-containers-operator jobs/osc-podvm-image-creation",
"apiVersion: v1 kind: <object> spec: runtimeClassName: kata-remote",
"apiVersion: v1 kind: <object> metadata: annotations: io.katacontainers.config.hypervisor.machine_type: \"t3.medium\" 1",
"apiVersion: v1 kind: <Pod> metadata: annotations: io.katacontainers.config.hypervisor.default_vcpus: <vcpus> io.katacontainers.config.hypervisor.default_memory: <memory>",
"oc apply -f <object.yaml>"
] | https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.8/html/user_guide/deploying-aws |
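To tie together the workload-object guidance in section 3.3.7 above, the following is a minimal sketch of a Deployment that runs as a peer pod with a manually selected instance type. The namespace (sandboxed-apps), Deployment name (hello-peer-pod), and container image are illustrative placeholders rather than values taken from this document, and the machine_type annotation assumes that t3.medium is one of the instance types allowed by the peer-pods-cm config map. The annotation is placed on the pod template so that it propagates to the pods created by the Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-peer-pod                  # illustrative name
  namespace: sandboxed-apps             # dedicated namespace, not the Operator namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-peer-pod
  template:
    metadata:
      labels:
        app: hello-peer-pod
      annotations:
        io.katacontainers.config.hypervisor.machine_type: "t3.medium"   # must be an allowed instance type
    spec:
      runtimeClassName: kata-remote     # run the pod as a peer pod
      containers:
      - name: hello
        image: registry.access.redhat.com/ubi9/ubi-minimal   # example image
        command: ["sleep", "infinity"]

After the Deployment is created, you can confirm the runtime class that the resulting pod is using, for example:

$ oc get pod -n sandboxed-apps -l app=hello-peer-pod -o jsonpath='{.items[0].spec.runtimeClassName}{"\n"}'

If the command prints kata-remote, the workload is scheduled as a peer pod, as described in the Verification step above.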
Chapter 9. SecurityContextConstraints [security.openshift.io/v1] | Chapter 9. SecurityContextConstraints [security.openshift.io/v1] Description SecurityContextConstraints governs the ability to make requests that affect the SecurityContext that will be applied to a container. For historical reasons SCC was exposed under the core Kubernetes API group. That exposure is deprecated and will be removed in a future release - users should instead use the security.openshift.io group to manage SecurityContextConstraints. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required allowHostDirVolumePlugin allowHostIPC allowHostNetwork allowHostPID allowHostPorts allowPrivilegedContainer readOnlyRootFilesystem 9.1. Specification Property Type Description allowHostDirVolumePlugin boolean AllowHostDirVolumePlugin determines if the policy allow containers to use the HostDir volume plugin allowHostIPC boolean AllowHostIPC determines if the policy allows host ipc in the containers. allowHostNetwork boolean AllowHostNetwork determines if the policy allows the use of HostNetwork in the pod spec. allowHostPID boolean AllowHostPID determines if the policy allows host pid in the containers. allowHostPorts boolean AllowHostPorts determines if the policy allows host ports in the containers. allowPrivilegeEscalation `` AllowPrivilegeEscalation determines if a pod can request to allow privilege escalation. If unspecified, defaults to true. allowPrivilegedContainer boolean AllowPrivilegedContainer determines if a container can request to be run as privileged. allowedCapabilities `` AllowedCapabilities is a list of capabilities that can be requested to add to the container. Capabilities in this field maybe added at the pod author's discretion. You must not list a capability in both AllowedCapabilities and RequiredDropCapabilities. To allow all capabilities you may use '*'. allowedFlexVolumes `` AllowedFlexVolumes is a whitelist of allowed Flexvolumes. Empty or nil indicates that all Flexvolumes may be used. This parameter is effective only when the usage of the Flexvolumes is allowed in the "Volumes" field. allowedUnsafeSysctls `` AllowedUnsafeSysctls is a list of explicitly allowed unsafe sysctls, defaults to none. Each entry is either a plain sysctl name or ends in " " in which case it is considered as a prefix of allowed sysctls. Single * means all unsafe sysctls are allowed. Kubelet has to whitelist all allowed unsafe sysctls explicitly to avoid rejection. Examples: e.g. "foo/ " allows "foo/bar", "foo/baz", etc. e.g. "foo.*" allows "foo.bar", "foo.baz", etc. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources defaultAddCapabilities `` DefaultAddCapabilities is the default set of capabilities that will be added to the container unless the pod spec specifically drops the capability. You may not list a capabiility in both DefaultAddCapabilities and RequiredDropCapabilities. defaultAllowPrivilegeEscalation `` DefaultAllowPrivilegeEscalation controls the default setting for whether a process can gain more privileges than its parent process. forbiddenSysctls `` ForbiddenSysctls is a list of explicitly forbidden sysctls, defaults to none. 
Each entry is either a plain sysctl name or ends in " " in which case it is considered as a prefix of forbidden sysctls. Single * means all sysctls are forbidden. Examples: e.g. "foo/ " forbids "foo/bar", "foo/baz", etc. e.g. "foo.*" forbids "foo.bar", "foo.baz", etc. fsGroup `` FSGroup is the strategy that will dictate what fs group is used by the SecurityContext. groups `` The groups that have permission to use this security context constraints kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata priority `` Priority influences the sort order of SCCs when evaluating which SCCs to try first for a given pod request based on access in the Users and Groups fields. The higher the int, the higher priority. An unset value is considered a 0 priority. If scores for multiple SCCs are equal they will be sorted from most restrictive to least restrictive. If both priorities and restrictions are equal the SCCs will be sorted by name. readOnlyRootFilesystem boolean ReadOnlyRootFilesystem when set to true will force containers to run with a read only root file system. If the container specifically requests to run with a non-read only root file system the SCC should deny the pod. If set to false the container may run with a read only root file system if it wishes but it will not be forced to. requiredDropCapabilities `` RequiredDropCapabilities are the capabilities that will be dropped from the container. These are required to be dropped and cannot be added. runAsUser `` RunAsUser is the strategy that will dictate what RunAsUser is used in the SecurityContext. seLinuxContext `` SELinuxContext is the strategy that will dictate what labels will be set in the SecurityContext. seccompProfiles `` SeccompProfiles lists the allowed profiles that may be set for the pod or container's seccomp annotations. An unset (nil) or empty value means that no profiles may be specifid by the pod or container. The wildcard '*' may be used to allow all profiles. When used to generate a value for a pod the first non-wildcard profile will be used as the default. supplementalGroups `` SupplementalGroups is the strategy that will dictate what supplemental groups are used by the SecurityContext. users `` The users who have permissions to use this security context constraints volumes `` Volumes is a white list of allowed volume plugins. FSType corresponds directly with the field names of a VolumeSource (azureFile, configMap, emptyDir). To allow all volumes you may use "*". To allow no volumes, set to ["none"]. 9.2. API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/securitycontextconstraints DELETE : delete collection of SecurityContextConstraints GET : list objects of kind SecurityContextConstraints POST : create SecurityContextConstraints /apis/security.openshift.io/v1/watch/securitycontextconstraints GET : watch individual changes to a list of SecurityContextConstraints. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/security.openshift.io/v1/securitycontextconstraints/{name} DELETE : delete SecurityContextConstraints GET : read the specified SecurityContextConstraints PATCH : partially update the specified SecurityContextConstraints PUT : replace the specified SecurityContextConstraints /apis/security.openshift.io/v1/watch/securitycontextconstraints/{name} GET : watch changes to an object of kind SecurityContextConstraints. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 9.2.1. /apis/security.openshift.io/v1/securitycontextconstraints HTTP method DELETE Description delete collection of SecurityContextConstraints Table 9.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind SecurityContextConstraints Table 9.2. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraintsList schema 401 - Unauthorized Empty HTTP method POST Description create SecurityContextConstraints Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.4. Body parameters Parameter Type Description body SecurityContextConstraints schema Table 9.5. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraints schema 201 - Created SecurityContextConstraints schema 202 - Accepted SecurityContextConstraints schema 401 - Unauthorized Empty 9.2.2. /apis/security.openshift.io/v1/watch/securitycontextconstraints HTTP method GET Description watch individual changes to a list of SecurityContextConstraints. deprecated: use the 'watch' parameter with a list operation instead. Table 9.6. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.3. /apis/security.openshift.io/v1/securitycontextconstraints/{name} Table 9.7. Global path parameters Parameter Type Description name string name of the SecurityContextConstraints HTTP method DELETE Description delete SecurityContextConstraints Table 9.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 9.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified SecurityContextConstraints Table 9.10. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraints schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified SecurityContextConstraints Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.12. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraints schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified SecurityContextConstraints Table 9.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.14. Body parameters Parameter Type Description body SecurityContextConstraints schema Table 9.15. HTTP responses HTTP code Reponse body 200 - OK SecurityContextConstraints schema 201 - Created SecurityContextConstraints schema 401 - Unauthorized Empty 9.2.4. 
/apis/security.openshift.io/v1/watch/securitycontextconstraints/{name} Table 9.16. Global path parameters Parameter Type Description name string name of the SecurityContextConstraints HTTP method GET Description watch changes to an object of kind SecurityContextConstraints. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 9.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_apis/securitycontextconstraints-security-openshift-io-v1 |
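To make the field reference in this chapter easier to follow, the following is an illustrative SecurityContextConstraints manifest, not a canonical object shipped with the product. It sets the required boolean fields listed above plus a few of the strategy fields; the object name and the service account listed under users are made-up examples.

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: example-restricted-scc                   # illustrative name
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegedContainer: false
allowPrivilegeEscalation: false
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:example-ns:example-sa    # illustrative service account
groups: []
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret

Such a manifest is typically created through the POST endpoint described in section 9.2.1, for example with $ oc apply -f example-restricted-scc.yaml, and can then be read back with $ oc get scc example-restricted-scc -o yaml.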
Chapter 2. Planning your OVN deployment | Chapter 2. Planning your OVN deployment Deploy OVN in HA deployments only. We recommend you deploy with distributed virtual routing (DVR) enabled. Note To use OVN, your director deployment must use Generic Network Virtualization Encapsulation (Geneve), and not VXLAN. Geneve allows OVN to identify the network using the 24-bit Virtual Network Identifier (VNI) field and an additional 32-bit Type Length Value (TLV) to specify both the source and destination logical ports. You should account for this larger protocol header when you determine your MTU setting. DVR HA with OVN Deploy OVN with DVR in an HA environment. OVN is supported only in an HA environment. DVR is enabled by default in new ML2/OVN deployments and disabled by default in new ML2/OVS deployments. The neutron-ovn-dvr-ha.yaml environment file configures the required DVR-specific parameters for deployments using OVN in an HA environment. 2.1. The ovn-controller on Compute nodes The ovn-controller service runs on each Compute node and connects to the OVN SB database server to retrieve the logical flows. The ovn-controller translates these logical flows into physical OpenFlow flows and adds the flows to the OVS bridge ( br-int ). To communicate with ovs-vswitchd and install the OpenFlow flows, the ovn-controller connects to the local ovsdb-server (that hosts conf.db ) using the UNIX socket path that was passed when ovn-controller was started (for example unix:/var/run/openvswitch/db.sock ). The ovn-controller service expects certain key-value pairs in the external_ids column of the Open_vSwitch table; puppet-ovn uses puppet-vswitch to populate these fields. Below are the key-value pairs that puppet-vswitch configures in the external_ids column: 2.2. The OVN composable service The director has a composable service for OVN named ovn-dbs with two profiles: the base profile and the pacemaker HA profile. The OVN northbound and southbound databases are hosted by the ovsdb-server service. Similarly, the ovsdb-server process runs alongside ovs-vswitchd to host the OVS database ( conf.db ). Note The schema file for the NB database is located in /usr/share/openvswitch/ovn-nb.ovsschema , and the SB database schema file is in /usr/share/openvswitch/ovn-sb.ovsschema . 2.3. High Availability with pacemaker and DVR In addition to the using the required HA profile, deploy OVN with the DVR to ensure the availability of networking services. With the HA profile enabled, the OVN database servers start on all the Controllers, and pacemaker then selects one controller to serve in the master role. The ovsdb-server service does not currently support active-active mode. It does support HA with the master-slave mode, which is managed by Pacemaker using the resource agent Open Cluster Framework (OCF) script. Having ovsdb-server run in master mode allows write access to the database, while all the other slave ovsdb-server services replicate the database locally from the master , and do not allow write access. The YAML file for this profile is the tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml file. When enabled, the OVN database servers are managed by Pacemaker, and puppet-tripleo creates a pacemaker OCF resource named ovn:ovndb-servers . The OVN database servers are started on each Controller node, and the controller owning the virtual IP address ( OVN_DBS_VIP ) runs the OVN DB servers in master mode. 
The OVN ML2 mechanism driver and ovn-controller then connect to the database servers using the OVN_DBS_VIP value. In the event of a failover, Pacemaker moves the virtual IP address ( OVN_DBS_VIP ) to another controller, and also promotes the OVN database server running on that node to master . 2.4. Layer 3 high availability with OVN OVN supports Layer 3 high availability (L3 HA) without any special configuration. OVN automatically schedules the router port to all available gateway nodes that can act as an L3 gateway on the specified external network. OVN L3 HA uses the gateway_chassis column in the OVN Logical_Router_Port table. Most functionality is managed by OpenFlow rules with bundled active_passive outputs. The ovn-controller handles the Address Resolution Protocol (ARP) responder and router enablement and disablement. Gratuitous ARPs for FIPs and router external addresses are also periodically sent by the ovn-controller . Note L3HA uses OVN to balance the routers back to the original gateway nodes to avoid any nodes becoming a bottleneck. BFD monitoring OVN uses the Bidirectional Forwarding Detection (BFD) protocol to monitor the availability of the gateway nodes. This protocol is encapsulated on top of the Geneve tunnels established from node to node. Each gateway node monitors all the other gateway nodes in a star topology in the deployment. Gateway nodes also monitor the compute nodes to let the gateways enable and disable routing of packets and ARP responses and announcements. Each compute node uses BFD to monitor each gateway node and automatically steers external traffic, such as source and destination Network Address Translation (SNAT and DNAT), through the active gateway node for a given router. Compute nodes do not need to monitor other compute nodes. Note External network failures are not detected as would happen with an ML2-OVS configuration. L3 HA for OVN supports the following failure modes: The gateway node becomes disconnected from the network (tunneling interface). ovs-vswitchd stops ( ovs-switchd is responsible for BFD signaling) ovn-controller stops ( ovn-controller removes itself as a registered node). Note This BFD monitoring mechanism only works for link failures, not for routing failures. | [
"hostname=<HOST NAME> ovn-encap-ip=<IP OF THE NODE> ovn-encap-type=geneve ovn-remote=tcp:OVN_DBS_VIP:6642"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/networking_with_open_virtual_network/planning_your_ovn_deployment |
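The external_ids keys listed for ovn-controller in this chapter (hostname, ovn-encap-ip, ovn-encap-type, and ovn-remote) are normally populated by puppet-vswitch during deployment, so you should not need to set them by hand on a director-deployed node. If you want to inspect or reproduce them for troubleshooting, a sketch using standard ovs-vsctl commands follows; the host name, IP address, and OVN_DBS_VIP value are placeholders.

# Inspect the keys that ovn-controller reads from the local Open_vSwitch table
$ ovs-vsctl get Open_vSwitch . external_ids

# Set the same keys manually (placeholders shown; normally handled by puppet-vswitch)
$ ovs-vsctl set Open_vSwitch . \
    external_ids:hostname="compute-0.localdomain" \
    external_ids:ovn-encap-ip="172.16.0.10" \
    external_ids:ovn-encap-type="geneve" \
    external_ids:ovn-remote="tcp:OVN_DBS_VIP:6642"

Because ovn-controller monitors the local OVS database, it generally picks up changes to these keys; even so, treat the set command as a troubleshooting aid rather than a supported configuration path.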
Chapter 5. Migration | Chapter 5. Migration This chapter provides information on migrating to versions of components included in Red Hat Software Collections 3.4. 5.1. Migrating to MariaDB 10.3 The rh-mariadb103 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mariadb103 Software Collection does not conflict with the mysql or mariadb packages from the core systems. Unless the *-syspaths packages are installed (see below), it is possible to install the rh-mariadb103 Software Collection together with the mysql or mariadb packages. It is also possible to run both versions at the same time, however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Additionally, it is possible to install the rh-mariadb103 Software Collection while the rh-mariadb102 Collection is still installed and even running. The rh-mariadb103 Software Collection includes the rh-mariadb103-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mariadb103*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb103* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mysql80 Software Collections. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . The recommended migration path from MariaDB 5.5 to MariaDB 10.3 is to upgrade to MariaDB 10.0 first, and then upgrade by one version successively. For details, see instructions in earlier Red Hat Software Collections Release Notes: Migrating to MariaDB 10.0 , Migrating to MariaDB 10.1 , and Migrating to MariaDB 10.2 . Note The rh-mariadb103 Software Collection supports neither mounting over NFS nor dynamical registering using the scl register command. 5.1.1. Notable Differences Between the rh-mariadb102 and rh-mariadb103 Software Collections The mariadb-bench subpackage has been removed. The default allowed level of the plug-in maturity has been changed to one level less than the server maturity. As a result, plug-ins with a lower maturity level that were previously working, will no longer load. For more information regarding MariaDB 10.3 , see the upstream documentation about changes and about upgrading . 5.1.2. Upgrading from the rh-mariadb102 to the rh-mariadb103 Software Collection Important Prior to upgrading, back up all your data, including any MariaDB databases. Stop the rh-mariadb102 database server if it is still running. Before stopping the server, set the innodb_fast_shutdown option to 0 , so that InnoDB performs a slow shutdown, including a full purge and insert buffer merge. Read more about this option in the upstream documentation . This operation can take a longer time than in case of a normal shutdown. mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0" Stop the rh-mariadb102 server. systemctl stop rh-mariadb102-mariadb.service Install the rh-mariadb103 Software Collection, including the subpackage providing the mysql_upgrade utility. 
yum install rh-mariadb103-mariadb-server rh-mariadb103-mariadb-server-utils Note that it is possible to install the rh-mariadb103 Software Collection while the rh-mariadb102 Software Collection is still installed because these Collections do not conflict. Inspect configuration of rh-mariadb103 , which is stored in the /etc/opt/rh/rh-mariadb103/my.cnf file and the /etc/opt/rh/rh-mariadb103/my.cnf.d/ directory. Compare it with configuration of rh-mariadb102 stored in /etc/opt/rh/rh-mariadb102/my.cnf and /etc/opt/rh/rh-mariadb102/my.cnf.d/ and adjust it if necessary. All data of the rh-mariadb102 Software Collection is stored in the /var/opt/rh/rh-mariadb102/lib/mysql/ directory unless configured differently. Copy the whole content of this directory to /var/opt/rh/rh-mariadb103/lib/mysql/ . You can move the content but remember to back up your data before you continue to upgrade. Make sure the data are owned by the mysql user and SELinux context is correct. Start the rh-mariadb103 database server. systemctl start rh-mariadb103-mariadb.service Perform the data migration. Note that running the mysql_upgrade command is required due to upstream changes introduced in MDEV-14637 . scl enable rh-mariadb103 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mariadb103 -- mysql_upgrade -p Note that when the rh-mariadb103*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mysql80 Software Collections. 5.2. Migrating to MariaDB 10.2 Red Hat Enterprise Linux 6 contains MySQL 5.1 as the default MySQL implementation. Red Hat Enterprise Linux 7 includes MariaDB 5.5 as the default MySQL implementation. MariaDB is a community-developed drop-in replacement for MySQL . MariaDB 10.1 has been available as a Software Collection since Red Hat Software Collections 2.2; Red Hat Software Collections 3.4 is distributed with MariaDB 10.2 . The rh-mariadb102 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, does not conflict with the mysql or mariadb packages from the core systems. Unless the *-syspaths packages are installed (see below), it is possible to install the rh-mariadb102 Software Collection together with the mysql or mariadb packages. It is also possible to run both versions at the same time, however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Additionally, it is possible to install the rh-mariadb102 Software Collection while the rh-mariadb101 Collection is still installed and even running. The recommended migration path from MariaDB 5.5 to MariaDB 10.3 is to upgrade to MariaDB 10.0 first, and then upgrade by one version successively. For details, see instructions in earlier Red Hat Software Collections Release Notes: Migrating to MariaDB 10.0 and Migrating to MariaDB 10.1 . For more information about MariaDB 10.2 , see the upstream documentation about changes in version 10.2 and about upgrading . Note The rh-mariadb102 Software Collection supports neither mounting over NFS nor dynamical registering using the scl register command. 5.2.1. 
Notable Differences Between the rh-mariadb101 and rh-mariadb102 Software Collections Major changes in MariaDB 10.2 are described in the Red Hat Software Collections 3.0 Release Notes . Since MariaDB 10.2 , behavior of the SQL_MODE variable has been changed; see the upstream documentation for details. Multiple options have changed their default values or have been deprecated or removed. For details, see the Knowledgebase article Migrating from MariaDB 10.1 to the MariaDB 10.2 Software Collection . The rh-mariadb102 Software Collection includes the rh-mariadb102-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mariadb102*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb102* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mysql80 Software Collection. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . 5.2.2. Upgrading from the rh-mariadb101 to the rh-mariadb102 Software Collection Important Prior to upgrading, back up all your data, including any MariaDB databases. Stop the rh-mariadb101 database server if it is still running. Before stopping the server, set the innodb_fast_shutdown option to 0 , so that InnoDB performs a slow shutdown, including a full purge and insert buffer merge. Read more about this option in the upstream documentation . This operation can take a longer time than in case of a normal shutdown. mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0" Stop the rh-mariadb101 server. service rh-mariadb101-mariadb stop Install the rh-mariadb102 Software Collection. yum install rh-mariadb102-mariadb-server Note that it is possible to install the rh-mariadb102 Software Collection while the rh-mariadb101 Software Collection is still installed because these Collections do not conflict. Inspect configuration of rh-mariadb102 , which is stored in the /etc/opt/rh/rh-mariadb102/my.cnf file and the /etc/opt/rh/rh-mariadb102/my.cnf.d/ directory. Compare it with configuration of rh-mariadb101 stored in /etc/opt/rh/rh-mariadb101/my.cnf and /etc/opt/rh/rh-mariadb101/my.cnf.d/ and adjust it if necessary. All data of the rh-mariadb101 Software Collection is stored in the /var/opt/rh/rh-mariadb101/lib/mysql/ directory unless configured differently. Copy the whole content of this directory to /var/opt/rh/rh-mariadb102/lib/mysql/ . You can move the content but remember to back up your data before you continue to upgrade. Make sure the data are owned by the mysql user and SELinux context is correct. Start the rh-mariadb102 database server. service rh-mariadb102-mariadb start Perform the data migration. scl enable rh-mariadb102 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mariadb102 -- mysql_upgrade -p Note that when the rh-mariadb102*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mysql80 Software Collection. 5.3. 
Migrating to MySQL 8.0 The rh-mysql80 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mysql80 Software Collection conflicts neither with the mysql or mariadb packages from the core systems nor with the rh-mysql* or rh-mariadb* Software Collections, unless the *-syspaths packages are installed (see below). It is also possible to run multiple versions at the same time; however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Note that it is possible to upgrade to MySQL 8.0 only from MySQL 5.7 . If you need to upgrade from an earlier version, upgrade to MySQL 5.7 first. For instructions, see Migration to MySQL 5.7 . 5.3.1. Notable Differences Between MySQL 5.7 and MySQL 8.0 Differences Specific to the rh-mysql80 Software Collection The MySQL 8.0 server provided by the rh-mysql80 Software Collection is configured to use mysql_native_password as the default authentication plug-in because client tools and libraries in Red Hat Enterprise Linux 7 are incompatible with the caching_sha2_password method, which is used by default in the upstream MySQL 8.0 version. To change the default authentication plug-in to caching_sha2_password , edit the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-default-authentication-plugin.cnf file as follows: For more information about the caching_sha2_password authentication plug-in, see the upstream documentation . The rh-mysql80 Software Collection includes the rh-mysql80-syspaths package, which installs the rh-mysql80-mysql-config-syspaths , rh-mysql80-mysql-server-syspaths , and rh-mysql80-mysql-syspaths packages. These subpackages provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mysql80*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mysql80* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mariadb103 Software Collections. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . General Changes in MySQL 8.0 Binary logging is enabled by default during the server startup. The log_bin system variable is now set to ON by default even if the --log-bin option has not been specified. To disable binary logging, specify the --skip-log-bin or --disable-log-bin option at startup. For a CREATE FUNCTION statement to be accepted, at least one of the DETERMINISTIC , NO SQL , or READS SQL DATA keywords must be specified explicitly, otherwise an error occurs. Certain features related to account management have been removed. Namely, using the GRANT statement to modify account properties other than privilege assignments, such as authentication, SSL, and resource-limit, is no longer possible. To establish the mentioned properties at account-creation time, use the CREATE USER statement. To modify these properties, use the ALTER USER statement. Certain SSL-related options have been removed on the client-side. Use the --ssl-mode=REQUIRED option instead of --ssl=1 or --enable-ssl . Use the --ssl-mode=DISABLED option instead of --ssl=0 , --skip-ssl , or --disable-ssl . Use the --ssl-mode=VERIFY_IDENTITY option instead of --ssl-verify-server-cert options. Note that these option remains unchanged on the server side. 
The default character set has been changed from latin1 to utf8mb4 . The utf8 character set is currently an alias for utf8mb3 but in the future, it will become a reference to utf8mb4 . To prevent ambiguity, specify utf8mb4 explicitly for character set references instead of utf8 . Setting user variables in statements other than SET has been deprecated. The log_syslog variable, which previously configured error logging to the system logs, has been removed. Certain incompatible changes to spatial data support have been introduced. The deprecated ASC or DESC qualifiers for GROUP BY clauses have been removed. To produce a given sort order, provide an ORDER BY clause. For detailed changes in MySQL 8.0 compared to earlier versions, see the upstream documentation: What Is New in MySQL 8.0 and Changes Affecting Upgrades to MySQL 8.0 . 5.3.2. Upgrading to the rh-mysql80 Software Collection Important Prior to upgrading, back-up all your data, including any MySQL databases. Install the rh-mysql80 Software Collection. yum install rh-mysql80-mysql-server Inspect the configuration of rh-mysql80 , which is stored in the /etc/opt/rh/rh-mysql80/my.cnf file and the /etc/opt/rh/rh-mysql80/my.cnf.d/ directory. Compare it with the configuration of rh-mysql57 stored in /etc/opt/rh/rh-mysql57/my.cnf and /etc/opt/rh/rh-mysql57/my.cnf.d/ and adjust it if necessary. Stop the rh-mysql57 database server, if it is still running. systemctl stop rh-mysql57-mysqld.service All data of the rh-mysql57 Software Collection is stored in the /var/opt/rh/rh-mysql57/lib/mysql/ directory. Copy the whole content of this directory to /var/opt/rh/rh-mysql80/lib/mysql/ . You can also move the content but remember to back up your data before you continue to upgrade. Start the rh-mysql80 database server. systemctl start rh-mysql80-mysqld.service Perform the data migration. scl enable rh-mysql80 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mysql80 -- mysql_upgrade -p Note that when the rh-mysql80*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 and rh-mariadb103 Software Collections. 5.4. Migrating to MongoDB 3.6 Red Hat Software Collections 3.4 is released with MongoDB 3.6 , provided by the rh-mongodb36 Software Collection and available only for Red Hat Enterprise Linux 7. The rh-mongodb36 Software Collection includes the rh-mongodb36-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mongodb36*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mongodb36* packages. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . 5.4.1. Notable Differences Between MongoDB 3.4 and MongoDB 3.6 General Changes The rh-mongodb36 Software Collection introduces the following significant general change: On Non-Uniform Access Memory (NUMA) hardware, it is possible to configure systemd services to be launched using the numactl command; see the upstream recommendation . 
To use MongoDB with the numactl command, you need to install the numactl RPM package and change the /etc/opt/rh/rh-mongodb36/sysconfig/mongod and /etc/opt/rh/rh-mongodb36/sysconfig/mongos configuration files accordingly. Compatibility Changes MongoDB 3.6 includes various minor changes that can affect compatibility with versions of MongoDB : MongoDB binaries now bind to localhost by default, so listening on different IP addresses needs to be explicitly enabled. Note that this is already the default behavior for systemd services distributed with MongoDB Software Collections. The MONGODB-CR authentication mechanism has been deprecated. For databases with users created by MongoDB versions earlier than 3.0, upgrade authentication schema to SCRAM . The HTTP interface and REST API have been removed Arbiters in replica sets have priority 0 Master-slave replication has been deprecated For detailed compatibility changes in MongoDB 3.6 , see the upstream release notes . Backwards Incompatible Features The following MongoDB 3.6 features are backwards incompatible and require the version to be set to 3.6 using the featureCompatibilityVersion command : UUID for collections USDjsonSchema document validation Change streams Chunk aware secondaries View definitions, document validators, and partial index filters that use version 3.6 query features Sessions and retryable writes Users and roles with authenticationRestrictions For details regarding backward incompatible changes in MongoDB 3.6 , see the upstream release notes . 5.4.2. Upgrading from the rh-mongodb34 to the rh-mongodb36 Software Collection Important Before migrating from the rh-mongodb34 to the rh-mongodb36 Software Collection, back up all your data, including any MongoDB databases, which are by default stored in the /var/opt/rh/rh-mongodb34/lib/mongodb/ directory. In addition, see the Compatibility Changes to ensure that your applications and deployments are compatible with MongoDB 3.6 . To upgrade to the rh-mongodb36 Software Collection, perform the following steps. To be able to upgrade, the rh-mongodb34 instance must have featureCompatibilityVersion set to 3.4 . Check featureCompatibilityVersion : ~]USD scl enable rh-mongodb34 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command. Install the MongoDB servers and shells from the rh-mongodb36 Software Collections: ~]# yum install rh-mongodb36 Stop the MongoDB 3.4 server: ~]# systemctl stop rh-mongodb34-mongod.service Copy your data to the new location: ~]# cp -a /var/opt/rh/rh-mongodb34/lib/mongodb/* /var/opt/rh/rh-mongodb36/lib/mongodb/ Configure the rh-mongodb36-mongod daemon in the /etc/opt/rh/rh-mongodb36/mongod.conf file. Start the MongoDB 3.6 server: ~]# systemctl start rh-mongodb36-mongod.service Enable backwards incompatible features: ~]USD scl enable rh-mongodb36 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand( { setFeatureCompatibilityVersion: "3.6" } )' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command. Note After upgrading, it is recommended to run the deployment first without enabling the backwards incompatible features for a burn-in period of time, to minimize the likelihood of a downgrade. For detailed information about upgrading, see the upstream release notes . 
For information about upgrading a Replica Set, see the upstream MongoDB Manual . For information about upgrading a Sharded Cluster, see the upstream MongoDB Manual . 5.5. Migrating to MongoDB 3.4 The rh-mongodb34 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, provides MongoDB 3.4 . 5.5.1. Notable Differences Between MongoDB 3.2 and MongoDB 3.4 General Changes The rh-mongodb34 Software Collection introduces various general changes. Major changes are listed in the Knowledgebase article Migrating from MongoDB 3.2 to MongoDB 3.4 . For detailed changes, see the upstream release notes . In addition, this Software Collection includes the rh-mongodb34-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mongodb34*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mongodb34* packages. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . Compatibility Changes MongoDB 3.4 includes various minor changes that can affect compatibility with versions of MongoDB . For details, see the Knowledgebase article Migrating from MongoDB 3.2 to MongoDB 3.4 and the upstream documentation . Notably, the following MongoDB 3.4 features are backwards incompatible and require that the version is set to 3.4 using the featureCompatibilityVersion command: Support for creating read-only views from existing collections or other views Index version v: 2 , which adds support for collation, decimal data and case-insensitive indexes Support for the decimal128 format with the new decimal data type For details regarding backward incompatible changes in MongoDB 3.4 , see the upstream release notes . 5.5.2. Upgrading from the rh-mongodb32 to the rh-mongodb34 Software Collection Note that once you have upgraded to MongoDB 3.4 and started using new features, cannot downgrade to version 3.2.7 or earlier. You can only downgrade to version 3.2.8 or later. Important Before migrating from the rh-mongodb32 to the rh-mongodb34 Software Collection, back up all your data, including any MongoDB databases, which are by default stored in the /var/opt/rh/rh-mongodb32/lib/mongodb/ directory. In addition, see the compatibility changes to ensure that your applications and deployments are compatible with MongoDB 3.4 . To upgrade to the rh-mongodb34 Software Collection, perform the following steps. Install the MongoDB servers and shells from the rh-mongodb34 Software Collections: ~]# yum install rh-mongodb34 Stop the MongoDB 3.2 server: ~]# systemctl stop rh-mongodb32-mongod.service Use the service rh-mongodb32-mongodb stop command on a Red Hat Enterprise Linux 6 system. Copy your data to the new location: ~]# cp -a /var/opt/rh/rh-mongodb32/lib/mongodb/* /var/opt/rh/rh-mongodb34/lib/mongodb/ Configure the rh-mongodb34-mongod daemon in the /etc/opt/rh/rh-mongodb34/mongod.conf file. Start the MongoDB 3.4 server: ~]# systemctl start rh-mongodb34-mongod.service On Red Hat Enterprise Linux 6, use the service rh-mongodb34-mongodb start command instead. Enable backwards-incompatible features: ~]USD scl enable rh-mongodb34 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )' If the mongod server is configured with enabled access control, add the --username and --password options to mongo command. 
Note that it is recommended to run the deployment after the upgrade without enabling these features first. For detailed information about upgrading, see the upstream release notes . For information about upgrading a Replica Set, see the upstream MongoDB Manual . For information about upgrading a Sharded Cluster, see the upstream MongoDB Manual . 5.6. Migrating to PostgreSQL 12 Red Hat Software Collections 3.4 is distributed with PostgreSQL 12 , available only for Red Hat Enterprise Linux 7. The rh-postgresql12 Software Collection can be safely installed on the same machine in parallel with the base Red Hat Enterprise Linux system version of PostgreSQL or any PostgreSQL Software Collection. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust SELinux policy. See Section 5.7, "Migrating to PostgreSQL 9.6" for instructions on how to migrate to an earlier version or when using Red Hat Enterprise Linux 6. The rh-postgresql12 Software Collection includes the rh-postgresql12-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other files. After installing the rh-postgresql12*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgresql12* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . Important Before migrating to PostgreSQL 12 , see the upstream compatibility notes for PostgreSQL 11 and PostgreSQL 12 . If you are upgrading the PostgreSQL database in a container, see the container-specific instructions . The following table provides an overview of the different paths in the Red Hat Enterprise Linux 7 system version of PostgreSQL provided by the postgresql package, and in the rh-postgresql10 and rh-postgresql12 Software Collections. Table 5.1.
Diferences in the PostgreSQL paths Content postgresql rh-postgresql10 rh-postgresql12 Executables /usr/bin/ /opt/rh/rh-postgresql10/root/usr/bin/ /opt/rh/rh-postgresql12/root/usr/bin/ Libraries /usr/lib64/ /opt/rh/rh-postgresql10/root/usr/lib64/ /opt/rh/rh-postgresql12/root/usr/lib64/ Documentation /usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql/html/ PDF documentation /usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql-docs/ Contrib documentation /usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql-contrib/ Source not installed not installed not installed Data /var/lib/pgsql/data/ /var/opt/rh/rh-postgresql10/lib/pgsql/data/ /var/opt/rh/rh-postgresql12/lib/pgsql/data/ Backup area /var/lib/pgsql/backups/ /var/opt/rh/rh-postgresql10/lib/pgsql/backups/ /var/opt/rh/rh-postgresql12/lib/pgsql/backups/ Templates /usr/share/pgsql/ /opt/rh/rh-postgresql10/root/usr/share/pgsql/ /opt/rh/rh-postgresql12/root/usr/share/pgsql/ Procedural Languages /usr/lib64/pgsql/ /opt/rh/rh-postgresql10/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql12/root/usr/lib64/pgsql/ Development Headers /usr/include/pgsql/ /opt/rh/rh-postgresql10/root/usr/include/pgsql/ /opt/rh/rh-postgresql12/root/usr/include/pgsql/ Other shared data /usr/share/pgsql/ /opt/rh/rh-postgresql10/root/usr/share/pgsql/ /opt/rh/rh-postgresql12/root/usr/share/pgsql/ Regression tests /usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql10/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql12/root/usr/lib64/pgsql/test/regress/ (in the -test package) 5.6.1. Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 12 Software Collection Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql12 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 12, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.1. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop postgresql.service To verify that the server is not running, type: systemctl status postgresql.service Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql12/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql12/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 12 , this directory should not be present in your system. 
If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql12/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql12 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql12/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrade from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql12-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : systemctl start rh-postgresql12-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql12 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 12 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 12 server, type as root : chkconfig rh-postgresql12-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.2. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start postgresql.service Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : systemctl stop postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql12 -- postgresql-setup initdb Start the new server as root : systemctl start rh-postgresql12-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql12 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 12 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 12 server, type as root : chkconfig rh-postgresql12-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.6.2. Migrating from the PostgreSQL 10 Software Collection to the PostgreSQL 12 Software Collection To migrate your data from the rh-postgresql10 Software Collection to the rh-postgresql12 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. 
Important Before migrating your data from PostgreSQL 10 to PostgreSQL 12 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql10/lib/pgsql/data/ directory. Procedure 5.3. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop rh-postgresql10-postgresql.service To verify that the server is not running, type: systemctl status rh-postgresql10-postgresql.service Verify that the old directory /var/opt/rh/rh-postgresql10/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql10/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql12/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql12/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 12 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql12/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql12 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql10-postgresql Alternatively, you can use the /opt/rh/rh-postgresql12/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql10-postgresql command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql12-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : systemctl start rh-postgresql12-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql12 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 12 server to start automatically at boot time. To disable the old PostgreSQL 10 server, type the following command as root : chkconfig rh-postgresql10-postgreqsql off To enable the PostgreSQL 12 server, type as root : chkconfig rh-postgresql12-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.4. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start rh-postgresql10-postgresql.service Dump all data in the PostgreSQL database into a script file. 
As root , type: su - postgres -c 'scl enable rh-postgresql10 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : systemctl stop rh-postgresql10-postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql12 -- postgresql-setup initdb Start the new server as root : systemctl start rh-postgresql12-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql12 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 12 server to start automatically at boot time. To disable the old PostgreSQL 10 server, type the following command as root : chkconfig rh-postgresql10-postgresql off To enable the PostgreSQL 12 server, type as root : chkconfig rh-postgresql12-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.7. Migrating to PostgreSQL 9.6 PostgreSQL 9.6 is available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 and it can be safely installed on the same machine in parallel with PostgreSQL 8.4 from Red Hat Enterprise Linux 6, PostgreSQL 9.2 from Red Hat Enterprise Linux 7, or any version of PostgreSQL released in versions of Red Hat Software Collections. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust SELinux policy. Important In case of upgrading the PostgreSQL database in a container, see the container-specific instructions . Note that it is currently impossible to upgrade PostgreSQL from 9.5 to 9.6 in a container in an OpenShift environment that is configured with Gluster file volumes. 5.7.1. Notable Differences Between PostgreSQL 9.5 and PostgreSQL 9.6 The most notable changes between PostgreSQL 9.5 and PostgreSQL 9.6 are described in the upstream release notes . The rh-postgresql96 Software Collection includes the rh-postgresql96-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-postgreqsl96*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgreqsl96* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . The following table provides an overview of different paths in a Red Hat Enterprise Linux system version of PostgreSQL ( postgresql ) and in the postgresql92 , rh-postgresql95 , and rh-postgresql96 Software Collections. Note that the paths of PostgreSQL 8.4 distributed with Red Hat Enterprise Linux 6 and the system version of PostgreSQL 9.2 shipped with Red Hat Enterprise Linux 7 are the same; the paths for the rh-postgresql94 Software Collection are analogous to rh-postgresql95 . Table 5.2. 
Diferences in the PostgreSQL paths Content postgresql postgresql92 rh-postgresql95 rh-postgresql96 Executables /usr/bin/ /opt/rh/postgresql92/root/usr/bin/ /opt/rh/rh-postgresql95/root/usr/bin/ /opt/rh/rh-postgresql96/root/usr/bin/ Libraries /usr/lib64/ /opt/rh/postgresql92/root/usr/lib64/ /opt/rh/rh-postgresql95/root/usr/lib64/ /opt/rh/rh-postgresql96/root/usr/lib64/ Documentation /usr/share/doc/postgresql/html/ /opt/rh/postgresql92/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql/html/ PDF documentation /usr/share/doc/postgresql-docs/ /opt/rh/postgresql92/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-docs/ Contrib documentation /usr/share/doc/postgresql-contrib/ /opt/rh/postgresql92/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-contrib/ Source not installed not installed not installed not installed Data /var/lib/pgsql/data/ /opt/rh/postgresql92/root/var/lib/pgsql/data/ /var/opt/rh/rh-postgresql95/lib/pgsql/data/ /var/opt/rh/rh-postgresql96/lib/pgsql/data/ Backup area /var/lib/pgsql/backups/ /opt/rh/postgresql92/root/var/lib/pgsql/backups/ /var/opt/rh/rh-postgresql95/lib/pgsql/backups/ /var/opt/rh/rh-postgresql96/lib/pgsql/backups/ Templates /usr/share/pgsql/ /opt/rh/postgresql92/root/usr/share/pgsql/ /opt/rh/rh-postgresql95/root/usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ Procedural Languages /usr/lib64/pgsql/ /opt/rh/postgresql92/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql95/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/ Development Headers /usr/include/pgsql/ /opt/rh/postgresql92/root/usr/include/pgsql/ /opt/rh/rh-postgresql95/root/usr/include/pgsql/ /opt/rh/rh-postgresql96/root/usr/include/pgsql/ Other shared data /usr/share/pgsql/ /opt/rh/postgresql92/root/usr/share/pgsql/ /opt/rh/rh-postgresql95/root/usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ Regression tests /usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/postgresql92/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql95/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/test/regress/ (in the -test package) For changes between PostgreSQL 8.4 and PostgreSQL 9.2 , refer to the Red Hat Software Collections 1.2 Release Notes . Notable changes between PostgreSQL 9.2 and PostgreSQL 9.4 are described in Red Hat Software Collections 2.0 Release Notes . For differences between PostgreSQL 9.4 and PostgreSQL 9.5 , refer to Red Hat Software Collections 2.2 Release Notes . 5.7.2. Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 9.6 Software Collection Red Hat Enterprise Linux 6 includes PostgreSQL 8.4 , Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql96 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. 
The following procedures are applicable for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 system versions of PostgreSQL . Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 9.6, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.5. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : service postgresql stop To verify that the server is not running, type: service postgresql status Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 9.6 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql96/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql96 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql96/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrade from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql96-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : service rh-postgresql96-postgresql start It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql96 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.6. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : service postgresql start Dump all data in the PostgreSQL database into a script file. 
As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : service postgresql stop Initialize the data directory for the new server as root : scl enable rh-postgresql96-postgresql -- postgresql-setup --initdb Start the new server as root : service rh-postgresql96-postgresql start Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql96 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.7.3. Migrating from the PostgreSQL 9.5 Software Collection to the PostgreSQL 9.6 Software Collection To migrate your data from the rh-postgresql95 Software Collection to the rh-postgresql96 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from PostgreSQL 9.5 to PostgreSQL 9.6 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql95/lib/pgsql/data/ directory. Procedure 5.7. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : service rh-postgresql95-postgresql stop To verify that the server is not running, type: service rh-postgresql95-postgresql status Verify that the old directory /var/opt/rh/rh-postgresql95/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql95/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 9.6 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql96/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql96 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql95-postgresql Alternatively, you can use the /opt/rh/rh-postgresql96/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql95-postgresql command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql96-postgresql.log log file to find out if any problems occurred during the upgrade. 
Start the new server as root : service rh-postgresql96-postgresql start It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql96 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old PostgreSQL 9.5 server, type the following command as root : chkconfig rh-postgresql95-postgreqsql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.8. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : service rh-postgresql95-postgresql start Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'scl enable rh-postgresql95 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : service rh-postgresql95-postgresql stop Initialize the data directory for the new server as root : scl enable rh-postgresql96-postgresql -- postgresql-setup --initdb Start the new server as root : service rh-postgresql96-postgresql start Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql96 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old PostgreSQL 9.5 server, type the following command as root : chkconfig rh-postgresql95-postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. If you need to migrate from the postgresql92 Software Collection, refer to Red Hat Software Collections 2.0 Release Notes ; the procedure is the same, you just need to adjust the version of the new Collection. The same applies to migration from the rh-postgresql94 Software Collection, which is described in Red Hat Software Collections 2.2 Release Notes . 5.8. Migrating to nginx 1.16 The root directory for the rh-nginx116 Software Collection is located in /opt/rh/rh-nginx116/root/ . The error log is stored in /var/opt/rh/rh-nginx116/log/nginx by default. Configuration files are stored in the /etc/opt/rh/rh-nginx116/nginx/ directory. Configuration files in nginx 1.16 have the same syntax and largely the same format as nginx Software Collections. Configuration files (with a .conf extension) in the /etc/opt/rh/rh-nginx116/nginx/default.d/ directory are included in the default server block configuration for port 80 . Important Before upgrading from nginx 1.14 to nginx 1.16 , back up all your data, including web pages located in the /opt/rh/nginx114/root/ tree and configuration files located in the /etc/opt/rh/nginx114/nginx/ tree. 
If you have made any specific changes, such as changing configuration files or setting up web applications, in the /opt/rh/nginx114/root/ tree, replicate those changes in the new /opt/rh/rh-nginx116/root/ and /etc/opt/rh/rh-nginx116/nginx/ directories, too. You can use this procedure to upgrade directly from nginx 1.8 , nginx 1.10 , nginx 1.12 , or nginx 1.14 to nginx 1.16 . Use the appropriate paths in this case. For the official nginx documentation, refer to http://nginx.org/en/docs/ . 5.9. Migrating to Redis 5 Redis 3.2 , provided by the rh-redis32 Software Collection, is mostly a strict subset of Redis 4.0 , which is mostly a strict subset of Redis 5.0 . Therefore, no major issues should occur when upgrading from version 3.2 to version 5.0. To upgrade a Redis Cluster to version 5.0, a mass restart of all the instances is needed. Compatibility Notes The format of RDB files has been changed. Redis 5 is able to read formats of all the earlier versions, but earlier versions are incapable of reading the Redis 5 format. Since version 4.0, the Redis Cluster bus protocol is no longer compatible with Redis 3.2 . For minor non-backward compatible changes, see the upstream release notes for version 4.0 and version 5.0 . | [
"[mysqld] default_authentication_plugin=caching_sha2_password"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.4_release_notes/chap-Migration |
Chapter 11. Installing a cluster into a shared VPC on GCP using Deployment Manager templates | Chapter 11. Installing a cluster into a shared VPC on GCP using Deployment Manager templates In OpenShift Container Platform version 4.16, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP) that uses infrastructure that you provide. In this context, a cluster installed into a shared VPC is a cluster that is configured to use a VPC from a project different from where the cluster is being deployed. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IPs from that network. For more information about shared VPC, see Shared VPC overview in the GCP documentation. The steps for performing a user-provided infrastructure installation into a shared VPC are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 11.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . Note Be sure to also review this site list if you are configuring a proxy. 11.2. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 11.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 
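As a quick pre-flight check, you can confirm outbound access from the host that runs the installation program before you begin. The following loop is a minimal sketch, not part of the official procedure; the hosts listed are examples drawn from the services named above, and a restricted environment may require additional endpoints.
# Check outbound HTTPS access to example endpoints used during installation
for host in quay.io console.redhat.com; do
  if curl --silent --show-error --fail --output /dev/null "https://${host}"; then
    echo "reachable: ${host}"
  else
    echo "unreachable: ${host}"
  fi
done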
Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 11.4. Configuring the GCP project that hosts your cluster Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 11.4.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 11.4.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 11.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 11.2. Optional API services API service Console service name Cloud Deployment Manager V2 API deploymentmanager.googleapis.com Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 11.4.3. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 11.3. 
GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Networking Global 11 1 Forwarding rules Compute Global 2 0 Health checks Compute Global 2 0 Images Compute Global 1 0 Networks Networking Global 1 0 Routers Networking Global 1 0 Routes Networking Global 2 0 Subnetworks Compute Global 2 0 Target pools Networking Global 2 0 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 11.4.4. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 11.4.4.1. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions. 
If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin Role Administrator Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using the Cloud Credential Operator in passthrough mode Compute Load Balancer Admin Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The following roles are applied to the service accounts that the control plane and compute machines use: Table 11.4. GCP service account roles Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin roles/artifactregistry.reader 11.4.5. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: africa-south1 (Johannesburg, South Africa) asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-central2 (Dammam, Saudi Arabia, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 11.4.6. Installing and configuring CLI tools for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP. Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions. Procedure Install the following binaries in USDPATH : gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation. 11.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 
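With gcloud installed and authenticated as described above, it can also be useful to confirm which machine types are available in your target region before you provision these machines. The commands below are a sketch only; the key file path and the us-central1 region are placeholder assumptions, not values required by the procedure.
# Authenticate gcloud with the service account key created earlier (path is an example)
gcloud auth activate-service-account --key-file=/path/to/service-account-key.json
# List machine types in an example region to help plan control plane and compute sizing
gcloud compute machine-types list \
  --filter="zone ~ ^us-central1-" \
  --format="table(name,zone,guestCpus,memoryMb)"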
This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 11.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 11.5. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 11.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 11.6. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 11.5.3. 
Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 11.1. Machine series A2 A3 C2 C2D C3 C3D E2 M1 N1 N2 N2D Tau T2D 11.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . 11.6. Configuring the GCP project that hosts your shared VPC network If you use a shared Virtual Private Cloud (VPC) to host your OpenShift Container Platform cluster in Google Cloud Platform (GCP), you must configure the project that hosts it. Note If you already have a project that hosts the shared VPC network, review this section to ensure that the project meets all of the requirements to install an OpenShift Container Platform cluster. Procedure Create a project to host the shared VPC for your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Create a service account in the project that hosts your shared VPC. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. The service account for the project that hosts the shared VPC network requires the following roles: Compute Network User Compute Security Admin Deployment Manager Editor DNS Administrator Security Admin Network Management Admin 11.6.1. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the project that hosts the shared VPC that you install the cluster into. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses.
For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 11.6.2. Creating a VPC in GCP You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Export the following variables required by the resource definition: Export the control plane CIDR: USD export MASTER_SUBNET_CIDR='10.0.0.0/17' Export the compute CIDR: USD export WORKER_SUBNET_CIDR='10.0.128.0/17' Export the region to deploy the VPC network and cluster to: USD export REGION='<region>' Export the variable for the ID of the project that hosts the shared VPC: USD export HOST_PROJECT=<host_project> Export the variable for the email of the service account that belongs to host project: USD export HOST_PROJECT_ACCOUNT=<host_service_account_email> Create a 01_vpc.yaml resource definition file: USD cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: '<prefix>' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF 1 infra_id is the prefix of the network name. 2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 . 4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 1 1 For <vpc_deployment_name> , specify the name of the VPC to deploy. Export the VPC variable that other components require: Export the name of the host project network: USD export HOST_PROJECT_NETWORK=<vpc_network> Export the name of the host project control plane subnet: USD export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet> Export the name of the host project compute subnet: USD export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet> Set up the shared VPC. See Setting up Shared VPC in the GCP documentation. 11.6.2.1. Deployment Manager template for the VPC You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 11.2. 
01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources} 11.7. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 11.7.1. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . 
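If you do not already have an SSH public key to reference in the sshKey field while customizing install-config.yaml, you can generate one as in the sketch below. The key path and the empty passphrase are assumptions for illustration; for production clusters, use a key that your ssh-agent process manages.
# Generate an ed25519 key pair for cluster node access (file name is an example)
ssh-keygen -t ed25519 -N '' -f ~/.ssh/openshift_gcp_ed25519
# Print the public key so that it can be pasted into the sshKey field of install-config.yaml
cat ~/.ssh/openshift_gcp_ed25519.pub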
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for GCP 11.7.2. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 11.7.3. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 11.7.4. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 5 - control-plane-tag1 - control-plane-tag2 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 8 - compute-tag1 - compute-tag2 replicas: 0 metadata: name: test-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: gcp: defaultMachinePlatform: tags: 10 - global-tag1 - global-tag2 projectID: openshift-production 11 region: us-central1 12 pullSecret: '{"auths": ...}' fips: false 13 sshKey: ssh-ed25519 AAAA... 14 publish: Internal 15 1 Specify the public DNS on the host project. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 8 10 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter applies to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 11 Specify the main project where the VM instances reside. 12 Specify the region that your VPC network is in. 13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 14 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 15 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . To use a shared VPC in a cluster that uses infrastructure that you provision, you must set publish to Internal . The installation program will no longer be able to access the public DNS zone for the base domain in the host project. 11.7.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 11.7.6. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. 
Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Remove the privateZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone status: {} 1 Remove this section completely. Configure the cloud provider for your VPC. Open the <installation_directory>/manifests/cloud-provider-config.yaml file. Add the network-project-id parameter and set its value to the ID of the project that hosts the shared VPC network. Add the network-name parameter and set its value to the name of the shared VPC network that hosts the OpenShift Container Platform cluster. Replace the value of the subnetwork-name parameter with the value of the shared VPC subnet that hosts your compute machines. The contents of the <installation_directory>/manifests/cloud-provider-config.yaml resemble the following example: config: |+ [global] project-id = example-project regional = true multizone = true node-tags = opensh-ptzzx-master node-tags = opensh-ptzzx-worker node-instance-prefix = opensh-ptzzx external-instance-groups-prefix = opensh-ptzzx network-project-id = example-shared-vpc network-name = example-network subnetwork-name = example-worker-subnet If you deploy a cluster that is not on a private network, open the <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml file and replace the value of the scope parameter with External . The contents of the file resemble the following example: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External type: LoadBalancerService status: availableReplicas: 0 domain: '' selector: '' To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 11.8. Exporting common variables 11.8.1. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). 
The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it. Prerequisites You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 11.8.2. Exporting common variables for Deployment Manager templates You must export a common set of variables that are used with the provided Deployment Manager templates used to assist in completing a user-provided infrastructure install on Google Cloud Platform (GCP). Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Procedure Export the following common variables to be used by the provided Deployment Manager templates: USD export BASE_DOMAIN='<base_domain>' 1 USD export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' 2 USD export NETWORK_CIDR='10.0.0.0/16' USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 3 USD export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` USD export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` USD export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` 1 2 Supply the values for the host project. 3 For <installation_directory> , specify the path to the directory that you stored the installation files in. 11.9. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 11.9.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 11.9.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 11.7. 
Ports used for all-machine to all-machine communications

Protocol  Port           Description
ICMP      N/A            Network reachability tests
TCP       1936           Metrics
          9000 - 9999    Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099.
          10250 - 10259  The default ports that Kubernetes reserves
UDP       4789           VXLAN
          6081           Geneve
          9000 - 9999    Host level services, including the node exporter on ports 9100 - 9101.
          500            IPsec IKE packets
          4500           IPsec NAT-T packets
          123            Network Time Protocol (NTP) on UDP port 123. If an external NTP time server is configured, you must open UDP port 123.
TCP/UDP   30000 - 32767  Kubernetes node port
ESP       N/A            IPsec Encapsulating Security Payload (ESP)

Table 11.8. Ports used for all-machine to control plane communications

Protocol  Port           Description
TCP       6443           Kubernetes API

Table 11.9. Ports used for control plane machine to control plane machine communications

Protocol  Port           Description
TCP       2379 - 2380    etcd server and peer ports

11.10. Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires. 
Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{HOST_PROJECT_NETWORK} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_CONTROL_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 11.10.1. Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 11.3. 02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' 
+ context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 11.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 11.4. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 11.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. 
Create a 02_dns.yaml resource definition file: USD cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} For an external cluster, also add the external DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} 11.11.1. Deployment Manager template for the private DNS You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster: Example 11.5. 02_dns.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources} 11.12. Creating firewall rules in GCP You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. 
Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the security groups that your cluster requires. Create a 03_firewall.yaml resource definition file: USD cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF 1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to USD{NETWORK_CIDR} . 2 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 cluster_network is the selfLink URL to the cluster network. 4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 11.12.1. Deployment Manager template for firewall rules You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster: Example 11.6. 03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 
'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources} 11.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: USD cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: USD export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." 
--format json | jq -r '.[0].email'`) Assign the permissions that the installation program requires to the service accounts for the subnets that host the control plane and compute subnets: Grant the networkViewer role of the project that hosts your shared VPC to the master service account: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} projects add-iam-policy-binding USD{HOST_PROJECT} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkViewer" Grant the networkUser role to the master service account for the control plane subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} Grant the networkUser role to the worker service account for the control plane subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} Grant the networkUser role to the master service account for the compute subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} Grant the networkUser role to the worker service account for the compute subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually: USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: USD gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT} 11.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 11.7. 
03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 11.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . Create the Google storage bucket: USD gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: USD gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: USD export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: USD gcloud compute images create "USD{INFRA_ID}-rhcos-image" \ --source-uri="USD{IMAGE_SOURCE}" 11.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Ensure you installed pyOpenSSL. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: USD export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: USD gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition USD gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/ Create a signed URL for the bootstrap instance to use to access the Ignition config. 
Export the URL from the output as a variable: USD export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print USD5}'` Create a 04_bootstrap.yaml resource definition file: USD cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml Add the bootstrap instance to the internal load balancer instance group: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: USD gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} 11.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 11.8. 04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 11.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , Creating IAM roles in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. Export the following variable required by the resource definition: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign` Create a 05_control_plane.yaml resource definition file: USD cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. 
Run the following commands to add the control plane machines to the appropriate instance groups: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2 For an external cluster, you must also run the following commands to add the control plane machines to the target pools: USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_0}" --instances=USD{INFRA_ID}-master-0 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_1}" --instances=USD{INFRA_ID}-master-1 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_2}" --instances=USD{INFRA_ID}-master-2 11.16.1. Deployment Manager template for control plane machines You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 11.9. 05_control_plane.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 
'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources} 11.17. Wait for bootstrap completion and remove bootstrap resources in GCP After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} USD gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign USD gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition USD gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap 11.18. Creating additional worker machines in GCP You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file. Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires. Export the variables that the resource definition uses. 
Export the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_COMPUTE_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`) Export the email address for your service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the location of the compute machine Ignition config file: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign` Create a 06_worker.yaml resource definition file: USD cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF 1 name is the name of the worker machine, for example worker-0 . 2 9 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 10 zone is the zone to deploy the worker machine into, for example us-central1-a . 4 11 compute_subnet is the selfLink URL to the compute subnet. 5 12 image is the selfLink URL to the RHCOS image. 6 13 machine_type is the machine type of the instance, for example n1-standard-4 . 7 14 service_account_email is the email address for the worker service account that you created. 8 15 ignition is the contents of the worker.ign file. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 11.18.1. Deployment Manager template for worker machines You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 11.10. 
06_worker.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources} 11.19. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 11.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. 
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You installed the oc CLI. Ensure the bootstrap process completed successfully. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin 11.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: $ oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.
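As an illustration only, and not part of the official procedure, the following is a minimal sketch of such an approval loop. It assumes oc is already authenticated with cluster-admin privileges; the 60-second interval and the system:node requester filter are arbitrary choices, and a production method should also verify the identity of each node as described above.
  #!/bin/bash
  # Hypothetical CSR approval loop (illustration only).
  # Lists pending CSRs together with their requesting user, keeps only those
  # requested by a system:node:* user (the pattern kubelet serving certificate
  # requests follow), and approves them.
  while true; do
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}' \
      | awk '$2 ~ /^system:node:/ {print $1}' \
      | xargs --no-run-if-empty oc adm certificate approve
    sleep 60
  done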
To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: $ oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 11.22. Adding the ingress DNS records DNS zone configuration is removed when creating Kubernetes manifests and generating Ignition configs. You must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites Ensure you defined the variables in the Exporting common variables section. Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs. Ensure the bootstrap process completed successfully. Procedure Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field: $ oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 Add the A record to your zones: To use A records: Export the variable for the router IP address: $ export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'` Add the A record to the private zones: $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi $ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} $ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}.
--ttl 300 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} $ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} For an external cluster, also add the A record to the public zones: $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi $ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} $ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} $ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes: $ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com 11.23. Adding ingress firewall rules The cluster requires several firewall rules. If you do not use a shared VPC, these rules are created by the Ingress Controller via the GCP cloud provider. When you use a shared VPC, you can either create cluster-wide firewall rules for all services now or create each rule based on events, when the cluster requests access. By creating each rule when the cluster requests access, you know exactly which firewall rules are required. By creating cluster-wide firewall rules, you can apply the same rule set across multiple clusters. If you choose to create each rule based on events, you must create firewall rules after you provision the cluster and during the life of the cluster when the console notifies you that rules are missing. Events that are similar to the following event are displayed, and you must add the firewall rules that are required: $ oc get events -n openshift-ingress --field-selector="reason=LoadBalancerManualChange" Example output Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description "{\"kubernetes.io/service-name\":\"openshift-ingress/router-default\", \"kubernetes.io/service-ip\":\"35.237.236.234\"}\" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project` If you encounter issues when creating these rule-based events, you can configure the cluster-wide firewall rules while your cluster is running. 11.23.1. Creating cluster-wide firewall rules for a shared VPC in GCP You can create cluster-wide firewall rules to allow the access that the OpenShift Container Platform cluster requires. Warning If you do not choose to create firewall rules based on cluster events, you must create cluster-wide firewall rules. Prerequisites You exported the variables that the Deployment Manager templates require to deploy your cluster. You created the networking and load balancing components in GCP that your cluster requires.
Procedure Add a single firewall rule to allow the Google Cloud Engine health checks to access all of the services. This rule enables the ingress load balancers to determine the health status of their instances. $ gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network="${CLUSTER_NETWORK}" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress-hc --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} Add a single firewall rule to allow access to all cluster services: For an external cluster: $ gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="${CLUSTER_NETWORK}" --source-ranges="0.0.0.0/0" --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} For a private cluster: $ gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="${CLUSTER_NETWORK}" --source-ranges=${NETWORK_CIDR} --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} Because this rule only allows traffic on TCP ports 80 and 443 , ensure that you add all the ports that your services use. 11.24. Completing a GCP installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Ensure the bootstrap process completed successfully. Procedure Complete the cluster installation: $ ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Observe the running state of your cluster.
Run the following command to view the current cluster version and status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): USD oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m Run the following command to view your cluster pods: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m ... openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m When the current cluster version is AVAILABLE , the installation is complete. 11.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 11.26. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"export MASTER_SUBNET_CIDR='10.0.0.0/17'",
"export WORKER_SUBNET_CIDR='10.0.128.0/17'",
"export REGION='<region>'",
"export HOST_PROJECT=<host_project>",
"export HOST_PROJECT_ACCOUNT=<host_service_account_email>",
"cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: '<prefix>' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 1",
"export HOST_PROJECT_NETWORK=<vpc_network>",
"export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet>",
"export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet>",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}",
"mkdir <installation_directory>",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 5 - control-plane-tag1 - control-plane-tag2 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 8 - compute-tag1 - compute-tag2 replicas: 0 metadata: name: test-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: gcp: defaultMachinePlatform: tags: 10 - global-tag1 - global-tag2 projectID: openshift-production 11 region: us-central1 12 pullSecret: '{\"auths\": ...}' fips: false 13 sshKey: ssh-ed25519 AAAA... 14 publish: Internal 15",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone status: {}",
"config: |+ [global] project-id = example-project regional = true multizone = true node-tags = opensh-ptzzx-master node-tags = opensh-ptzzx-worker node-instance-prefix = opensh-ptzzx external-instance-groups-prefix = opensh-ptzzx network-project-id = example-shared-vpc network-name = example-network subnetwork-name = example-worker-subnet",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External type: LoadBalancerService status: availableReplicas: 0 domain: '' selector: ''",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"export BASE_DOMAIN='<base_domain>' 1 export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' 2 export NETWORK_CIDR='10.0.0.0/16' export KUBECONFIG=<installation_directory>/auth/kubeconfig 3 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json`",
"export CLUSTER_NETWORK=(`gcloud compute networks describe USD{HOST_PROJECT_NETWORK} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)",
"export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_CONTROL_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)",
"export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)",
"export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)",
"export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)",
"cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml",
"export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)",
"export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}",
"def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}",
"cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}",
"cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}",
"cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml",
"export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} projects add-iam-policy-binding USD{HOST_PROJECT} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkViewer\"",
"gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}",
"gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}",
"gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}",
"gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}",
"gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"",
"gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}",
"gsutil mb gs://<bucket_name>",
"gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>",
"export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz",
"gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"",
"export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)",
"gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition",
"gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/",
"export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep \"^gs:\" | awk '{print USD5}'`",
"cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap",
"gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign`",
"cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign",
"gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition",
"gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap",
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_COMPUTE_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign`",
"cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98",
"export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com",
"oc get events -n openshift-ingress --field-selector=\"reason=LoadBalancerManualChange\"",
"Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description \"{\\\"kubernetes.io/service-name\\\":\\\"openshift-ingress/router-default\\\", \\\"kubernetes.io/service-ip\\\":\\\"35.237.236.234\\\"}\\\" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`",
"gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress-hc --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}",
"gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges=\"0.0.0.0/0\" --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}",
"gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges=USD{NETWORK_CIDR} --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete",
"oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_gcp/installing-gcp-user-infra-vpc |
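The following sketch illustrates the optional step, referenced in the worker-creation procedure above, of launching additional compute instances by appending another resource of type 06_worker.py to 06_worker.yaml. It is not taken from the source document: the worker-2 name, the reuse of ZONE_2, and the omission of callout numbers are assumptions for illustration only.
  # Hypothetical: append a third worker entry to the existing 06_worker.yaml
  # (variables expand at append time, matching how the file was first created).
  cat <<EOF >>06_worker.yaml
  - name: 'worker-2'
    type: 06_worker.py
    properties:
      infra_id: '${INFRA_ID}'
      zone: '${ZONE_2}'
      compute_subnet: '${COMPUTE_SUBNET}'
      image: '${CLUSTER_IMAGE}'
      machine_type: 'n1-standard-4'
      root_volume_size: '128'
      service_account_email: '${WORKER_SERVICE_ACCOUNT}'
      ignition: '${WORKER_IGNITION}'
  EOF
The deployment is then created with the same gcloud deployment-manager command shown in the procedure.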
function::sock_state_str2num | function::sock_state_str2num Name function::sock_state_str2num - Given a socket state string, return the corresponding state number Synopsis Arguments state The state name | [
"sock_state_str2num:long(state:string)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sock-state-str2num |
1.2.2. Red Hat Enterprise Virtualization | 1.2.2. Red Hat Enterprise Virtualization The Red Hat Enterprise Virtualization platform is a richly featured virtualization management solution providing fully integrated management across virtual machines. It is based on the leading open source virtualization platform and provides superior technical capabilities. The platform offers scalability in the management of large numbers of virtual machines. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-introducing_v2v-about_this_guide_rhev |
Part I. Vulnerability reporting with Clair on Red Hat Quay overview | Part I. Vulnerability reporting with Clair on Red Hat Quay overview The content in this guide explains the key purposes and concepts of Clair on Red Hat Quay. It also contains information about Clair releases and the location of official Clair containers. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/vulnerability_reporting_with_clair_on_red_hat_quay/vulnerability-reporting-clair-quay-overview |
Chapter 4. KVM Live Migration | Chapter 4. KVM Live Migration This chapter covers migrating guest virtual machines from one host physical machine to another. In both instances, the host physical machines are running the KVM hypervisor. Migration describes the process of moving a guest virtual machine from one host physical machine to another. This is possible because guest virtual machines are running in a virtualized environment instead of directly on the hardware. Migration is useful for: Load balancing - guest virtual machines can be moved to host physical machines with lower usage when their host physical machine becomes overloaded, or another host physical machine is under-utilized. Hardware independence - when we need to upgrade, add, or remove hardware devices on the host physical machine, we can safely relocate guest virtual machines to other host physical machines. This means that guest virtual machines do not experience any downtime for hardware improvements. Energy saving - guest virtual machines can be redistributed to other host physical machines and can thus be powered off to save energy and cut costs in low usage periods. Geographic migration - guest virtual machines can be moved to another location for lower latency or in serious circumstances. Migration works by sending the state of the guest virtual machine's memory and any virtualized devices to a destination host physical machine. It is recommended to use shared, networked storage to store the guest virtual machine's images to be migrated. It is also recommended to use libvirt-managed storage pools for shared storage when migrating virtual machines. Migrations can be performed live or offline. In a live migration, the guest virtual machine continues to run on the source host physical machine while its memory pages are transferred, in order, to the destination host physical machine. During migration, KVM monitors the source for any changes in pages it has already transferred, and begins to transfer these changes when all of the initial pages have been transferred. KVM also estimates transfer speed during migration, so when the remaining amount of data to transfer will take a certain configurable period of time (10 milliseconds by default), KVM suspends the original guest virtual machine, transfers the remaining data, and resumes the same guest virtual machine on the destination host physical machine. A migration that is not performed live suspends the guest virtual machine, then moves an image of the guest virtual machine's memory to the destination host physical machine. The guest virtual machine is then resumed on the destination host physical machine and the memory the guest virtual machine used on the source host physical machine is freed. The time it takes to complete such a migration depends on network bandwidth and latency. If the network is experiencing heavy use or low bandwidth, the migration will take much longer. If the original guest virtual machine modifies pages faster than KVM can transfer them to the destination host physical machine, offline migration must be used, as live migration would never complete. An illustrative virsh invocation for a live migration is sketched after this chapter's command listing. 4.1.
Live Migration Requirements Migrating guest virtual machines requires the following: Migration requirements A guest virtual machine installed on shared storage using one of the following protocols: Fibre Channel-based LUNs iSCSI FCoE NFS GFS2 SCSI RDMA protocols (SCSI RCP): the block export protocol used in Infiniband and 10GbE iWARP adapters The migration platforms and versions should be checked against table Table 4.1, "Live Migration Compatibility" . It should also be noted that Red Hat Enterprise Linux 6 supports live migration of guest virtual machines using raw and qcow2 images on shared storage. Both systems must have the appropriate TCP/IP ports open. In cases where a firewall is used, refer to the Red Hat Enterprise Linux Virtualization Security Guide which can be found at https://access.redhat.com/site/documentation/ for detailed port information. A separate system exporting the shared storage medium. Storage should not reside on either of the two host physical machines being used for migration. Shared storage must mount at the same location on source and destination systems. The mounted directory names must be identical. Although it is possible to keep the images using different paths, it is not recommended. Note that, if you are intending to use virt-manager to perform the migration, the path names must be identical. If however you intend to use virsh to perform the migration, different network configurations and mount directories can be used with the help of --xml option or pre-hooks when doing migrations. Even without shared storage, migration can still succeed with the option --copy-storage-all (deprecated). For more information on prehooks , refer to libvirt.org , and for more information on the XML option, refer to Chapter 20, Manipulating the Domain XML . When migration is attempted on an existing guest virtual machine in a public bridge+tap network, the source and destination host physical machines must be located in the same network. Otherwise, the guest virtual machine network will not operate after migration. In Red Hat Enterprise Linux 5 and 6, the default cache mode of KVM guest virtual machines is set to none , which prevents inconsistent disk states. Setting the cache option to none (using virsh attach-disk cache none , for example), causes all of the guest virtual machine's files to be opened using the O_DIRECT flag (when calling the open syscall), thus bypassing the host physical machine's cache, and only providing caching on the guest virtual machine. Setting the cache mode to none prevents any potential inconsistency problems, and when used makes it possible to live-migrate virtual machines. For information on setting cache to none , refer to Section 13.3, "Adding Storage Devices to Guests" . Make sure that the libvirtd service is enabled ( # chkconfig libvirtd on ) and running ( # service libvirtd start ). It is also important to note that the ability to migrate effectively is dependent on the parameter settings in the /etc/libvirt/libvirtd.conf configuration file. Procedure 4.1. Configuring libvirtd.conf Opening the libvirtd.conf requires running the command as root: Change the parameters as needed and save the file. Restart the libvirtd service: | [
"vim /etc/libvirt/libvirtd.conf",
"service libvirtd restart"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-virtualization_administration_guide-kvm_live_migration |
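Procedure 4.1 above leaves the actual libvirtd.conf changes to the administrator. As an illustration only, the entries most commonly adjusted when enabling migration over plain TCP are sketched below; the parameter names are standard libvirtd.conf settings, but the values shown are assumptions and must be adapted to your network layout and security policy.
# /etc/libvirt/libvirtd.conf -- illustrative migration-related entries (values are examples, not recommendations)
listen_tls = 0           # disable the TLS transport if you are not deploying certificates
listen_tcp = 1           # accept plain TCP connections for migration traffic
tcp_port = "16509"       # default libvirt TCP port; open it in the firewall on both hosts
listen_addr = "0.0.0.0"  # address libvirtd listens on
auth_tcp = "sasl"        # require SASL authentication instead of unauthenticated TCP
On Red Hat Enterprise Linux 6, libvirtd only honors the listen_* directives when the daemon is started with the --listen option, for example by setting LIBVIRTD_ARGS="--listen" in /etc/sysconfig/libvirtd before restarting the service as shown in the procedure.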
Release notes for the Red Hat build of Cryostat 2.3 | Release notes for the Red Hat build of Cryostat 2.3 Red Hat build of Cryostat 2 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.3/index |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_automation_mesh_guide_for_vm-based_installations/providing-feedback |
Chapter 4. Installing a cluster on IBM Cloud VPC with customizations | Chapter 4. Installing a cluster on IBM Cloud VPC with customizations In OpenShift Container Platform version 4.13, you can install a customized cluster on infrastructure that the installation program provisions on IBM Cloud VPC. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud VPC . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. 
If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 4.5. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 4.6. 
Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Cloud. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select ibmcloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 4.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 4.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 4.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. 
The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 4.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 4.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. 
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 4.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 4.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. 
alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 4.6.1.4. Additional IBM Cloud VPC configuration parameters Additional IBM Cloud VPC configuration parameters are described in the following table: Table 4.4. Additional IBM Cloud VPC parameters Parameter Description Values platform.ibmcloud.resourceGroupName The name of an existing resource group. By default, an installer-provisioned VPC and cluster resources are placed in this resource group. When not specified, the installation program creates the resource group for the cluster. If you are deploying the cluster into an existing VPC, the installer-provisioned cluster resources are placed in this resource group. When not specified, the installation program creates the resource group for the cluster. The VPC resources that you have provisioned must exist in a resource group that you specify using the networkResourceGroupName parameter. In either case, this resource group must only be used for a single cluster installation, as the cluster components assume ownership of all of the resources in the resource group. [ 1 ] String, for example existing_resource_group . platform.ibmcloud.networkResourceGroupName The name of an existing resource group. This resource contains the existing VPC and subnets to which the cluster will be deployed. This parameter is required when deploying the cluster to a VPC that you have provisioned. String, for example existing_network_resource_group . platform.ibmcloud.dedicatedHosts.profile The new dedicated host to create. If you specify a value for platform.ibmcloud.dedicatedHosts.name , this parameter is not required. Valid IBM Cloud VPC dedicated host profile, such as cx2-host-152x304 . [ 2 ] platform.ibmcloud.dedicatedHosts.name An existing dedicated host. If you specify a value for platform.ibmcloud.dedicatedHosts.profile , this parameter is not required. String, for example my-dedicated-host-name . platform.ibmcloud.type The instance type for all IBM Cloud VPC machines. Valid IBM Cloud VPC instance type, such as bx2-8x32 . [ 2 ] platform.ibmcloud.vpcName The name of the existing VPC that you want to deploy your cluster to. String. platform.ibmcloud.controlPlaneSubnets The name(s) of the existing subnet(s) in your VPC that you want to deploy your control plane machines to. Specify a subnet for each availability zone. String array platform.ibmcloud.computeSubnets The name(s) of the existing subnet(s) in your VPC that you want to deploy your compute machines to. Specify a subnet for each availability zone. Subnet IDs are not supported. String array Whether you define an existing resource group, or if the installer creates one, determines how the resource group is treated when the cluster is uninstalled. If you define a resource group, the installer removes all of the installer-provisioned resources, but leaves the resource group alone; if a resource group is created as part of the installation, the installer removes all of the installer-provisioned resources and the resource group. To determine which profile best meets your needs, see Instance Profiles in the IBM documentation. 4.6.2. 
Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.5. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 4.6.3. Sample customized install-config.yaml file for IBM Cloud VPC You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 10 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 11 fips: false 12 sshKey: ssh-ed25519 AAAA... 13 1 8 10 11 Required. The installation program prompts you for this value. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 Enables or disables FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. 
For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 13 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 4.6.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. 
Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud VPC resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, obtain the OpenShift Container Platform release image that your openshift-install binary is built to use: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --cloud=<provider_name> \ 1 --to=<path_to_credential_requests_directory> 2 1 The name of the provider. For example: ibmcloud or powervs . 2 The directory where the credential requests will be stored. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. 
Example credrequests directory contents for OpenShift Container Platform 4.13 on IBM Cloud VPC 0000_26_cloud-controller-manager-operator_15_credentialsrequest-ibm.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request-ibmcos.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-storage-operator_03_credentials_request_ibm.yaml 5 1 The Cloud Controller Manager Operator CR is required. 2 The Machine API Operator CR is required. 3 The Image Registry Operator CR is required. 4 The Ingress Operator CR is required. 5 The Storage Operator CR is an optional component and might be disabled in your cluster. Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ 1 --name <cluster_name> \ 2 --output-dir <installation_directory> \ --resource-group-name <resource_group_name> 3 1 The directory where the credential requests are stored. 2 The name of the OpenShift Container Platform cluster. 3 Optional: The name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 4.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. 
Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 4.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 4.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 4.12. steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IC_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 10 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 11 fips: false 12 sshKey: ssh-ed25519 AAAA... 13",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=<provider_name> \\ 1 --to=<path_to_credential_requests_directory> 2",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"0000_26_cloud-controller-manager-operator_15_credentialsrequest-ibm.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request-ibmcos.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-storage-operator_03_credentials_request_ibm.yaml 5",
"ccoctl ibmcloud create-service-id --credentials-requests-dir <path_to_credential_requests_directory> \\ 1 --name <cluster_name> \\ 2 --output-dir <installation_directory> --resource-group-name <resource_group_name> 3",
"grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_cloud_vpc/installing-ibm-cloud-customizations |
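Taken end to end, the commands in this chapter form a single repeatable flow. The sketch below condenses that flow into one shell session; every step is taken from the sections above, while the installation directory, credential requests directory, cluster name, and API key are placeholders that you substitute for your own environment.
# Condensed sketch of the customized IBM Cloud VPC installation flow (placeholder values throughout)
export IC_API_KEY=<api_key>
./openshift-install create install-config --dir <installation_directory>
# edit install-config.yaml: set credentialsMode: Manual and any platform customizations
./openshift-install create manifests --dir <installation_directory>
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
oc adm release extract --from=$RELEASE_IMAGE --credentials-requests \
  --cloud=ibmcloud --to=<path_to_credential_requests_directory>
ccoctl ibmcloud create-service-id \
  --credentials-requests-dir <path_to_credential_requests_directory> \
  --name <cluster_name> --output-dir <installation_directory>
./openshift-install create cluster --dir <installation_directory> --log-level=info
export KUBECONFIG=<installation_directory>/auth/kubeconfig
oc whoami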
Chapter 5. Using the Red Hat Satellite API | Chapter 5. Using the Red Hat Satellite API This chapter provides a range of examples of how to use the Red Hat Satellite API to perform different tasks. You can use the API on Satellite Server via HTTPS on port 443, or on Capsule Server via HTTPS on port 8443. You can address these different port requirements within the script itself. For example, in Ruby, you can specify the Satellite and Capsule URLs as follows: For the host that is subscribed to Satellite Server or Capsule Server, you can determine the correct port required to access the API from the /etc/rhsm/rhsm.conf file, in the port entry of the [server] section. You can use these values to fully automate your scripts, removing any need to verify which ports to use. This chapter uses curl for sending API requests. For more information, see Section 4.1, "API Requests with curl" . Examples in this chapter use the Python json.tool module to format the output. 5.1. Working with Hosts Note The example requests below use python3 to format the respone from the Satellite Server. On RHEL 7 and some older systems, you must use python instead of python3 . Listing Hosts This example returns a list of Satellite hosts. Example request: Example response: Requesting Information for a Host This request returns information for the host satellite.example.com . Example request: Example response: Listing Host Facts This request returns all facts for the host satellite.example.com . Example request: Example response: Searching for Hosts with Matching Patterns This query returns all hosts that match the pattern "example". Example request: Example response: Searching for Hosts in an Environment This query returns all hosts in the production environment. Example request: Example response: Searching for Hosts with a Specific Fact Value This query returns all hosts with a model name RHEV Hypervisor . Example request: Example response: Deleting a Host This request deletes a host with a name host1.example.com . Example request: Downloading a Full Boot Disk Image This request downloads a full boot disk image for a host by its ID. Example request: 5.2. Working with Life Cycle Environments Satellite divides application life cycles into life cycle environments, which represent each stage of the application life cycle. Life cycle environments are linked to from an environment path. To create linked life cycle environments with the API, use the prior_id parameter. You can find the built-in API reference for life cycle environments at https:// satellite.example.com /apidoc/v2/lifecycle_environments.html . The API routes include /katello/api/environments and /katello/api/organizations/:organization_id/environments . Note The example requests below use python3 to format the respone from the Satellite Server. On RHEL 7 and some older systems, you must use python instead of python3 . Listing Life Cycle Environments Use this API call to list all the current life cycle environments on your Satellite for the default organization with ID 1 . Example request: Example response: Creating Linked Life Cycle Environments Use this example to create a path of life cycle environments. This procedure uses the default Library environment with ID 1 as the starting point for creating life cycle environments. Choose an existing life cycle environment that you want to use as a starting point. 
List the environment using its ID, in this case, the environment with ID 1 : Example request: Example response: Create a JSON file, for example, life-cycle.json , with the following content: Create a life cycle environment using the prior option set to 1 . Example request: Example response: In the command output, you can see the ID for this life cycle environment is 2 , and the life cycle environment prior to this one is 1 . Use the life cycle environment with ID 2 to create a successor to this environment. Edit the previously created life-cycle.json file, updating the label , name , and prior values. Create a life cycle environment, using the prior option set to 2 . Example request: Example response: In the command output, you can see the ID for this life cycle environment is 3 , and the life cycle environment prior to this one is 2 . Updating a Life Cycle Environment You can update a life cycle environment using a PUT command. This example request updates a description of the life cycle environment with ID 3 . Example request: Example response: Deleting a Life Cycle Environment You can delete a life cycle environment provided it has no successor. Therefore, delete them in reverse order using a command in the following format: Example request: 5.3. Uploading Content to the Satellite Server This section outlines how to use the Satellite 6 API to upload and import large files to your Satellite Server. This process involves four steps: Create an upload request. Upload the content. Import the content. Delete the upload request. The maximum file size that you can upload is 2MB. For information about uploading larger content, see Uploading Content Larger than 2 MB . Procedure Assign the package name to the variable name : Example request: Assign the checksum of the file to the variable checksum : Example request: Assign the file size to the variable size : Example request: The following command creates a new upload request and returns the upload ID of the request using size and checksum . Example request: where 76, in this case, is an example Repository ID. Example request: Assign the upload ID to the variable upload_id : Example request: Assign the path of the package you want to upload to the variable path : Upload your content. Ensure you use the correct MIME type when you upload data. The API uses the application/json MIME type for the majority of requests to Satellite 6. Combine the upload_id, MIME type, and other parameters to upload content. Example request: After you have uploaded the content to the Satellite Server, you need to import it into the appropriate repository. Until you complete this step, the Satellite Server does not detect the new content. Example request: After you have successfully uploaded and imported your content, you can delete the upload request. This frees any temporary disk space that data is using during the upload. Example request: Uploading Content Larger than 2 MB The following example demonstrates how to split a large file into chunks, create an upload request, upload the individual files, import them to Satellite, and then delete the upload request. Note that this example uses sample content, host names, user names, repository ID, and file names. Assign the package name to the variable name : Assign the checksum of the file to the variable checksum : Assign the file size to the variable size : The following command creates a new upload request and returns the upload ID of the request using size and checksum . 
Example request: where 76, in this case, is an example Repository ID. Example output Assign the upload ID to the variable upload_id : Split the file into 2 MB chunks: Example output Assign the prefix of the split files to the variable path. Upload the file chunks. The offset starts at 0 for the first chunk and increases by 2000000 for each file. Note the use of the offset parameter and how it relates to the file size. Note also that the indexes are used after the path variable, for example, ${path}0, ${path}1. Example requests: Import the complete upload to the repository: Delete the upload request: Uploading Duplicate Content Note that if you try to upload duplicate content using: Example request: The call will return a content unit ID instead of an upload ID, similar to this: You can copy this output and call import uploads directly to add the content to a repository: Example request: Note that the call changes from using upload_id to using content_unit_id . 5.4. Applying Errata to a Host or Host Collection You can use the API to apply errata to a host, host group, or host collection. The following is the basic syntax of a PUT request: You can browse the built-in API doc to find a URL to use for applying Errata. You can use the Satellite web UI to help discover the format for the search query. Navigate to Hosts > Host Collections and select a host collection. Go to Collection Actions > Errata Installation and notice the search query box contents. For example, for a Host Collection called my-collection , the search box contains host_collection="my-collection" . Applying Errata to a Host This example uses the API URL for bulk actions /katello/api/hosts/bulk/install_content to show the format required for a simple search. Example request: Applying Errata to a Host Collection In this example, notice the level of escaping required to pass the search string host_collection="my-collection" as seen in the Satellite web UI. Example request: 5.5. Using Extended Searches You can find search parameters that you can use to build your search queries in the web UI. For more information, see Building Search Queries in Administering Red Hat Satellite . For example, to search for hosts, complete the following steps: In the Satellite web UI, navigate to Hosts > All Hosts and click the Search field to display a list of search parameters. Locate the search parameters that you want to use. For this example, locate os_title and model . Combine the search parameters in your API query as follows: Example request: Note The example request uses python3 to format the response from the Satellite Server. On RHEL 7 and some older systems, you must use python instead of python3 . Example response: 5.6. Using Searches with Pagination Control You can use the per_page and page pagination parameters to limit the search results that an API search query returns. The per_page parameter specifies the number of results per page and the page parameter specifies which page, as calculated by the per_page parameter, to return. If you do not specify any pagination parameters, the default number of items to return is 1000; however, the per_page value defaults to 20 when you specify the page parameter. Listing Content Views This example returns a list of Content Views in pages. The list contains 10 Content Views per page and returns the third page. Example request: Listing Activation Keys This example returns a list of activation keys for an organization with ID 1 in pages.
The list contains 30 keys per page and returns the second page. Example request: Returning Multiple Pages You can use a for loop structure to get multiple pages of results. This example returns pages 1 to 3 of Content Views with 5 results per page: 5.7. Overriding Smart Class Parameters You can search for Smart Parameters using the API and supply a value to override a Smart Parameter in a Class. You can find the full list of attributes that you can modify in the built-in API reference at https:// satellite.example.com /apidoc/v2/smart_class_parameters/update.html . Find the ID of the Smart Class parameter you want to change: List all Smart Class Parameters. Example request: If you know the Puppet class ID, for example 5, you can restrict the scope: Example request: Both calls accept a search parameter. You can view the full list of searchable fields in the Satellite web UI. Navigate to Configure > Smart variables and click in the search query box to reveal the list of fields. Two particularly useful search parameters are puppetclass_name and key , which you can use to search for a specific parameter. For example, use the --data option to pass URL-encoded data. Example request: Satellite supports standard scoped-search syntax. When you find the ID of the parameter, list the full details including current override values. Example request: Enable overriding of parameter values. Example request: Note that you cannot create or delete the parameters manually. You can only modify their attributes. Satellite creates and deletes parameters only upon class import from a proxy. Add custom override matchers. Example request: For more information about override values, see https:// satellite.example.com /apidoc/v2/override_values.html . You can delete override values. Example request: 5.8. Modifying a Smart Class Parameter Using an External File Using external files simplifies working with JSON data. Using an editor with syntax highlighting can help you avoid and locate mistakes. Note The example requests below use python3 to format the response from the Satellite Server. On RHEL 7 and some older systems, you must use python instead of python3 . Modifying a Smart Class Parameter Using an External File This example uses a MOTD Puppet manifest. Search for the Puppet Class by name, motd in this case. Example request: Examine the following output. Each Smart Class Parameter has an ID that is global for the same Satellite instance. The content parameter of the motd class has id=3 in this Satellite Server. Do not confuse this with the Puppet Class ID that displays before the Puppet Class name. Example response: Use the parameter ID 3 to get the information specific to the motd parameter and redirect the output to a file, for example, output_file.json . Example request: Copy the file created in the previous step to a new file for editing, for example, changed_file.json : Modify the required values in the file. In this example, change the content parameter of the motd module, which requires changing the override option from false to true : After editing the file, verify that it looks as follows and then save the changes: Apply the changes to Satellite Server: 5.9. Deleting OpenSCAP reports In Satellite Server, you can delete one or more OpenSCAP reports. However, when you delete reports, you must delete one page at a time. If you want to delete all OpenSCAP reports, use the bash script that follows. Note The example request and the example script below use python3 to format the response from the Satellite Server.
On RHEL 7 and some older systems, you must use python instead of python3 . Deleting an OpenSCAP Report To delete an OpenSCAP report, complete the following steps: List all OpenSCAP reports. Note the IDs of the reports that you want to delete. Example request: Example response: Using an ID from the previous step, delete the OpenSCAP report. Repeat for each ID that you want to delete. Example request: Example response: Example BASH Script to Delete All OpenSCAP Reports Use the following bash script to delete all the OpenSCAP reports: #!/bin/bash #this script removes all the arf reports from the satellite server #settings USER=username PASS=password URI=https://satellite.example.com #check amount of reports while [ $(curl --insecure --user $USER:$PASS $URI/api/v2/compliance/arf_reports/ | python3 -m json.tool | grep \"total\": | cut --fields=2 --delimiter":" | cut --fields=1 --delimiter"," | sed "s/ //g") -gt 0 ]; do #fetch reports for i in $(curl --insecure --user $USER:$PASS $URI/api/v2/compliance/arf_reports/ | python3 -m json.tool | grep \"id\": | cut --fields=2 --delimiter":" | cut --fields=1 --delimiter"," | sed "s/ //g") #delete reports do curl --insecure --user $USER:$PASS --header "Content-Type: application/json" --request DELETE $URI/api/v2/compliance/arf_reports/$i done done | [
"url = 'https:// satellite.example.com /api/v2/' capsule_url = 'https:// capsule.example.com :8443/api/v2/' katello_url = 'https:// satellite.example.com /katello/api/v2/'",
"curl -request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts | python3 -m json.tool",
"{ \"total\" => 2, \"subtotal\" => 2, \"page\" => 1, \"per_page\" => 1000, \"search\" => nil, \"sort\" => { \"by\" => nil, \"order\" => nil }, \"results\" => [ }",
"curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts/ satellite.example.com | python3 -m json.tool",
"{ \"all_puppetclasses\": [], \"architecture_id\": 1, \"architecture_name\": \"x86_64\", \"build\": false, \"capabilities\": [ \"build\" ], \"certname\": \" satellite.example.com \", \"comment\": null, \"compute_profile_id\": null, }",
"curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts/ satellite.example.com /facts | python3 -m json.tool",
"{ \"results\": { \" satellite.example.com \": { \"augeasversion\": \"1.0.0\", \"bios_release_date\": \"01/01/2007\", \"bios_version\": \"0.5.1\", \"blockdevice_sr0_size\": \"1073741312\", \"facterversion\": \"1.7.6\", }",
"curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts?search=example | python3 -m json.tool",
"{ \"results\": [ { \"name\": \" satellite.example.com \", } ], \"search\": \"example\", }",
"curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts?search=environment=production | python3 -m json.tool",
"{ \"results\": [ { \"environment_name\": \"production\", \"name\": \" satellite.example.com \", } ], \"search\": \"environment=production\", }",
"curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts?search=model=\\\"RHEV+Hypervisor\\\" | python3 -m json.tool",
"{ \"results\": [ { \"model_id\": 1, \"model_name\": \"RHEV Hypervisor\", \"name\": \" satellite.example.com \", } ], \"search\": \"model=\\\"RHEV Hypervisor\\\"\", }",
"curl --request DELETE --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts/ host1.example.com | python3 -m json.tool",
"curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/bootdisk/hosts/ host_ID ?full=true --output image .iso",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request GET --user sat_username:sat_password --insecure https:// satellite.example.com /katello/api/organizations/1/environments | python3 -m json.tool`",
"output omitted \"description\": null, \"id\": 1, \"label\": \"Library\", \"library\": true, \"name\": \"Library\", \"organization\": { \"id\": 1, \"label\": \"Default_Organization\", \"name\": \"Default Organization\" }, \"permissions\": { \"destroy_lifecycle_environments\": false, \"edit_lifecycle_environments\": true, \"promote_or_remove_content_views_to_environments\": true, \"view_lifecycle_environments\": true }, \"prior\": null, \"successor\": null, output truncated",
"curl --request GET --user sat_username:sat_password --insecure https:// satellite.example.com /katello/api/environments/1 | python3 -m json.tool",
"output omitted \"id\": 1, \"label\": \"Library\", output omitted \"prior\": null, \"successor\": null, output truncated",
"{\"organization_id\":1,\"label\":\"api-dev\",\"name\":\"API Development\",\"prior\":1}",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request POST --user sat_username:sat_password --insecure --data @life-cycle.json https:// satellite.example.com /katello/api/environments | python3 -m json.tool",
"output omitted \"description\": null, \"id\": 2, \"label\": \"api-dev\", \"library\": false, \"name\": \"API Development\", \"organization\": { \"id\": 1, \"label\": \"Default_Organization\", \"name\": \"Default Organization\" }, \"permissions\": { \"destroy_lifecycle_environments\": true, \"edit_lifecycle_environments\": true, \"promote_or_remove_content_views_to_environments\": true, \"view_lifecycle_environments\": true }, \"prior\": { \"id\": 1, \"name\": \"Library\" }, output truncated",
"{\"organization_id\":1,\"label\":\"api-qa\",\"name\":\"API QA\",\"prior\":2}",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request POST --user sat_username:sat_password --insecure --data @life-cycle.json https:// satellite.example.com /katello/api/environments | python3 -m json.tool",
"output omitted \"description\": null, \"id\": 3, \"label\": \"api-qa\", \"library\": false, \"name\": \"API QA\", \"organization\": { \"id\": 1, \"label\": \"Default_Organization\", \"name\": \"Default Organization\" }, \"permissions\": { \"destroy_lifecycle_environments\": true, \"edit_lifecycle_environments\": true, \"promote_or_remove_content_views_to_environments\": true, \"view_lifecycle_environments\": true }, \"prior\": { \"id\": 2, \"name\": \"API Development\" }, \"successor\": null, output truncated",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request POST --user sat_username:sat_password --insecure --data '{\"description\":\"Quality Acceptance Testing\"}' https:// satellite.example.com /katello/api/environments/3 | python3 -m json.tool",
"output omitted \"description\": \"Quality Acceptance Testing\", \"id\": 3, \"label\": \"api-qa\", \"library\": false, \"name\": \"API QA\", \"organization\": { \"id\": 1, \"label\": \"Default_Organization\", \"name\": \"Default Organization\" }, \"permissions\": { \"destroy_lifecycle_environments\": true, \"edit_lifecycle_environments\": true, \"promote_or_remove_content_views_to_environments\": true, \"view_lifecycle_environments\": true }, \"prior\": { \"id\": 2, \"name\": \"API Development\" }, output truncated",
"curl --request DELETE --user sat_username:sat_password --insecure https:// satellite.example.com /katello/api/environments/ :id",
"export name=jq-1.6-2.el7.x86_64.rpm",
"export checksum=USD(sha256sum USDname|cut -c 1-65)",
"export size=USD(du -bs USDname|cut -f 1)",
"curl -H 'Content-Type: application/json' -X POST -k -u sat_username:sat_password -d \"{\\\"size\\\": \\\"USDsize\\\", \\\"checksum\\\":\\\"USDchecksum\\\"}\" https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads",
"{\"upload_id\":\"37eb5900-597e-4ac3-9bc5-2250c302fdc4\"}",
"export upload_id=37eb5900-597e-4ac3-9bc5-2250c302fdc4",
"export path=/root/jq/jq-1.6-2.el7.x86_64.rpm",
"curl -u sat_username:sat_password -H Accept:application/json -H Content-Type:multipart/form-data -X PUT --data-urlencode size=USDsize --data-urlencode offset=0 --data-urlencode content@USD{path} https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id",
"curl -H \"Content-Type:application/json\" -X PUT -u sat_username:sat_password -k -d \"{\\\"uploads\\\":[{\\\"id\\\": \\\"USDupload_id\\\", \\\"name\\\": \\\"USDname\\\", \\\"checksum\\\": \\\"USDchecksum\\\" }]}\" https://USD(hostname -f)/katello/api/v2/repositories/76/import_uploads",
"curl -H 'Content-Type: application/json' -X DELETE -k -u sat_username:sat_password -d \"{}\" https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id",
"export name=bpftool-3.10.0-1160.2.1.el7.centos.plus.x86_64.rpm",
"export checksum=USD(sha256sum USDname|cut -c 1-65)",
"export size=USD(du -bs USDname|cut -f 1)",
"curl -H 'Content-Type: application/json' -X POST -k -u sat_username:sat_password -d \"{\\\"size\\\": \\\"USDsize\\\", \\\"checksum\\\":\\\"USDchecksum\\\"}\" https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads",
"{\"upload_id\":\"37eb5900-597e-4ac3-9bc5-2250c302fdc4\"}",
"export upload_id=37eb5900-597e-4ac3-9bc5-2250c302fdc4",
"split --bytes 2MB --numeric-suffixes --suffix-length=1 bpftool-3.10.0-1160.2.1.el7.centos.plus.x86_64.rpm bpftool",
"ls bpftool[0-9] -l -rw-r--r--. 1 root root 2000000 Mar 31 14:15 bpftool0 -rw-r--r--. 1 root root 2000000 Mar 31 14:15 bpftool1 -rw-r--r--. 1 root root 2000000 Mar 31 14:15 bpftool2 -rw-r--r--. 1 root root 2000000 Mar 31 14:15 bpftool3 -rw-r--r--. 1 root root 868648 Mar 31 14:15 bpftool4",
"export path=/root/tmp/bpftool",
"curl -u sat_username:sat_password -H Accept:application/json -H Content-Type:multipart/form-data -X PUT --data-urlencode size=USDsize --data-urlencode offset=0 --data-urlencode content@USD{path}0 https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id curl -u sat_username:sat_password -H Accept:application/json -H Content-Type:multipart/form-data -X PUT --data-urlencode size=USDsize --data-urlencode offset=2000000 --data-urlencode content@USD{path}1 https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id curl -u sat_username:sat_password -H Accept:application/json -H Content-Type:multipart/form-data -X PUT --data-urlencode size=USDsize --data-urlencode offset=4000000 --data-urlencode content@USD{path}2 https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id USDcurl -u sat_username:sat_password -H Accept:application/json -H Content-Type:multipart/form-data -X PUT --data-urlencode size=USDsize --data-urlencode offset=6000000 --data-urlencode content@USD{path}3 https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id curl -u sat_username:sat_password -H Accept:application/json -H Content-Type:multipart/form-data -X PUT --data-urlencode size=USDsize --data-urlencode offset=8000000 --data-urlencode content@USD{path}4 https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id",
"curl -H \"Content-Type:application/json\" -X PUT -u sat_username:sat_password -k -d \"{\\\"uploads\\\":[{\\\"id\\\": \\\"USDupload_id\\\", \\\"name\\\": \\\"USDname\\\", \\\"checksum\\\": \\\"USDchecksum\\\" }]}\" https://USD(hostname -f)/katello/api/v2/repositories/76/import_uploads",
"curl -H 'Content-Type: application/json' -X DELETE -k -u sat_username:sat_password -d \"{}\" https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id",
"curl -H 'Content-Type: application/json' -X POST -k -u sat_username:sat_password -d \"{\\\"size\\\": \\\"USDsize\\\", \\\"checksum\\\":\\\"USDchecksum\\\"}\" https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads",
"{\"content_unit_href\":\"/pulp/api/v3/content/file/files/c1bcdfb8-d840-4604-845e-86e82454c747/\"}",
"curl -H \"Content-Type:application/json\" -X PUT -u sat_username:sat_password -k \\-d \"{\\\"uploads\\\":[{\\\"content_unit_id\\\": \\\"/pulp/api/v3/content/file/files/c1bcdfb8-d840-4604-845e-86e82454c747/\\\", \\\"name\\\": \\\"USDname\\\", \\ \\\"checksum\\\": \\\"USDchecksum\\\" }]}\" https://USD(hostname -f)/katello/api/v2/repositories/76/import_uploads",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user sat_username:sat_password --insecure --data json-formatted-data https:// satellite7.example.com",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user sat_username:sat_password --insecure --data \"{\\\"organization_id\\\":1,\\\"included\\\":{\\\"search\\\":\\\" my-host \\\"},\\\"content_type\\\":\\\"errata\\\",\\\"content\\\":[\\\" RHBA-2016:1981 \\\"]}\" https:// satellite.example.com /api/v2/hosts/bulk/install_content",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user sat_username:sat_password --insecure --data \"{\\\"organization_id\\\":1,\\\"included\\\":{\\\"search\\\":\\\"host_collection=\\\\\\\" my-collection \\\\\\\"\\\"},\\\"content_type\\\":\\\"errata\\\",\\\"content\\\":[\\\" RHBA-2016:1981 \\\"]}\" https:// satellite.example.com /api/v2/hosts/bulk/install_content",
"curl --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts?search=os_title=\\\"RedHat+7.7\\\",model=\\\"PowerEdge+R330\\\" | python3 -m json.tool",
"{ \"results\": [ { \"model_id\": 1, \"model_name\": \"PowerEdge R330\", \"name\": \" satellite.example.com \", \"operatingsystem_id\": 1, \"operatingsystem_name\": \"RedHat 7.7\", } ], \"search\": \"os_title=\\\"RedHat 7.7\\\",model=\\\"PowerEdge R330\\\"\", \"subtotal\": 1, \"total\": 11 }",
"curl --request GET --user sat_username:sat_password https://satellite.example.com/katello/api/content_views?per_page=10&page=3",
"curl --request GET --user sat_username:sat_password https://satellite.example.com/katello/api/activation_keys?organization_id=1&per_page=30&page=2",
"for i in seq 1 3 ; do curl --request GET --user sat_username:sat_password https://satellite.example.com/katello/api/content_views?per_page=5&page=USDi; done",
"curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/smart_class_parameters",
"curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/puppetclasses/5/smart_class_parameters",
"curl --request GET --insecure --user sat_username:sat_password --data 'search=puppetclass_name = access_insights_client and key = authmethod' https:// satellite.example.com /api/smart_class_parameters",
"curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/smart_class_parameters/ 63",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --insecure --user sat_username:sat_password --data '{\"smart_class_parameter\":{\"override\":true}}' https:// satellite.example.com /api/smart_class_parameters/63",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --insecure --user sat_username:sat_password --data '{\"smart_class_parameter\":{\"override_value\":{\"match\":\"hostgroup=Test\",\"value\":\"2.4.6\"}}}' https:// satellite.example.com /api/smart_class_parameters/63",
"curl --request DELETE --user sat_username:sat_password https:// satellite.example.com /api/smart_class_parameters/63/override_values/3",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request GET --user sat_user:sat_password --insecure https:// satellite.example.com /api/smart_class_parameters?search=puppetclass_name=motd | python3 -m json.tool",
"{ \"avoid_duplicates\": false, \"created_at\": \"2017-02-06 12:37:48 UTC\", # Remove this line. \"default_value\": \"\", # Add a new value here. \"description\": \"\", \"hidden_value\": \"\", \"hidden_value?\": false, \"id\": 3, \"merge_default\": false, \"merge_overrides\": false, \"override\": false, # Set the override value to true . \"override_value_order\": \"fqdn\\nhostgroup\\nos\\ndomain\", \"override_values\": [], # Remove this line. \"override_values_count\": 0, \"parameter\": \"content\", \"parameter_type\": \"string\", \"puppetclass_id\": 3, \"puppetclass_name\": \"motd\", \"required\": false, \"updated_at\": \"2017-02-07 11:56:55 UTC\", # Remove this line. \"use_puppet_default\": false, \"validator_rule\": null, \"validator_type\": \"\" }",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request GET --user sat_user:sat_password --insecure \\` https:// satellite.example.com /api/smart_class_parameters/3 | python3 -m json.tool > output_file.json",
"cp output_file.json changed_file.json",
"{ \"avoid_duplicates\": false, \"created_at\": \"2017-02-06 12:37:48 UTC\", # Remove this line. \"default_value\": \"\", # Add a new value here. \"description\": \"\", \"hidden_value\": \"\", \"hidden_value?\": false, \"id\": 3, \"merge_default\": false, \"merge_overrides\": false, \"override\": false, # Set the override value to true . \"override_value_order\": \"fqdn\\nhostgroup\\nos\\ndomain\", \"override_values\": [], # Remove this line. \"override_values_count\": 0, \"parameter\": \"content\", \"parameter_type\": \"string\", \"puppetclass_id\": 3, \"puppetclass_name\": \"motd\", \"required\": false, \"updated_at\": \"2017-02-07 11:56:55 UTC\", # Remove this line. \"use_puppet_default\": false, \"validator_rule\": null, \"validator_type\": \"\" }",
"{ \"avoid_duplicates\": false, \"default_value\": \" No Unauthorized Access Allowed \", \"description\": \"\", \"hidden_value\": \"\", \"hidden_value?\": false, \"id\": 3, \"merge_default\": false, \"merge_overrides\": false, \"override\": true, \"override_value_order\": \"fqdn\\nhostgroup\\nos\\ndomain\", \"override_values_count\": 0, \"parameter\": \"content\", \"parameter_type\": \"string\", \"puppetclass_id\": 3, \"puppetclass_name\": \"motd\", \"required\": false, \"use_puppet_default\": false, \"validator_rule\": null, \"validator_type\": \"\" }",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user sat_username:sat_password --insecure --data @changed_file.json https:// satellite.example.com /api/smart_class_parameters/3",
"curl --insecure --user username :_password_ https:// satellite.example.com /api/v2/compliance/arf_reports/ | python3 -m json.tool",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 3252 0 3252 0 0 4319 0 --:--:-- --:--:-- --:--:-- 4318 { \"page\": 1, \"per_page\": 20, \"results\": [ { \"created_at\": \"2017-05-16 13:27:09 UTC\", \"failed\": 0, \"host\": \" host1.example.com \", \"id\": 404, \"othered\": 0, \"passed\": 0, \"updated_at\": \"2017-05-16 13:27:09 UTC\" }, { \"created_at\": \"2017-05-16 13:26:07 UTC\", \"failed\": 0, \"host\": \" host2.example.com , \"id\": 405, \"othered\": 0, \"passed\": 0, \"updated_at\": \"2017-05-16 13:26:07 UTC\" }, { \"created_at\": \"2017-05-16 13:25:07 UTC\", \"failed\": 0, \"host\": \" host3.example.com \", \"id\": 406, \"othered\": 0, \"passed\": 0, \"updated_at\": \"2017-05-16 13:25:07 UTC\" }, { \"created_at\": \"2017-05-16 13:24:07 UTC\", \"failed\": 0, \"host\": \" host4.example.com \", \"id\": 407, \"othered\": 0, \"passed\": 0, \"updated_at\": \"2017-05-16 13:24:07 UTC\" }, ], \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"subtotal\": 29, \"total\": 29",
"curl --insecure --user username :_password_ --header \"Content-Type: application/json\" --request DELETE https:// satellite.example.com /api/v2/compliance/arf_reports/405",
"HTTP/1.1 200 OK Date: Thu, 18 May 2017 07:14:36 GMT Server: Apache/2.4.6 (Red Hat Enterprise Linux) X-Frame-Options: SAMEORIGIN X-XSS-Protection: 1; mode=block X-Content-Type-Options: nosniff Foreman_version: 1.11.0.76 Foreman_api_version: 2 Apipie-Checksum: 2d39dc59aed19120d2359f7515e10d76 Cache-Control: max-age=0, private, must-revalidate X-Request-Id: f47eb877-35c7-41fe-b866-34274b56c506 X-Runtime: 0.661831 X-Powered-By: Phusion Passenger 4.0.18 Set-Cookie: request_method=DELETE; path=/ Set-Cookie: _session_id=d58fe2649e6788b87f46eabf8a461edd; path=/; secure; HttpOnly ETag: \"2574955fc0afc47cb5394ce95553f428\" Status: 200 OK Vary: Accept-Encoding Transfer-Encoding: chunked Content-Type: application/json; charset=utf-8",
"#!/bin/bash #this script removes all the arf reports from the satellite server #settings USER= username PASS= password URI=https:// satellite.example.com #check amount of reports while [ USD(curl --insecure --user USDUSER:USDPASS USDURI/api/v2/compliance/arf_reports/ | python3 -m json.tool | grep \\\"\\total\\\": | cut --fields=2 --delimiter\":\" | cut --fields=1 --delimiter\",\" | sed \"s/ //g\") -gt 0 ]; do #fetch reports for i in USD(curl --insecure --user USDUSER:USDPASS USDURI/api/v2/compliance/arf_reports/ | python3 -m json.tool | grep \\\"\\id\\\": | cut --fields=2 --delimiter\":\" | cut --fields=1 --delimiter\",\" | sed \"s/ //g\") #delete reports do curl --insecure --user USDUSER:USDPASS --header \"Content-Type: application/json\" --request DELETE USDURI/api/v2/compliance/arf_reports/USDi done done"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/api_guide/chap-Red_Hat_Satellite-API_Guide-Using_the_Red_Hat_Satellite_API |
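The chapter above notes that you can read the API port from /etc/rhsm/rhsm.conf to fully automate your scripts, and Section 5.6 shows pagination with a for loop. The following sketch, which is not part of the original guide, combines the two ideas; the hostname, credentials, and page range are placeholder assumptions, and the host is assumed to be registered with subscription-manager so that rhsm.conf exists:

#!/bin/bash
# Sketch: detect the API port from rhsm.conf, then page through the host list.
SAT_HOST=satellite.example.com        # assumed hostname
SAT_USER=sat_username                 # assumed credentials
SAT_PASS=sat_password

# The port entry of the [server] section is 443 for Satellite, 8443 for Capsule.
PORT=$(grep -E '^port' /etc/rhsm/rhsm.conf | head -n 1 | cut -d '=' -f 2 | tr -d ' ')

# Quote the URL so the shell does not treat '&' as a background operator.
for page in $(seq 1 3); do
  curl --silent --insecure --user "$SAT_USER:$SAT_PASS" \
    "https://$SAT_HOST:$PORT/api/v2/hosts?per_page=50&page=$page" | python3 -m json.tool
done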
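The cleanup script in Section 5.9 parses JSON with grep, cut, and sed, which breaks easily if the output format changes. The following alternative sketch is not from the original guide and assumes the jq package is installed; the credentials and URI are placeholders:

#!/bin/bash
# Sketch: delete all ARF (OpenSCAP) reports, using jq to parse the JSON.
USER=username
PASS=password
URI=https://satellite.example.com

while true; do
  ids=$(curl --silent --insecure --user "$USER:$PASS" \
        "$URI/api/v2/compliance/arf_reports/" | jq --raw-output '.results[].id')
  [ -z "$ids" ] && break                       # stop when no reports remain
  for i in $ids; do
    curl --silent --insecure --user "$USER:$PASS" \
      --header "Content-Type: application/json" \
      --request DELETE "$URI/api/v2/compliance/arf_reports/$i"
  done
done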
20.9. Configuring a Password-Based Account Lockout Policy | 20.9. Configuring a Password-Based Account Lockout Policy A password-based account lockout policy protects against hackers who try to break into the directory by repeatedly trying to guess a user's password. The password policy can be set so that a specific user is locked out of the directory after a given number of failed attempts to bind. 20.9.1. Configuring the Account Lockout Policy Using the Command Line Use a dsconf pwpolicy set command to configure the account lockout policy settings. For example, to enable the lockout policy and configure that accounts are locked after four failed login attempts: The following parameters control the account password policy: --pwdlockout : Set this parameter to on or off to enable or disable the account lockout feature. --pwdunlock : Set this parameter to on to unlock an account after the lockout duration. --pwdlockoutduration : Sets the number of seconds for which an account will be locked out. --pwdmaxfailures : Sets the maximum number of allowed failed password attempts before the account gets locked. --pwdresetfailcount : Sets the number of seconds before Directory Server resets the failed login count of an account. 20.9.2. Configuring the Account Lockout Policy Using the Web Console To configure the account lockout policy using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database tab, and select Global Password Policy . On the Account Lockout tab, enable Enable Account Lockout setting and set the parameters. For example: To display a tool tip and the corresponding attribute name in the cn=config entry for a parameter, hover the mouse cursor over the setting. For further details, see the parameter's description in the Red Hat Directory Server Configuration, Command, and File Reference . Click Save . 20.9.3. Disabling Legacy Password Lockout Behavior There are different ways of interpreting when the maximum password failure ( passwordMaxFailure ) has been reached. It depends on how the server counts the last failed attempt in the overall failure count. The traditional behavior for LDAP clients is to assume that the failure occurs after the limit has been reached. So, if the failure limit is set to three, then the lockout happens at the fourth failed attempt. This also means that if the fourth attempt is successful, then the user can authenticate successfully, even though the user technically hit the failure limit. This is n+1 on the count. LDAP clients increasingly expect the maximum failure limit to look at the last failed attempt in the count as the final attempt. So, if the failure limit is set to three, then at the third failure, the account is locked. A fourth attempt, even with the correct credentials, fails. This is n on the count. The first scenario - where an account is locked only if the attempt count is exceeded - is the historical behavior, so this is considered a legacy password policy behavior. In Directory Server, this policy is enabled by default, so an account is only locked when the failure count is n+1 . This legacy behavior can be disabled so that newer LDAP clients receive the error ( LDAP_CONSTRAINT_VIOLATION ) when they expect it. This is set in the passwordLegacyPolicy parameter. To disable the legacy password lockout behavior: | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com pwpolicy set --pwdlockout on --pwdmaxfailures=4",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace passwordLegacyPolicy=off"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Managing_the_Password_Policy-Configuring_the_Account_Lockout_Policy |
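After running the dsconf commands above, you can confirm that the lockout attributes were written to cn=config. This is a minimal sketch rather than part of the original procedure; it assumes the openldap-clients tools are installed and that you can bind as cn=Directory Manager:

ldapsearch -H ldap://server.example.com -D "cn=Directory Manager" -W \
  -b "cn=config" -s base \
  passwordLockout passwordMaxFailure passwordLockoutDuration \
  passwordUnlock passwordResetFailureCount passwordLegacyPolicy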
3.2. Compatible Hardware | 3.2. Compatible Hardware Before configuring Red Hat High Availability Add-On software, make sure that your cluster uses appropriate hardware (for example, supported fence devices, storage devices, and Fibre Channel switches). Refer to the Red Hat Hardware Catalog at https://hardware.redhat.com/ for the most current hardware compatibility information. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-hw-compat-CA |
Chapter 45. Introduction to the API Component Framework | Chapter 45. Introduction to the API Component Framework Abstract The API component framework helps you with the challenge of implementing complex Camel components based on a large Java API. 45.1. What is the API Component Framework? Motivation For components with a small number of options, the standard approach to implementing components ( Chapter 38, Implementing a Component ) is quite effective. Where it starts to become problematic, however, is when you need to implement a component with a large number of options. This problem becomes dramatic when it comes to enterprise-level components, which can require you to wrap an API consisting of hundreds of operations. Such components require a large effort to create and maintain. The API component framework was developed precisely to deal with the challenge of implementing such components. Turning APIs into components Experience of implementing Camel components based on Java APIs has shown that a lot of the work is routine and mechanical. It consists of taking a particular Java method, mapping it to a particular URI syntax, and enabling the user to set the method parameters through URI options. This type of work is an obvious candidate for automation and code generation. Generic URI format The first step in automating the implementation of a Java API is to design a standard way of mapping an API method to a URI. For this we need to define a generic URI format, which can be used to wrap any Java API. Hence, the API component framework defines the following syntax for endpoint URIs: Where scheme is the default URI scheme defined by the component; endpoint-prefix is a short API name, which maps to one of the classes or interfaces from the wrapped Java API; endpoint maps to a method name; and the URI options map to method argument names. URI format for a single API class In the case where an API consists of just a single Java class, the endpoint-prefix part of the URI becomes redundant, and you can specify the URI in the following, shorter format: Note To enable this URI format, it is also necessary for the component implementor to leave the apiName element blank in the configuration of the API component Maven plug-in. For more information, see the section called "Configuring the API mapping" . Reflection and metadata In order to map Java method invocations to a URI syntax, it is obvious that some form of reflection mechanism is needed. But the standard Java reflection API suffers from a notable limitation: it does not preserve method argument names. This is a problem, because we need the method argument names in order to generate meaningful URI option names. The solution is to provide metadata in an alternative format: either as Javadoc or in method signature files. Javadoc Javadoc is an ideal form of metadata for the API component framework, because it preserves the complete method signature, including method argument names. It is also easy to generate (particularly, using maven-javadoc-plugin ) and, in many cases, is already provided in a third-party library. Method signature files If Javadoc is unavailable or unsuitable for some reason, the API component framework also supports an alternative source of metadata: method signature files. A signature file is a simple text file which consists of a list of Java method signatures. It is relatively easy to create these files manually by copying and pasting from Java code (and lightly editing the resulting files).
What does the framework consist of? From the perspective of a component developer, the API component framework consists of a number of different elements, as follows: A Maven archetype The camel-archetype-api-component Maven archetype is used to generate skeleton code for the component implementation. A Maven plug-in The camel-api-component-maven-plugin Maven plug-in is responsible for generating the code that implements the mapping between the Java API and the endpoint URI syntax. Specialized base classes To support the programming model of the API component framework, the Apache Camel core provides a specialized API in the org.apache.camel.util.component package. Amongst other things, this API provides specialized base classes for the component, endpoint, consumer, and producer classes. 45.2. How to use the Framework Overview The procedure for implementing a component using the API framework involves a mixture of automated code generation, implementing Java code, and customizing the build by editing Maven POM files. The following figure gives an overview of this development process. Figure 45.1. Using the API Component Framework Java API The starting point for your API component is always a Java API. Generally speaking, in the context of Camel, this usually means a Java client API, which connects to a remote server endpoint. The first question is, where does the Java API come from? Here are a few possibilities: Implement the Java API yourself (though this typically would involve a lot of work and is generally not the preferred approach). Use a third-party Java API. For example, the Apache Camel Box component is based on the third-party Box Java SDK library. Generate the Java API from a language-neutral interface. Javadoc metadata You have the option of providing metadata for the Java API in the form of Javadoc (which is needed for generating code in the API component framework). If you use a third-party Java API from a Maven repository, you will usually find that the Javadoc is already provided in the Maven artifact. But even in the cases where Javadoc is not provided, you can easily generate it, using the maven-javadoc-plugin Maven plug-in. Note Currently, there is a limitation in the processing of Javadoc metadata, such that generic nesting is not supported. For example, java.util.List<String> is supported, but java.util.List<java.util.List<String>> is not. The workaround is to specify the nested generic type as java.util.List<java.util.List> in a signature file. Signature file metadata If for some reason it is not convenient to provide Java API metadata in the form of Javadoc, you have the option of providing metadata in the form of signature files . The signature files consist of a list of method signatures (one method signature per line). These files can be created manually and are needed only at build time. Note the following points about signature files: You must create one signature file for each proxy class (Java API class). The method signatures should not throw an exception. All exceptions raised at runtime are wrapped in a RuntimeCamelException and returned from the endpoint. Class names that specify the type of an argument must be fully-qualified class names (except for the java.lang.* types). There is no mechanism for importing package names. Currently, there is a limitation in the signature parser, such that generic nesting is not supported. For example, java.util.List<String> is supported, whereas java.util.List<java.util.List<String>> is not.
The workaround is to specify the nested generic type as java.util.List<java.util.List> . The following shows a simple example of the contents of a signature file: Generate starting code with the Maven archetype The easiest way to get started developing an API component is to generate an initial Maven project using the camel-archetype-api-component Maven archetype. For details of how to run the archetype, see Section 46.1, "Generate Code with the Maven Archetype" . After you run the Maven archetype, you will find two sub-projects under the generated ProjectName directory: ProjectName -api This project contains the Java API, which forms the basis of the API component. When you build this project, it packages up the Java API in a Maven bundle and generates the requisite Javadoc as well. If the Java API and Javadoc are already provided by a third-party, however, you do not need this sub-project. ProjectName -component This project contains the skeleton code for the API component. Edit component classes You can edit the skeleton code in ProjectName -component to develop your own component implementation. The following generated classes make up the core of the skeleton implementation: Customize POM files You also need to edit the Maven POM files to customize the build, and to configure the camel-api-component-maven-plugin Maven plug-in. Configure the camel-api-component-maven-plugin The most important aspect of configuring the POM files is the configuration of the camel-api-component-maven-plugin Maven plug-in. This plug-in is responsible for generating the mapping between API methods and endpoint URIs, and by editing the plug-in configuration, you can customize the mapping. For example, in the ProjectName -component/pom.xml file, the following camel-api-component-maven-plugin plug-in configuration shows a minimal configuration for an API class called ExampleJavadocHello . In this example, the hello-javadoc API name is mapped to the ExampleJavadocHello class, which means you can invoke methods from this class using URIs of the form, scheme ://hello-javadoc/ endpoint . The presence of the fromJavadoc element indicates that the ExampleJavadocHello class gets its metadata from Javadoc. OSGi bundle configuration The sample POM for the component sub-project, ProjectName -component/pom.xml , is configured to package the component as an OSGi bundle. The component POM includes a sample configuration of the maven-bundle-plugin . You should customize the configuration of the maven-bundle-plugin plug-in, to ensure that Maven generates a properly configured OSGi bundle for your component. Build the component When you build the component with Maven (for example, by using mvn clean package ), the camel-api-component-maven-plugin plug-in automatically generates the API mapping classes (which define the mapping between the Java API and the endpoint URI syntax), placing them into the target/classes project subdirectory. When you are dealing with a large and complex Java API, this generated code actually constitutes the bulk of the component source code. When the Maven build completes, the compiled code and resources are packaged up as an OSGi bundle and stored in your local Maven repository as a Maven artifact. | [
"scheme :// endpoint-prefix / endpoint ? Option1 = Value1 &...& OptionN = ValueN",
"scheme :// endpoint ? Option1 = Value1 &...& OptionN = ValueN",
"public String sayHi(); public String greetMe(String name); public String greetUs(String name1, String name2);",
"ComponentName Component ComponentName Endpoint ComponentName Consumer ComponentName Producer ComponentName Configuration",
"<configuration> <apis> <api> <apiName>hello-javadoc</apiName> <proxyClass>org.jboss.fuse.example.api.ExampleJavadocHello</proxyClass> <fromJavadoc/> </api> </apis> </configuration>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/introframework |
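The chapter above refers to generating the skeleton project with the camel-archetype-api-component Maven archetype and building it with mvn clean package. The following sketch shows one way to invoke it; the archetype groupId and the property names are assumptions, so check the Camel or Fuse release you are using for the exact coordinates and required properties:

# Generate the skeleton (interactive prompts fill in anything omitted here).
mvn archetype:generate \
  -DarchetypeGroupId=org.apache.camel.archetypes \
  -DarchetypeArtifactId=camel-archetype-api-component \
  -DgroupId=org.jboss.fuse.example \
  -DartifactId=example-api-component \
  -Dname=Example \
  -Dscheme=example

# Build both sub-projects; the -component build runs
# camel-api-component-maven-plugin to generate the API mapping classes
# under target/classes.
cd example-api-component
mvn clean package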
Installing | Installing Red Hat Advanced Cluster Security for Kubernetes 4.5 Installing Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/installing/index |
7.6. Additional Resources | 7.6. Additional Resources There is a large amount of detailed information available about the X server, the clients that connect to it, and the assorted desktop environments and window managers. 7.6.1. Installed Documentation /usr/X11R6/lib/X11/doc/README - Briefly describes the XFree86 architecture and how to get additional information about the XFree86 project as a new user. /usr/X11R6/lib/X11/doc/RELNOTES - For advanced users that want to read about the latest features available in XFree86. man xorg.conf - Contains information about the xorg.conf configuration files, including the meaning and syntax for the different sections within the files. man X.Org - The primary man page for X.Org Foundation information. man Xorg - Describes the X11R6.8 display server. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-x-additional-resources |
Chapter 17. Inviting users to your RHACS instance | Chapter 17. Inviting users to your RHACS instance By inviting users to Red Hat Advanced Cluster Security for Kubernetes (RHACS), you can ensure that the right users have the appropriate access rights within your cluster. You can invite one or more users by assigning roles and defining the authentication provider. 17.1. Configuring access control and sending invitations By configuring access control in the RHACS portal, you can invite users to your RHACS instance. Procedure In the RHACS portal, go to the Platform Configuration Access Control Auth providers tab, and then click Invite users . In the Invite users dialog box, provide the following information: Emails to invite : Enter one or more email addresses of the users you want to invite. Ensure that they are valid email addresses associated with the intended recipients. Provider : From the drop-down list, select a provider you want to use for each invited user. Important If you have only one authentication provider available, it is selected by default. If multiple authentication providers are available and at least one of them is Red Hat SSO or Default Internal SSO , that provider is selected by default. If multiple authentication providers are available, but none of them is Red Hat SSO or Default Internal SSO , you are prompted to select one manually. If you have not yet set up an authentication provider, a warning message appears and the form is disabled. Click the link, which takes you to the Access Control section to configure an authentication provider. Role : From the drop-down list, select the role to assign to each invited user. Click Invite users . On the confirmation dialog box, you receive a confirmation that the users have been created with the selected role. Copy the one or more email addresses and the message into an email that you create in your own email client, and send it to the users. Click Done . Verification In the RHACS portal, go to the Platform Configuration Access Control Auth providers tab. Select the authentication provider you used to invite users. Scroll down to the Rules section. Verify that the user emails and authentication provider roles have been added to the list. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/configuring/inviting-users-to-your-rhacs-instance |
Chapter 2. Performing the pre-customization tasks | Chapter 2. Performing the pre-customization tasks 2.1. Working with ISO images In this section, you will learn how to: Extract a Red Hat ISO. Create a new boot image containing your customizations. 2.2. Downloading RH boot images Before you begin to customize the installer, download the Red Hat-provided boot images. You can obtain Red Hat Enterprise Linux 9 boot media from the Red Hat Customer Portal after logging in to your account. Note Your account must have sufficient entitlements to download Red Hat Enterprise Linux 9 images. You must download either the Binary DVD or Boot ISO image and can use any of the image variants (Server or ComputeNode). You cannot customize the installer using the other available downloads, such as the KVM Guest Image or Supplementary DVD . For more information about the Binary DVD and Boot ISO downloads, see Product Downloads . 2.3. Extracting Red Hat Enterprise Linux boot images Perform the following procedure to extract the contents of a boot image. Procedure Ensure that the directory /mnt/iso exists and nothing is currently mounted there. Mount the downloaded image. Where path/to/image.iso is the path to the downloaded boot image. Create a working directory where you want to place the contents of the ISO image. Copy all contents of the mounted image to your new working directory. Make sure to use the -p option to preserve file and directory permissions and ownership. Unmount the image. Additional resources For more information about the Binary DVD and Boot ISO downloads, see Product Downloads . | [
"mount -t iso9660 -o loop path/to/image.iso /mnt/iso",
"mkdir /tmp/ISO",
"cp -pRf /mnt/iso /tmp/ISO",
"umount /mnt/iso"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/customizing_anaconda/working-with-iso-images_customizing-anaconda |
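The procedure above asks you to make sure that /mnt/iso exists and that nothing is mounted there before mounting the image. A small wrapper such as the following sketch, which is not part of the original chapter, performs those checks; the image path and working directory are placeholders:

#!/bin/bash
# Sketch: extract a boot image with the checks described in the procedure.
IMAGE=path/to/image.iso
WORKDIR=/tmp/ISO

mkdir -p /mnt/iso
if mountpoint -q /mnt/iso; then
    echo "/mnt/iso is already in use; unmount it first" >&2
    exit 1
fi

mount -t iso9660 -o loop "$IMAGE" /mnt/iso
mkdir -p "$WORKDIR"
cp -pRf /mnt/iso "$WORKDIR"    # -p preserves permissions and ownership
umount /mnt/iso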
Chapter 3. Verifying OpenShift Data Foundation deployment for Internal-attached devices mode | Chapter 3. Verifying OpenShift Data Foundation deployment for Internal-attached devices mode Use this section to verify that OpenShift Data Foundation is deployed correctly. 3.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node csi-addons-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 3.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard under Overview tab, verify that both Storage Cluster and Data Resiliency has a green tick mark. In the Details card , verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 3.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. 
Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_ibm_z/verifying_openshift_data_foundation_deployment_for_internal_attached_devices_mode
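The checks in this chapter are made in the web console, but the same state can be inspected from the command line. The following sketch is not part of the original guide; it assumes the oc client is logged in to the cluster and that the default ocs-storagecluster names are in use:

# Pod status for each component listed in Section 3.1
oc get pods -n openshift-storage

# Overall cluster health (Section 3.2)
oc get storagecluster -n openshift-storage

# Storage classes created by the deployment (Section 3.4)
oc get storageclass | grep -E 'ocs-storagecluster|noobaa'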
Chapter 3. Integrating with an existing Ceph Storage cluster | Chapter 3. Integrating with an existing Ceph Storage cluster To integrate Red Hat OpenStack Platform (RHOSP) with an existing Red Hat Ceph Storage cluster, you must install the ceph-ansible package. After that, you can create custom environment files to override and provide values for configuration options within OpenStack components. 3.1. Installing the ceph-ansible package The Red Hat OpenStack Platform director uses ceph-ansible to integrate with an existing Ceph Storage cluster, but ceph-ansible is not installed by default on the undercloud. Procedure Enter the following command to install the ceph-ansible package on the undercloud: 3.2. Creating a custom environment file Director supplies parameters to ceph-ansible to integrate with an external Red Hat Ceph Storage cluster through the environment file: /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml If you deploy the Shared File Systems service (manila) with external CephFS, separate environment files supply additional parameters: For native CephFS, the environment file is /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml . For CephFS through NFS, the environment file is /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml . To configure integration of an existing Ceph Storage cluster with the overcloud, you must supply the details of your Ceph Storage cluster to director by using a custom environment file. Director invokes these environment files during deployment. Procedure Create a custom environment file: /home/stack/templates/ceph-config.yaml Add a parameter_defaults: section to the file: Use parameter_defaults to set all of the parameters that you want to override in /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml . You must set the following parameters at a minimum: CephClientKey : The Ceph client key for the client.openstack user in your Ceph Storage cluster. This is the value of key you retrieved in Configuring the existing Ceph Storage cluster . For example, AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ== . CephClusterFSID : The file system ID of your Ceph Storage cluster. This is the value of fsid in your Ceph Storage cluster configuration file, which you retrieved in Configuring the existing Ceph Storage cluster . For example, 4b5c8c0a-ff60-454b-a1b4-9747aa737d19 . CephExternalMonHost : A comma-delimited list of the IPs of all MON hosts in your Ceph Storage cluster, for example, 172.16.1.7, 172.16.1.8 . For example: Optional: You can override the Red Hat OpenStack Platform (RHOSP) client username and the following default pool names to match your Ceph Storage cluster: CephClientUserName: <openstack> NovaRbdPoolName: <vms> CinderRbdPoolName: <volumes> GlanceRbdPoolName: <images> CinderBackupRbdPoolName: <backups> GnocchiRbdPoolName: <metrics> Optional: If you are deploying the Shared File Systems service with CephFS, you can override the following default data and metadata pool names: Note Ensure that these names match the names of the pools you created. Set the client key that you created for the Shared File Systems service. You can override the default Ceph client username for that key: Note The default client username ManilaCephFSCephFSAuthId is manila , unless you override it. CephManilaClientKey is always required. 
After you create the custom environment file, you must include it when you deploy the overcloud. Additional resources Deploying the overcloud 3.3. Ceph containers for Red Hat OpenStack Platform with Ceph Storage To configure Red Hat OpenStack Platform (RHOSP) to use Red Hat Ceph Storage with NFS Ganesha, you must have a Ceph container. To be compatible with Red Hat Enterprise Linux 8, RHOSP 16 requires Red Hat Ceph Storage 4 or 5 (Ceph package 14.x or Ceph package 16.x). The Ceph Storage 4 and 5 containers are hosted at registry.redhat.io , a registry that requires authentication. For more information, see Container image preparation parameters . 3.4. Deploying the overcloud Deploy the overcloud with the environment file that you created. Procedure The creation of the overcloud requires additional arguments for the openstack overcloud deploy command: This example command uses the following options: --templates - Creates the overcloud from the default heat template collection, /usr/share/openstack-tripleo-heat-templates/ . -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml - Sets the director to integrate an existing Ceph cluster to the overcloud. -e /home/stack/templates/ceph-config.yaml - Adds a custom environment file to override the defaults set by -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml . In this case, it is the custom environment file you created in Installing the ceph-ansible package . --ntp-server pool.ntp.org - Sets the NTP server. 3.4.1. Adding environment files for the Shared File Systems service with CephFS If you deploy an overcloud that uses the Shared File Systems service (manila) with CephFS, you must add additional environment files. Procedure Create and add additional environment files: If you deploy an overcloud that uses the native CephFS back-end driver, add /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml . If you deploy an overcloud that uses CephFS through NFS, add /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml . Red Hat recommends that you deploy the Ceph-through-NFS driver with an isolated StorageNFS network where shares are exported. You must deploy the isolated network to overcloud controller nodes. To enable this deployment, director includes the following file and role: An example custom network configuration file that includes the StorageNFS network ( /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml ). Review and customize this file as necessary. A ControllerStorageNFS role. Modify the openstack overcloud deploy command depending on the CephFS back end that you use. For native CephFS: For CephFS through NFS: Note The custom ceph-config.yaml environment file overrides parameters in the ceph-ansible-external.yaml file and either the manila-cephfsnative-config.yaml file or the manila-cephfsganesha-config.yaml file. Therefore, include the custom ceph-config.yaml environment file in the deployment command after ceph-ansible-external.yaml and either manila-cephfsnative-config.yaml or manila-cephfsganesha-config.yaml . Example environment file Replace <cluster_ID> , <IP_address> , and <client_key> with values that are suitable for your environment. Additional resources For more information about generating a custom roles file, see Deploying the Shared File Systems service with CephFS through NFS . 3.4.2. 
Adding an additional environment file for external Ceph Object Gateway (RGW) for Object storage If you deploy an overcloud that uses an already existing RGW service for Object storage, you must add an additional environment file. Procedure Add the following parameter_defaults to a custom environment file, for example, swift-external-params.yaml , and adjust the values to suit your deployment: Note The example code snippet contains parameter values that might differ from values that you use in your environment: The default port where the remote RGW instance listens is 8080 . The port might be different depending on how the external RGW is configured. The swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password to authenticate with the Identity service by using the rgw_keystone_admin_password . Add the following code to the Ceph config file to configure RGW to use the Identity service. Replace the variable values to suit your environment: Note Director creates the following roles and users in the Identity service by default: rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator rgw_keystone_admin_domain: default rgw_keystone_admin_project: service rgw_keystone_admin_user: swift Deploy the overcloud with the additional environment files with any other environment files that are relevant to your deployment: | [
"sudo dnf install -y ceph-ansible",
"parameter_defaults:",
"parameter_defaults: CephClientKey: <AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==> CephClusterFSID: <4b5c8c0a-ff60-454b-a1b4-9747aa737d19> CephExternalMonHost: <172.16.1.7, 172.16.1.8>",
"ManilaCephFSDataPoolName: <manila_data> ManilaCephFSMetadataPoolName: <manila_metadata>",
"ManilaCephFSCephFSAuthId: <manila> CephManilaClientKey: <AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==>",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml -e /home/stack/templates/ceph-config.yaml -e --ntp-server pool.ntp.org",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml -e /home/stack/templates/ceph-config.yaml -e --ntp-server pool.ntp.org",
"openstack overcloud deploy --templates -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml -r /home/stack/custom_roles.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml -e /home/stack/templates/ceph-config.yaml -e --ntp-server pool.ntp.org",
"parameter_defaults: CinderEnableIscsiBackend: false CinderEnableRbdBackend: true CinderEnableNfsBackend: false NovaEnableRbdBackend: true GlanceBackend: rbd CinderRbdPoolName: \"volumes\" NovaRbdPoolName: \"vms\" GlanceRbdPoolName: \"images\" CinderBackupRbdPoolName: \"backups\" GnocchiRbdPoolName: \"metrics\" CephClusterFSID: <cluster_ID> CephExternalMonHost: <IP_address>,<IP_address>,<IP_address> CephClientKey: \"<client_key>\" CephClientUserName: \"openstack\" ManilaCephFSDataPoolName: manila_data ManilaCephFSMetadataPoolName: manila_metadata ManilaCephFSCephFSAuthId: 'manila' CephManilaClientKey: '<client_key>' ExtraConfig: ceph::profile::params::rbd_default_features: '1'",
"parameter_defaults: ExternalSwiftPublicUrl: 'http://<Public RGW endpoint or loadbalancer>:8080/swift/v1/AUTH_%(project_id)s' ExternalSwiftInternalUrl: 'http://<Internal RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s' ExternalSwiftAdminUrl: 'http://<Admin RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s' ExternalSwiftUserTenant: 'service' SwiftPassword: 'choose_a_random_password'",
"rgw_keystone_api_version = 3 rgw_keystone_url = http://<public Keystone endpoint>:5000/ rgw_keystone_accepted_roles = member, Member, admin rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator rgw_keystone_admin_domain = default rgw_keystone_admin_project = service rgw_keystone_admin_user = swift rgw_keystone_admin_password = <password_as_defined_in_the_environment_parameters> rgw_keystone_implicit_tenants = true rgw_keystone_revocation_interval = 0 rgw_s3_auth_use_keystone = true rgw_swift_versioning_enabled = true rgw_swift_account_in_url = true",
"openstack overcloud deploy --templates -e <your_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml -e swift-external-params.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/integrating_an_overcloud_with_an_existing_red_hat_ceph_storage_cluster/assembly-integrate-with-an-existing-ceph-storage-cluster_preparing-overcloud-nodes |
API reference | API reference Red Hat Advanced Cluster Security for Kubernetes 4.5 API Reference guide for Red Hat Advanced Cluster Security for Kubernetes. Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/index |
5.191. mingw32-libxml2 | 5.191. mingw32-libxml2 5.191.1. RHSA-2013:0217 - Important: mingw32-libxml2 security update Updated mingw32-libxml2 packages that fix several security issues are now available for Red Hat Enterprise Linux 6. This advisory also contains information about future updates for the mingw32 packages, as well as the deprecation of the packages with the release of Red Hat Enterprise Linux 6.4. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. These packages provide the libxml2 library, a development toolbox providing the implementation of various XML standards, for users of MinGW (Minimalist GNU for Windows). Security Fixes CVE-2011-3919 Important The mingw32 packages in Red Hat Enterprise Linux 6 will no longer be updated proactively and will be deprecated with the release of Red Hat Enterprise Linux 6.4. These packages were provided to support other capabilities in Red Hat Enterprise Linux and were not intended for direct customer use. Customers are advised to not use these packages with immediate effect. Future updates to these packages will be at Red Hat's discretion and these packages may be removed in a future minor release. A heap-based buffer overflow flaw was found in the way libxml2 decoded entity references with long names. A remote attacker could provide a specially-crafted XML file that, when opened in an application linked against libxml2, would cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. CVE-2012-5134 A heap-based buffer underflow flaw was found in the way libxml2 decoded certain entities. A remote attacker could provide a specially-crafted XML file that, when opened in an application linked against libxml2, would cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. CVE-2012-0841 It was found that the hashing routine used by libxml2 arrays was susceptible to predictable hash collisions. Sending a specially-crafted message to an XML service could result in longer processing time, which could lead to a denial of service. To mitigate this issue, randomization has been added to the hashing function to reduce the chance of an attacker successfully causing intentional collisions. CVE-2010-4008 , CVE-2010-4494 , CVE-2011-2821 , CVE-2011-2834 Multiple flaws were found in the way libxml2 parsed certain XPath (XML Path Language) expressions. If an attacker were able to supply a specially-crafted XML file to an application using libxml2, as well as an XPath expression for that application to run against the crafted file, it could cause the application to crash. CVE-2011-0216 , CVE-2011-3102 Two heap-based buffer overflow flaws were found in the way libxml2 decoded certain XML files. A remote attacker could provide a specially-crafted XML file that, when opened in an application linked against libxml2, would cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. CVE-2011-1944 An integer overflow flaw, leading to a heap-based buffer overflow, was found in the way libxml2 parsed certain XPath expressions. 
If an attacker were able to supply a specially-crafted XML file to an application using libxml2, as well as an XPath expression for that application to run against the crafted file, it could cause the application to crash or, possibly, execute arbitrary code. CVE-2011-3905 An out-of-bounds memory read flaw was found in libxml2. A remote attacker could provide a specially-crafted XML file that, when opened in an application linked against libxml2, would cause the application to crash. Red Hat would like to thank the Google Security Team for reporting the CVE-2010-4008 issue. Upstream acknowledges Bui Quang Minh from Bkis as the original reporter of CVE-2010-4008. All users of mingw32-libxml2 are advised to upgrade to these updated packages, which contain backported patches to correct these issues. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/mingw32-libxml2 |
5.4. Configure CXF for a Web Service Data Source: Logging | 5.4. Configure CXF for a Web Service Data Source: Logging CXF configuration can control the logging of requests and responses for specific or all ports. Logging, when enabled, is performed at an INFO level to the org.apache.cxf.interceptor context. Prerequisites The web service data source must be configured and the ConfigFile and EndPointName properties must be configured for CXF. Procedure 5.3. Configure CXF for a Web Service Data Source: Logging Modify the CXF Configuration File Open the CXF configuration file for the web service data source and add your desired logging properties. The following is an example of a CXF configuration file for a web service data source that enables logging: References For more information about CXF logging configuration options see http://cxf.apache.org/docs/debugging-and-logging.html . | [
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxws=\"http://cxf.apache.org/jaxws\" xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd\"> <jaxws:client name=\"{http://teiid.org}teiid\" createdFromAPI=\"true\"> <jaxws:features> <bean class=\"org.apache.cxf.feature.LoggingFeature\"/> </jaxws:features> </jaxws:client> </beans>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/configure_cxf_for_a_web_service_data_source_logging1 |
4.2. SELinux and Mandatory Access Control (MAC) | 4.2. SELinux and Mandatory Access Control (MAC) Security-Enhanced Linux (SELinux) is an implementation of MAC in the Linux kernel, checking for allowed operations after standard discretionary access controls (DAC) are checked. SELinux can enforce a user-customizable security policy on running processes and their actions, including attempts to access file system objects. Enabled by default in Red Hat Enterprise Linux, SELinux limits the scope of potential damage that can result from the exploitation of vulnerabilities in applications and system services, such as the hypervisor. sVirt integrates with libvirt, a virtualization management abstraction layer, to provide a MAC framework for virtual machines. This architecture allows all virtualization platforms supported by libvirt and all MAC implementations supported by sVirt to interoperate. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_security_guide/sect-virtualization_security_guide-svirt-mac |
Chapter 88. Configuring a logging utility in the decision engine | Chapter 88. Configuring a logging utility in the decision engine The decision engine uses the Java logging API SLF4J for system logging. You can use one of the following logging utilities with the decision engine to investigate decision engine activity, such as for troubleshooting or data gathering: Logback Apache Commons Logging Apache Log4j java.util.logging package Procedure For the logging utility that you want to use, add the relevant dependency to your Maven project or save the relevant XML configuration file in the org.drools package of your Red Hat Process Automation Manager distribution: Example Maven dependency for Logback <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency> Example logback.xml configuration file in org.drools package <configuration> <logger name="org.drools" level="debug"/> ... <configuration> Example log4j.xml configuration file in org.drools package <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/"> <category name="org.drools"> <priority value="debug" /> </category> ... </log4j:configuration> Note If you are developing for an ultra light environment, use the slf4j-nop or slf4j-simple logger. | [
"<dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency>",
"<configuration> <logger name=\"org.drools\" level=\"debug\"/> <configuration>",
"<log4j:configuration xmlns:log4j=\"http://jakarta.apache.org/log4j/\"> <category name=\"org.drools\"> <priority value=\"debug\" /> </category> </log4j:configuration>"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/logging-proc_decision-engine |
Chapter 6. PriorityClass [scheduling.k8s.io/v1] | Chapter 6. PriorityClass [scheduling.k8s.io/v1] Description PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer. Type object Required value 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources description string description is an arbitrary string that usually provides guidelines on when this priority class should be used. globalDefault boolean globalDefault specifies whether this PriorityClass should be considered as the default priority for pods that do not have any priority class. Only one PriorityClass can be marked as globalDefault . However, if more than one PriorityClasses exists with their globalDefault field set to true, the smallest value of such global default PriorityClasses will be used as the default priority. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata preemptionPolicy string preemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. Possible enum values: - "Never" means that pod never preempts other pods with lower priority. - "PreemptLowerPriority" means that pod can preempt other pods with lower priority. value integer value represents the integer value of this priority class. This is the actual priority that pods receive when they have the name of this class in their pod spec. 6.2. API endpoints The following API endpoints are available: /apis/scheduling.k8s.io/v1/priorityclasses DELETE : delete collection of PriorityClass GET : list or watch objects of kind PriorityClass POST : create a PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses GET : watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. /apis/scheduling.k8s.io/v1/priorityclasses/{name} DELETE : delete a PriorityClass GET : read the specified PriorityClass PATCH : partially update the specified PriorityClass PUT : replace the specified PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} GET : watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/scheduling.k8s.io/v1/priorityclasses HTTP method DELETE Description delete collection of PriorityClass Table 6.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.2. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PriorityClass Table 6.3. HTTP responses HTTP code Reponse body 200 - OK PriorityClassList schema 401 - Unauthorized Empty HTTP method POST Description create a PriorityClass Table 6.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.5. Body parameters Parameter Type Description body PriorityClass schema Table 6.6. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 202 - Accepted PriorityClass schema 401 - Unauthorized Empty 6.2.2. /apis/scheduling.k8s.io/v1/watch/priorityclasses HTTP method GET Description watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. Table 6.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/scheduling.k8s.io/v1/priorityclasses/{name} Table 6.8. Global path parameters Parameter Type Description name string name of the PriorityClass HTTP method DELETE Description delete a PriorityClass Table 6.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PriorityClass Table 6.11. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PriorityClass Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PriorityClass Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.15. Body parameters Parameter Type Description body PriorityClass schema Table 6.16. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty 6.2.4. /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} Table 6.17. Global path parameters Parameter Type Description name string name of the PriorityClass HTTP method GET Description watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/schedule_and_quota_apis/priorityclass-scheduling-k8s-io-v1 |
Chapter 3. Important Changes to External Kernel Parameters | Chapter 3. Important Changes to External Kernel Parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 7. These changes include added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. New kernel parameters audit = [KNL] This parameter enables the audit sub-system. The value is either 1 = enabled or 0 = disabled. The default value is unset, which is not a new option, but it was previously undocumented. Format: { "0" | "1" } audit_backlog_limit = [KNL] This parameter sets the audit queue size limit. The default value is 64. Format: <int> (must be >=0) ipcmni_extend [KNL] This parameter extends the maximum number of unique System V IPC identifiers from 32 768 to 16 777 216. nospectre_v1 [X86,PPC] This parameter disables mitigations for Spectre Variant 1 (bounds check bypass). With this option data leaks are possible in the system. tsx = [X86] This parameter controls Transactional Synchronization Extensions (TSX) feature in Intel processors that support TSX control. The options are: on - Enable TSX on the system. Although there are mitigations for all known security vulnerabilities, TSX has been known to be an accelerator for several speculation-related CVEs, and so there may be unknown security risks associated with leaving it enabled. off - Disable TSX on the system. Note that this option takes effect only on newer CPUs which are not vulnerable to Microarchitectural Data Sampling (MDS). In other words, they have MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1 and get the new IA32_TSX_CTRL Model-specific register (MSR) through a microcode update. This new MSR allows for a reliable deactivation of the TSX functionality. auto - Disable TSX if X86_BUG_TAA is present, otherwise enable TSX on the system. Not specifying this option is equivalent to tsx=on as Red Hat has implicitly made TSX enabled. For more details, see documentation of TAA - TSX Asynchronous Abort . tsx_async_abort = [X86,INTEL] This parameter controls mitigation for the TSX Async Abort (TAA) vulnerability. Similar to Micro-architectural Data Sampling (MDS), certain CPUs that support Transactional Synchronization Extensions (TSX) are vulnerable to an exploit against CPU internal buffers. The exploit can forward information to a disclosure gadget under certain conditions. In vulnerable processors, the speculatively forwarded data can be used in a cache side channel attack, to access data to which the attacker does not have direct access. The options are: full - Enable TAA mitigation on vulnerable CPUs if TSX is enabled. full,nosmt - Enable TAA mitigation and disable Simultaneous Multi Threading (SMT) on vulnerable CPUs. If TSX is disabled, SMT is not disabled because CPU is not vulnerable to cross-thread TAA attacks. off - Unconditionally disable TAA mitigation. On MDS-affected machines, the tsx_async_abort=off parameter can be prevented by an active MDS mitigation as both vulnerabilities are mitigated with the same mechanism. Therefore, to disable this mitigation, you need to specify the mds=off parameter as well. Not specifying this option is equivalent to tsx_async_abort=full . On CPUs which are MDS affected and deploy MDS mitigation, TAA mitigation is not required and does not provide any additional mitigation. For more details, see documentation of TAA - TSX Asynchronous Abort . 
Updated kernel parameters mitigations = [X86,PPC,S390] Controls optional mitigations for CPU vulnerabilities. This is a set of curated, arch-independent options, each of which is an aggregation of existing arch-specific options. The options are: off - Disable all optional CPU mitigations. This improves system performance, but it may also expose users to several CPU vulnerabilities. Equivalent to: nopti [X86,PPC] nospectre_v1 [X86,PPC] nobp=0 [S390] nospectre_v2 [X86,PPC,S390] spec_store_bypass_disable=off [X86,PPC] l1tf=off [X86] mds=off [X86] tsx_async_abort=off [X86] kvm.nx_huge_pages=off [X86] Exceptions: mitigations=off does not have any effect on the kvm.nx_huge_pages parameter if kvm.nx_huge_pages=force . auto (default) - Mitigate all CPU vulnerabilities, but leave Simultaneous multithreading (SMT) enabled, even if it is vulnerable. This is for users who do not want to be surprised by SMT getting disabled across kernel upgrades, or who have other ways of avoiding SMT-based attacks. Equivalent to: (default behavior) auto,nosmt - Mitigate all CPU vulnerabilities, disabling Simultaneous multithreading (SMT) if needed. This is for users who always want to be fully mitigated, even if it means losing SMT. Equivalent to: l1tf=flush,nosmt [X86] mds=full,nosmt [X86] tsx_async_abort=full,nosmt [X86] New /proc/sys/fs parameters negative-dentry-limit The integer value of this parameter specifies a soft limit on the total number of negative dentries allowed in a system as a percentage of the total system memory available. The allowable range for this value is 0-100. A value of 0 means there is no limit. Each unit represents 0.1% of the total system memory. So 10% is the maximum that can be specified. On an AMD64 or Intel 64 system with 32GB of memory, a 1% limit translates to about 1.7 million dentries or about 53 thousand dentries per GB of memory. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.8_release_notes/kernel_parameters_changes |
18.4. RAID Support in the Anaconda Installer | 18.4. RAID Support in the Anaconda Installer The Anaconda installer automatically detects any hardware and firmware RAID sets on a system, making them available for installation. Anaconda also supports software RAID using mdraid , and can recognize existing mdraid sets. Anaconda provides utilities for creating RAID sets during installation; however, these utilities only allow partitions (as opposed to entire disks) to be members of new sets. To use an entire disk for a set, create a partition on it spanning the entire disk, and use that partition as the RAID set member. When the root file system uses a RAID set, Anaconda adds special kernel command-line options to the bootloader configuration telling the initrd which RAID set(s) to activate before searching for the root file system. For instructions on configuring RAID during installation, see the Red Hat Enterprise Linux 7 Installation Guide . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/raidinstall |
Chapter 1. Release notes for Logging | Chapter 1. Release notes for Logging Note The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X where X is the version of logging you have installed. 1.1. Logging 5.6.11 This release includes OpenShift Logging Bug Fix Release 5.6.11 . 1.1.1. Bug fixes Before this update, the LokiStack gateway cached authorized requests very broadly. As a result, this caused wrong authorization results. With this update, LokiStack gateway caches on a more fine-grained basis which resolves this issue. ( LOG-4435 ) 1.1.2. CVEs CVE-2023-3899 CVE-2023-32360 CVE-2023-34969 1.2. Logging 5.6.9 This release includes OpenShift Logging Bug Fix Release 5.6.9 . 1.2.1. Bug fixes Before this update, when multiple roles were used to authenticate using STS with AWS Cloudwatch forwarding, a recent update caused the credentials to be non-unique. With this update, multiple combinations of STS roles and static credentials can once again be used to authenticate with AWS Cloudwatch. ( LOG-4084 ) Before this update, the Vector collector occasionally panicked with the following error message in its log: thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9 . With this update, the error has been resolved. ( LOG-4276 ) Before this update, Loki filtered label values for active streams but did not remove duplicates, making Grafana's Label Browser unusable. With this update, Loki filters out duplicate label values for active streams, resolving the issue. ( LOG-4390 ) 1.2.2. CVEs CVE-2020-24736 CVE-2022-48281 CVE-2023-1667 CVE-2023-2283 CVE-2023-24329 CVE-2023-26604 CVE-2023-28466 CVE-2023-32233 1.3. Logging 5.6.8 This release includes OpenShift Logging Bug Fix Release 5.6.8 . 1.3.1. Bug fixes Before this update, the vector collector terminated unexpectedly when input match label values contained a / character within the ClusterLogForwarder . This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. ( LOG-4091 ) Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the more data available option loaded more log entries only the first time it was clicked. With this update, more entries are loaded with each click. ( OU-187 ) Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the streaming option would only display the streaming logs message without showing the actual logs. With this update, both the message and the log stream are displayed correctly. ( OU-189 ) Before this update, the Loki Operator reset errors in a way that made identifying configuration problems difficult to troubleshoot. With this update, errors persist until the configuration error is resolved. ( LOG-4158 ) Before this update, clusters with more than 8,000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, the default value for header size has been increased, resolving the issue. ( LOG-4278 ) 1.3.2. 
CVEs CVE-2020-24736 CVE-2022-48281 CVE-2023-1667 CVE-2023-2283 CVE-2023-24329 CVE-2023-26604 CVE-2023-28466 1.4. Logging 5.6.7 This release includes OpenShift Logging Bug Fix Release 5.6.7 . 1.4.1. Bug fixes Before this update, the LokiStack gateway returned label values for namespaces without applying the access rights of a user. With this update, the LokiStack gateway applies permissions to label value requests, resolving the issue. ( LOG-3728 ) Before this update, the time field of log messages did not parse as structured.time by default in Fluentd when the messages included a timestamp. With this update, parsed log messages will include a structured.time field if the output destination supports it. ( LOG-4090 ) Before this update, the LokiStack route configuration caused queries running longer than 30 seconds to time out. With this update, the LokiStack global and per-tenant queryTimeout settings affect the route timeout settings, resolving the issue. ( LOG-4130 ) Before this update, LokiStack CRs with values defined for tenant limits but not global limits caused the Loki Operator to crash. With this update, the Operator is able to process LokiStack CRs with only tenant limits defined, resolving the issue. ( LOG-4199 ) Before this update, the OpenShift Container Platform web console generated errors after an upgrade due to cached files of the prior version retained by the web browser. With this update, these files are no longer cached, resolving the issue. ( LOG-4099 ) Before this update, Vector generated certificate errors when forwarding to the default Loki instance. With this update, logs can be forwarded without errors to Loki by using Vector. ( LOG-4184 ) Before this update, the Cluster Logging Operator API required a certificate to be provided by a secret when the tls.insecureSkipVerify option was set to true . With this update, the Cluster Logging Operator API no longer requires a certificate to be provided by a secret in such cases. The following configuration has been added to the Operator's CR: tls.verify_certificate = false tls.verify_hostname = false ( LOG-4146 ) 1.4.2. CVEs CVE-2021-26341 CVE-2021-33655 CVE-2021-33656 CVE-2022-1462 CVE-2022-1679 CVE-2022-1789 CVE-2022-2196 CVE-2022-2663 CVE-2022-3028 CVE-2022-3239 CVE-2022-3522 CVE-2022-3524 CVE-2022-3564 CVE-2022-3566 CVE-2022-3567 CVE-2022-3619 CVE-2022-3623 CVE-2022-3625 CVE-2022-3627 CVE-2022-3628 CVE-2022-3707 CVE-2022-3970 CVE-2022-4129 CVE-2022-20141 CVE-2022-25147 CVE-2022-25265 CVE-2022-30594 CVE-2022-36227 CVE-2022-39188 CVE-2022-39189 CVE-2022-41218 CVE-2022-41674 CVE-2022-42703 CVE-2022-42720 CVE-2022-42721 CVE-2022-42722 CVE-2022-43750 CVE-2022-47929 CVE-2023-0394 CVE-2023-0461 CVE-2023-1195 CVE-2023-1582 CVE-2023-2491 CVE-2023-22490 CVE-2023-23454 CVE-2023-23946 CVE-2023-25652 CVE-2023-25815 CVE-2023-27535 CVE-2023-29007 1.5. Logging 5.6.6 This release includes OpenShift Logging Bug Fix Release 5.6.6 . 1.5.1. Bug fixes Before this update, dropping of messages occurred when configuring the ClusterLogForwarder custom resource to write to a Kafka output topic that matched a key in the payload due to an error. With this update, the issue is resolved by prefixing Fluentd's buffer name with an underscore. ( LOG-3458 ) Before this update, premature closure of watches occurred in Fluentd when inodes were reused and there were multiple entries with the same inode. With this update, the issue of premature closure of watches in the Fluentd position file is resolved. 
( LOG-3629 ) Before this update, the detection of JavaScript client multi-line exceptions by Fluentd failed, resulting in printing them as multiple lines. With this update, exceptions are output as a single line, resolving the issue.( LOG-3761 ) Before this update, direct upgrades from the Red Hat Openshift Logging Operator version 4.6 to version 5.6 were allowed, resulting in functionality issues. With this update, upgrades must be within two versions, resolving the issue. ( LOG-3837 ) Before this update, metrics were not displayed for Splunk or Google Logging outputs. With this update, the issue is resolved by sending metrics for HTTP endpoints.( LOG-3932 ) Before this update, when the ClusterLogForwarder custom resource was deleted, collector pods remained running. With this update, collector pods do not run when log forwarding is not enabled. ( LOG-4030 ) Before this update, a time range could not be selected in the OpenShift Container Platform web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. ( LOG-4101 ) Before this update, Fluentd hash values for watch files were generated using the paths to log files, resulting in a non unique hash upon log rotation. With this update, hash values for watch files are created with inode numbers, resolving the issue. ( LOG-3633 ) Before this update, clicking on the Show Resources link in the OpenShift Container Platform web console did not produce any effect. With this update, the issue is resolved by fixing the functionality of the Show Resources link to toggle the display of resources for each log entry. ( LOG-4118 ) 1.5.2. CVEs CVE-2023-21930 CVE-2023-21937 CVE-2023-21938 CVE-2023-21939 CVE-2023-21954 CVE-2023-21967 CVE-2023-21968 CVE-2023-28617 1.6. Logging 5.6.5 This release includes OpenShift Logging Bug Fix Release 5.6.5 . 1.6.1. Bug fixes Before this update, the template definitions prevented Elasticsearch from indexing some labels and namespace_labels, causing issues with data ingestion. With this update, the fix replaces dots and slashes in labels to ensure proper ingestion, effectively resolving the issue. ( LOG-3419 ) Before this update, if the Logs page of the OpenShift Web Console failed to connect to the LokiStack, a generic error message was displayed, providing no additional context or troubleshooting suggestions. With this update, the error message has been enhanced to include more specific details and recommendations for troubleshooting. ( LOG-3750 ) Before this update, time range formats were not validated, leading to errors selecting a custom date range. With this update, time formats are now validated, enabling users to select a valid range. If an invalid time range format is selected, an error message is displayed to the user. ( LOG-3583 ) Before this update, when searching logs in Loki, even if the length of an expression did not exceed 5120 characters, the query would fail in many cases. With this update, query authorization label matchers have been optimized, resolving the issue. ( LOG-3480 ) Before this update, the Loki Operator failed to produce a memberlist configuration that was sufficient for locating all the components when using a memberlist for private IPs. With this update, the fix ensures that the generated configuration includes the advertised port, allowing for successful lookup of all components. ( LOG-4008 ) 1.6.2. 
CVEs CVE-2022-4269 CVE-2022-4378 CVE-2023-0266 CVE-2023-0361 CVE-2023-0386 CVE-2023-27539 CVE-2023-28120 1.7. Logging 5.6.4 This release includes OpenShift Logging Bug Fix Release 5.6.4 . 1.7.1. Bug fixes Before this update, when LokiStack was deployed as the log store, the logs generated by Loki pods were collected and sent to LokiStack. With this update, the logs generated by Loki are excluded from collection and will not be stored. ( LOG-3280 ) Before this update, when the query editor on the Logs page of the OpenShift Web Console was empty, the drop-down menus did not populate. With this update, if an empty query is attempted, an error message is displayed and the drop-down menus now populate as expected. ( LOG-3454 ) Before this update, when the tls.insecureSkipVerify option was set to true , the Cluster Logging Operator would generate incorrect configuration. As a result, the operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Cluster Logging Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. ( LOG-3475 ) Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received messages now have structured messages included, even when they are forwarded to multiple destinations. ( LOG-3640 ) Before this update, if the collection field contained {} it could result in the Operator crashing. With this update, the Operator will ignore this value, allowing the operator to continue running smoothly without interruption. ( LOG-3733 ) Before this update, the nodeSelector attribute for the Gateway component of LokiStack did not have any effect. With this update, the nodeSelector attribute functions as expected. ( LOG-3783 ) Before this update, the static LokiStack memberlist configuration relied solely on private IP networks. As a result, when the OpenShift Container Platform cluster pod network was configured with a public IP range, the LokiStack pods would crashloop. With this update, the LokiStack administrator now has the option to use the pod network for the memberlist configuration. This resolves the issue and prevents the LokiStack pods from entering a crashloop state when the OpenShift Container Platform cluster pod network is configured with a public IP range. ( LOG-3814 ) Before this update, if the tls.insecureSkipVerify field was set to true , the Cluster Logging Operator would generate an incorrect configuration. As a result, the Operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. ( LOG-3838 ) Before this update, if the Cluster Logging Operator (CLO) was installed without the Elasticsearch Operator, the CLO pod would continuously display an error message related to the deletion of Elasticsearch. With this update, the CLO now performs additional checks before displaying any error messages. 
As a result, error messages related to Elasticsearch deletion are no longer displayed in the absence of the Elasticsearch Operator.( LOG-3763 ) 1.7.2. CVEs CVE-2022-4304 CVE-2022-4450 CVE-2023-0215 CVE-2023-0286 CVE-2023-0767 CVE-2023-23916 1.8. Logging 5.6.3 This release includes OpenShift Logging Bug Fix Release 5.6.3 . 1.8.1. Bug fixes Before this update, the operator stored gateway tenant secret information in a config map. With this update, the operator stores this information in a secret. ( LOG-3717 ) Before this update, the Fluentd collector did not capture OAuth login events stored in /var/log/auth-server/audit.log . With this update, Fluentd captures these OAuth login events, resolving the issue. ( LOG-3729 ) 1.8.2. CVEs CVE-2020-10735 CVE-2021-28861 CVE-2022-2873 CVE-2022-4415 CVE-2022-40897 CVE-2022-41222 CVE-2022-43945 CVE-2022-45061 CVE-2022-48303 1.9. Logging 5.6.2 This release includes OpenShift Logging Bug Fix Release 5.6.2 . 1.9.1. Bug fixes Before this update, the collector did not set level fields correctly based on priority for systemd logs. With this update, level fields are set correctly. ( LOG-3429 ) Before this update, the Operator incorrectly generated incompatibility warnings on OpenShift Container Platform 4.12 or later. With this update, the Operator max OpenShift Container Platform version value has been corrected, resolving the issue. ( LOG-3584 ) Before this update, creating a ClusterLogForwarder custom resource (CR) with an output value of default did not generate any errors. With this update, an error warning that this value is invalid generates appropriately. ( LOG-3437 ) Before this update, when the ClusterLogForwarder custom resource (CR) had multiple pipelines configured with one output set as default , the collector pods restarted. With this update, the logic for output validation has been corrected, resolving the issue. ( LOG-3559 ) Before this update, collector pods restarted after being created. With this update, the deployed collector does not restart on its own. ( LOG-3608 ) Before this update, patch releases removed versions of the Operators from the catalog. This made installing the old versions impossible. This update changes bundle configurations so that releases of the same minor version stay in the catalog. ( LOG-3635 ) 1.9.2. CVEs CVE-2022-23521 CVE-2022-40303 CVE-2022-40304 CVE-2022-41903 CVE-2022-47629 CVE-2023-21835 CVE-2023-21843 1.10. Logging 5.6.1 This release includes OpenShift Logging Bug Fix Release 5.6.1 . 1.10.1. Bug fixes Before this update, the compactor would report TLS certificate errors from communications with the querier when retention was active. With this update, the compactor and querier no longer communicate erroneously over HTTP. ( LOG-3494 ) Before this update, the Loki Operator would not retry setting the status of the LokiStack CR, which caused stale status information. With this update, the Operator retries status information updates on conflict. ( LOG-3496 ) Before this update, the Loki Operator Webhook server caused TLS errors when the kube-apiserver-operator Operator checked the webhook validity. With this update, the Loki Operator Webhook PKI is managed by the Operator Lifecycle Manager (OLM), resolving the issue. ( LOG-3510 ) Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. 
With this update, the LokiStack LogQL implementation supports label filters with boolean expression and resolves the issue. ( LOG-3441 ), ( LOG-3397 ) Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. ( LOG-3463 ) Before this update, the Red Hat OpenShift Logging Operator was not available for OpenShift Container Platform 4.10 clusters because of an incompatibility between OpenShift Container Platform console and the logging-view-plugin. With this update, the plugin is properly integrated with the OpenShift Container Platform 4.10 admin console. ( LOG-3447 ) Before this update the reconciliation of the ClusterLogForwarder custom resource would incorrectly report a degraded status of pipelines that reference the default logstore. With this update, the pipeline validates properly.( LOG-3477 ) 1.10.2. CVEs CVE-2021-46848 CVE-2022-3821 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 CVE-2021-35065 CVE-2022-46175 1.11. Logging 5.6 This release includes OpenShift Logging Release 5.6 . 1.11.1. Deprecation notice In Logging 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to fluentd, you can use Vector instead. 1.11.2. Enhancements With this update, Logging is compliant with OpenShift Container Platform cluster-wide cryptographic policies . ( LOG-895 ) With this update, you can declare per-tenant, per-stream, and global policies retention policies through the LokiStack custom resource, ordered by priority. ( LOG-2695 ) With this update, Splunk is an available output option for log forwarding. ( LOG-2913 ) With this update, Vector replaces Fluentd as the default Collector. ( LOG-2222 ) With this update, the Developer role can access the per-project workload logs they are assigned to within the Log Console Plugin on clusters running OpenShift Container Platform 4.11 and higher. ( LOG-3388 ) With this update, logs from any source contain a field openshift.cluster_id , the unique identifier of the cluster in which the Operator is deployed. You can view the clusterID value with the command below. ( LOG-2715 ) USD oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}' 1.11.3. Known Issues Before this update, Elasticsearch would reject logs if multiple label keys had the same prefix and some keys included the . character. This fixes the limitation of Elasticsearch by replacing . in the label keys with _ . As a workaround for this issue, remove the labels that cause errors, or add a namespace to the label. ( LOG-3463 ) 1.11.4. Bug fixes Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. ( LOG-2993 ) Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. 
( LOG-3072 ) Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. ( LOG-3090 ) Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA Key included non-conforming characters, such as dots. With this update, the volume name is standardized to an internal string which resolves the issue. ( LOG-3331 ) Before this update, a default value set within the LokiStack Custom Resource Definition, caused an inability to create a LokiStack instance without a ReplicationFactor of 1 . With this update, the operator sets the actual value for the size used. ( LOG-3296 ) Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. ( LOG-3195 ) Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. ( LOG-3161 ) Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. ( LOG-3157 ) Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h . With this update, Kibana's OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout , with a default value of 24h . ( LOG-3129 ) Before this update, the Operators general pattern for reconciling resources was to try and create before attempting to get or update which would lead to constant HTTP 409 responses after creation. With this update, Operators first attempt to retrieve an object and only create or update it if it is either missing or not as specified. ( LOG-2919 ) Before this update, the .level and`.structure.level` fields in Fluentd could contain different values. With this update, the values are the same for each field. ( LOG-2819 ) Before this update, the Operator did not wait for the population of the trusted CA bundle and deployed the collector a second time once the bundle updated. With this update, the Operator waits briefly to see if the bundle has been populated before it continues the collector deployment. ( LOG-2789 ) Before this update, logging telemetry info appeared twice when reviewing metrics. With this update, logging telemetry info displays as expected. ( LOG-2315 ) Before this update, Fluentd pod logs contained a warning message after enabling the JSON parsing addition. With this update, that warning message does not appear. ( LOG-1806 ) Before this update, the must-gather script did not complete because oc needs a folder with write permission to build its cache. With this update, oc has write permissions to a folder, and the must-gather script completes successfully. ( LOG-3446 ) Before this update the log collector SCC could be superseded by other SCCs on the cluster, rendering the collector unusable. 
This update sets the priority of the log collector SCC so that it takes precedence over the others. ( LOG-3235 ) Before this update, Vector was missing the field sequence , which was added to fluentd as a way to deal with a lack of actual nanoseconds precision. With this update, the field openshift.sequence has been added to the event logs. ( LOG-3106 ) 1.11.5. CVEs CVE-2020-36518 CVE-2021-46848 CVE-2022-2879 CVE-2022-2880 CVE-2022-27664 CVE-2022-32190 CVE-2022-35737 CVE-2022-37601 CVE-2022-41715 CVE-2022-42003 CVE-2022-42004 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 1.12. Logging 5.5.16 This release includes OpenShift Logging Bug Fix Release 5.5.16 . 1.12.1. Bug fixes Before this update, the LokiStack gateway cached authorized requests very broadly. As a result, this caused wrong authorization results. With this update, LokiStack gateway caches on a more fine-grained basis which resolves this issue. ( LOG-4434 ) 1.12.2. CVEs CVE-2023-3899 CVE-2023-32360 CVE-2023-34969 1.13. Logging 5.5.14 This release includes OpenShift Logging Bug Fix Release 5.5.14 . 1.13.1. Bug fixes Before this update, the Vector collector occasionally panicked with the following error message in its log: thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9 . With this update, the error has been resolved. ( LOG-4279 ) 1.13.2. CVEs CVE-2023-2828 1.14. Logging 5.5.13 This release includes OpenShift Logging Bug Fix Release 5.5.13 . 1.14.1. Bug fixes None. 1.14.2. CVEs CVE-2023-1999 CVE-2020-24736 CVE-2022-48281 CVE-2023-1667 CVE-2023-2283 CVE-2023-24329 CVE-2023-26604 CVE-2023-28466 1.15. Logging 5.5.11 This release includes OpenShift Logging Bug Fix Release 5.5.11 . 1.15.1. Bug fixes Before this update, a time range could not be selected in the OpenShift Container Platform web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. ( LOG-4102 ) Before this update, clicking on the Show Resources link in the OpenShift Container Platform web console did not produce any effect. With this update, the issue is resolved by fixing the functionality of the Show Resources link to toggle the display of resources for each log entry. ( LOG-4117 ) 1.15.2. CVEs CVE-2021-26341 CVE-2021-33655 CVE-2021-33656 CVE-2022-1462 CVE-2022-1679 CVE-2022-1789 CVE-2022-2196 CVE-2022-2663 CVE-2022-2795 CVE-2022-3028 CVE-2022-3239 CVE-2022-3522 CVE-2022-3524 CVE-2022-3564 CVE-2022-3566 CVE-2022-3567 CVE-2022-3619 CVE-2022-3623 CVE-2022-3625 CVE-2022-3627 CVE-2022-3628 CVE-2022-3707 CVE-2022-3970 CVE-2022-4129 CVE-2022-20141 CVE-2022-24765 CVE-2022-25265 CVE-2022-29187 CVE-2022-30594 CVE-2022-36227 CVE-2022-39188 CVE-2022-39189 CVE-2022-39253 CVE-2022-39260 CVE-2022-41218 CVE-2022-41674 CVE-2022-42703 CVE-2022-42720 CVE-2022-42721 CVE-2022-42722 CVE-2022-43750 CVE-2022-47929 CVE-2023-0394 CVE-2023-0461 CVE-2023-1195 CVE-2023-1582 CVE-2023-2491 CVE-2023-23454 CVE-2023-27535 1.16. Logging 5.5.10 This release includes OpenShift Logging Bug Fix Release 5.5.10 . 1.16.1. Bug fixes Before this update, the logging view plugin of the OpenShift Web Console showed only an error text when the LokiStack was not reachable. After this update the plugin shows a proper error message with details on how to fix the unreachable LokiStack. ( LOG-2874 ) 1.16.2. CVEs CVE-2022-4304 CVE-2022-4450 CVE-2023-0215 CVE-2023-0286 CVE-2023-0361 CVE-2023-23916 1.17. 
Logging 5.5.9 This release includes OpenShift Logging Bug Fix Release 5.5.9 . 1.17.1. Bug fixes Before this update, a problem with the Fluentd collector caused it to not capture OAuth login events stored in /var/log/auth-server/audit.log . This led to incomplete collection of login events from the OAuth service. With this update, the Fluentd collector now resolves this issue by capturing all login events from the OAuth service, including those stored in /var/log/auth-server/audit.log , as expected.( LOG-3730 ) Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received logs now have structured messages included, even when they are forwarded to multiple destinations.( LOG-3767 ) 1.17.2. CVEs CVE-2022-4304 CVE-2022-4450 CVE-2022-41717 CVE-2023-0215 CVE-2023-0286 CVE-2023-0767 CVE-2023-23916 1.18. Logging 5.5.8 This release includes OpenShift Logging Bug Fix Release 5.5.8 . 1.18.1. Bug fixes Before this update, the priority field was missing from systemd logs due to an error in how the collector set level fields. With this update, these fields are set correctly, resolving the issue. ( LOG-3630 ) 1.18.2. CVEs CVE-2020-10735 CVE-2021-28861 CVE-2022-2873 CVE-2022-4415 CVE-2022-24999 CVE-2022-40897 CVE-2022-41222 CVE-2022-41717 CVE-2022-43945 CVE-2022-45061 CVE-2022-48303 1.19. Logging 5.5.7 This release includes OpenShift Logging Bug Fix Release 5.5.7 . 1.19.1. Bug fixes Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expression and resolves the issue. ( LOG-3534 ) Before this update, the ClusterLogForwarder custom resource (CR) did not pass TLS credentials for syslog output to Fluentd, resulting in errors during forwarding. With this update, credentials pass correctly to Fluentd, resolving the issue. ( LOG-3533 ) 1.19.2. CVEs CVE-2021-46848 CVE-2022-3821 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 1.20. Logging 5.5.6 This release includes OpenShift Logging Bug Fix Release 5.5.6 . 1.20.1. Bug fixes Before this update, the Pod Security admission controller added the label podSecurityLabelSync = true to the openshift-logging namespace. This resulted in our specified security labels being overwritten, and as a result Collector pods would not start. With this update, the label podSecurityLabelSync = false preserves security labels. Collector pods deploy as expected. ( LOG-3340 ) Before this update, the Operator installed the console view plugin, even when it was not enabled on the cluster. This caused the Operator to crash. With this update, if an account for a cluster does not have the console view enabled, the Operator functions normally and does not install the console view. ( LOG-3407 ) Before this update, a prior fix to support a regression where the status of the Elasticsearch deployment was not being updated caused the Operator to crash unless the Red Hat Elasticsearch Operator was deployed. With this update, that fix has been reverted so the Operator is now stable but re-introduces the issue related to the reported status. 
( LOG-3428 ) Before this update, the Loki Operator only deployed one replica of the LokiStack gateway regardless of the chosen stack size. With this update, the number of replicas is correctly configured according to the selected size. ( LOG-3478 ) Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. ( LOG-3341 ) Before this update, the logging view plugin contained an incompatible feature for certain versions of OpenShift Container Platform. With this update, the correct release stream of the plugin resolves the issue. ( LOG-3467 ) Before this update, the reconciliation of the ClusterLogForwarder custom resource would incorrectly report a degraded status of one or more pipelines causing the collector pods to restart every 8-10 seconds. With this update, reconciliation of the ClusterLogForwarder custom resource processes correctly, resolving the issue. ( LOG-3469 ) Before this change the spec for the outputDefaults field of the ClusterLogForwarder custom resource would apply the settings to every declared Elasticsearch output type. This change corrects the behavior to match the enhancement specification where the setting specifically applies to the default managed Elasticsearch store. ( LOG-3342 ) Before this update, the OpenShift CLI (oc) must-gather script did not complete because the OpenShift CLI (oc) needs a folder with write permission to build its cache. With this update, the OpenShift CLI (oc) has write permissions to a folder, and the must-gather script completes successfully. ( LOG-3472 ) Before this update, the Loki Operator webhook server caused TLS errors. With this update, the Loki Operator webhook PKI is managed by the Operator Lifecycle Manager's dynamic webhook management resolving the issue. ( LOG-3511 ) 1.20.2. CVEs CVE-2021-46848 CVE-2022-2056 CVE-2022-2057 CVE-2022-2058 CVE-2022-2519 CVE-2022-2520 CVE-2022-2521 CVE-2022-2867 CVE-2022-2868 CVE-2022-2869 CVE-2022-2953 CVE-2022-2964 CVE-2022-4139 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 1.21. Logging 5.5.5 This release includes OpenShift Logging Bug Fix Release 5.5.5 . 1.21.1. Bug fixes Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h . With this update, Kibana's OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout , with a default value of 24h . ( LOG-3305 ) Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. ( LOG-3284 ) Before this update, the FluentdQueueLengthIncreasing alert could fail to fire when there was a cardinality issue with the set of labels returned from this alert expression. This update reduces labels to only include those required for the alert. ( LOG-3226 ) Before this update, Loki did not have support to reach an external storage in a disconnected cluster. With this update, proxy environment variables and proxy trusted CA bundles are included in the container image to support these connections. 
( LOG-2860 ) Before this update, OpenShift Container Platform web console users could not choose the ConfigMap object that includes the CA certificate for Loki, causing pods to operate without the CA. With this update, web console users can select the config map, resolving the issue. ( LOG-3310 ) Before this update, the CA key was used as volume name for mounting the CA into Loki, causing error states when the CA Key included non-conforming characters (such as dots). With this update, the volume name is standardized to an internal string which resolves the issue. ( LOG-3332 ) 1.21.2. CVEs CVE-2016-3709 CVE-2020-35525 CVE-2020-35527 CVE-2020-36516 CVE-2020-36558 CVE-2021-3640 CVE-2021-30002 CVE-2022-0168 CVE-2022-0561 CVE-2022-0562 CVE-2022-0617 CVE-2022-0854 CVE-2022-0865 CVE-2022-0891 CVE-2022-0908 CVE-2022-0909 CVE-2022-0924 CVE-2022-1016 CVE-2022-1048 CVE-2022-1055 CVE-2022-1184 CVE-2022-1292 CVE-2022-1304 CVE-2022-1355 CVE-2022-1586 CVE-2022-1785 CVE-2022-1852 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2078 CVE-2022-2097 CVE-2022-2509 CVE-2022-2586 CVE-2022-2639 CVE-2022-2938 CVE-2022-3515 CVE-2022-20368 CVE-2022-21499 CVE-2022-21618 CVE-2022-21619 CVE-2022-21624 CVE-2022-21626 CVE-2022-21628 CVE-2022-22624 CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 CVE-2022-22844 CVE-2022-23960 CVE-2022-24448 CVE-2022-25255 CVE-2022-26373 CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 CVE-2022-27404 CVE-2022-27405 CVE-2022-27406 CVE-2022-27950 CVE-2022-28390 CVE-2022-28893 CVE-2022-29581 CVE-2022-30293 CVE-2022-34903 CVE-2022-36946 CVE-2022-37434 CVE-2022-39399 1.22. Logging 5.5.4 This release includes RHSA-2022:7434-OpenShift Logging Bug Fix Release 5.5.4 . 1.22.1. Bug fixes Before this update, an error in the query parser of the logging view plugin caused parts of the logs query to disappear if the query contained curly brackets {} . This made the queries invalid, leading to errors being returned for valid queries. With this update, the parser correctly handles these queries. ( LOG-3042 ) Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. ( LOG-3049 ) Before this update, no alerts were implemented to support the collector implementation of Vector. This change adds Vector alerts and deploys separate alerts, depending upon the chosen collector implementation. ( LOG-3127 ) Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. ( LOG-3138 ) Before this update, a prior refactoring of the logging must-gather scripts removed the expected location for the artifacts. This update reverts that change to write artifacts to the /must-gather folder. ( LOG-3213 ) Before this update, on certain clusters, the Prometheus exporter would bind on IPv4 instead of IPv6. After this update, Fluentd detects the IP version and binds to 0.0.0.0 for IPv4 or [::] for IPv6. ( LOG-3162 ) 1.22.2. CVEs CVE-2020-35525 CVE-2020-35527 CVE-2022-0494 CVE-2022-1353 CVE-2022-2509 CVE-2022-2588 CVE-2022-3515 CVE-2022-21618 CVE-2022-21619 CVE-2022-21624 CVE-2022-21626 CVE-2022-21628 CVE-2022-23816 CVE-2022-23825 CVE-2022-29900 CVE-2022-29901 CVE-2022-32149 CVE-2022-37434 CVE-2022-40674 1.23. Logging 5.5.3 This release includes OpenShift Logging Bug Fix Release 5.5.3 . 
1.23.1. Bug fixes Before this update, log entries that had structured messages included the original message field, which made the entry larger. This update removes the message field for structured logs to reduce the increased size. ( LOG-2759 ) Before this update, the collector configuration excluded logs from collector , default-log-store , and visualization pods, but was unable to exclude logs archived in a .gz file. With this update, archived logs stored as .gz files of collector , default-log-store , and visualization pods are also excluded. ( LOG-2844 ) Before this update, when requests to an unavailable pod were sent through the gateway, no alert would warn of the disruption. With this update, individual alerts will generate if the gateway has issues completing a write or read request. ( LOG-2884 ) Before this update, pod metadata could be altered by fluent plugins because the values passed through the pipeline by reference. This update ensures each log message receives a copy of the pod metadata so each message processes independently. ( LOG-3046 ) Before this update, selecting unknown severity in the OpenShift Console Logs view excluded logs with a level=unknown value. With this update, logs without level and with level=unknown values are visible when filtering by unknown severity. ( LOG-3062 ) Before this update, log records sent to Elasticsearch had an extra field named write-index that contained the name of the index to which the logs needed to be sent. This field is not a part of the data model. After this update, this field is no longer sent. ( LOG-3075 ) With the introduction of the new built-in Pod Security Admission Controller , Pods not configured in accordance with the enforced security standards defined globally or on the namespace level cannot run. With this update, the Operator and collectors allow privileged execution and run without security audit warnings or errors. ( LOG-3077 ) Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. ( LOG-3095 ) 1.23.2. CVEs CVE-2015-20107 CVE-2022-0391 CVE-2022-2526 CVE-2022-21123 CVE-2022-21125 CVE-2022-21166 CVE-2022-29154 CVE-2022-32206 CVE-2022-32208 CVE-2022-34903 1.24. Logging 5.5.2 This release includes OpenShift Logging Bug Fix Release 5.5.2 . 1.24.1. Bug fixes Before this update, alerting rules for the Fluentd collector did not adhere to the OpenShift Container Platform monitoring style guidelines. This update modifies those alerts to include the namespace label, resolving the issue. ( LOG-1823 ) Before this update, the index management rollover script failed to generate a new index name whenever there was more than one hyphen character in the name of the index. With this update, index names generate correctly. ( LOG-2644 ) Before this update, the Kibana route was setting a caCertificate value without a certificate present. With this update, no caCertificate value is set. ( LOG-2661 ) Before this update, a change in the collector dependencies caused it to issue a warning message for unused parameters. With this update, removing unused configuration parameters resolves the issue. 
( LOG-2859 ) Before this update, pods created for deployments that Loki Operator created were mistakenly scheduled on nodes with non-Linux operating systems, if such nodes were available in the cluster the Operator was running in. With this update, the Operator attaches an additional node-selector to the pod definitions which only allows scheduling the pods on Linux-based nodes. ( LOG-2895 ) Before this update, the OpenShift Console Logs view did not filter logs by severity due to a LogQL parser issue in the LokiStack gateway. With this update, a parser fix resolves the issue and the OpenShift Console Logs view can filter by severity. ( LOG-2908 ) Before this update, a refactoring of the Fluentd collector plugins removed the timestamp field for events. This update restores the timestamp field, sourced from the event's received time. ( LOG-2923 ) Before this update, absence of a level field in audit logs caused an error in vector logs. With this update, the addition of a level field in the audit log record resolves the issue. ( LOG-2961 ) Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. ( LOG-3053 ) Before this update, each rollover job created empty indices when the ClusterLogForwarder custom resource had JSON parsing defined. With this update, new indices are not empty. ( LOG-3063 ) Before this update, when the user deleted the LokiStack after an update to Loki Operator 5.5 resources originally created by Loki Operator 5.4 remained. With this update, the resources' owner-references point to the 5.5 LokiStack. ( LOG-2945 ) Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. ( LOG-2918 ) Before this update, users with cluster-admin privileges were not able to properly view infrastructure and audit logs using the logging console. With this update, the authorization check has been extended to also recognize users in cluster-admin and dedicated-admin groups as admins. ( LOG-2970 ) 1.24.2. CVEs CVE-2015-20107 CVE-2022-0391 CVE-2022-21123 CVE-2022-21125 CVE-2022-21166 CVE-2022-29154 CVE-2022-32206 CVE-2022-32208 CVE-2022-34903 1.25. Logging 5.5.1 This release includes OpenShift Logging Bug Fix Release 5.5.1 . 1.25.1. Enhancements This enhancement adds an Aggregated Logs tab to the Pod Details page of the OpenShift Container Platform web console when the Logging Console Plugin is in use. This enhancement is only available on OpenShift Container Platform 4.10 and later. ( LOG-2647 ) This enhancement adds Google Cloud Logging as an output option for log forwarding. ( LOG-1482 ) 1.25.2. Bug fixes Before this update, the Operator did not ensure that the pod was ready, which caused the cluster to reach an inoperable state during a cluster restart. With this update, the Operator marks new pods as ready before continuing to a new pod during a restart, which resolves the issue. ( LOG-2745 ) Before this update, Fluentd would sometimes not recognize that the Kubernetes platform rotated the log file and would no longer read log messages. This update corrects that by setting the configuration parameter suggested by the upstream development team. 
( LOG-2995 ) Before this update, the addition of multi-line error detection caused internal routing to change and forward records to the wrong destination. With this update, the internal routing is correct. ( LOG-2801 ) Before this update, changing the OpenShift Container Platform web console's refresh interval created an error when the Query field was empty. With this update, changing the interval is not an available option when the Query field is empty. ( LOG-2917 ) 1.25.3. CVEs CVE-2022-1705 CVE-2022-2526 CVE-2022-29154 CVE-2022-30631 CVE-2022-32148 CVE-2022-32206 CVE-2022-32208 1.26. Logging 5.5 The following advisories are available for Logging 5.5: Release 5.5 1.26.1. Enhancements With this update, you can forward structured logs from different containers within the same pod to different indices. To use this feature, you must configure the pipeline with multi-container support and annotate the pods. ( LOG-1296 ) Important JSON formatting of logs varies by application. Because creating too many indices impacts performance, limit your use of this feature to creating indices for logs that have incompatible JSON formats. Use queries to separate logs from different namespaces, or applications with compatible JSON formats. With this update, you can filter logs with Elasticsearch outputs by using the Kubernetes common labels, app.kubernetes.io/component , app.kubernetes.io/managed-by , app.kubernetes.io/part-of , and app.kubernetes.io/version . Non-Elasticsearch output types can use all labels included in kubernetes.labels . ( LOG-2388 ) With this update, clusters with AWS Security Token Service (STS) enabled may use STS authentication to forward logs to Amazon CloudWatch. ( LOG-1976 ) With this update, the Loki Operator and the Vector collector move from Technical Preview to General Availability. Full feature parity with prior releases is pending, and some APIs remain Technical Previews. See the Logging with the LokiStack section for details. 1.26.2. Bug fixes Before this update, clusters configured to forward logs to Amazon CloudWatch wrote rejected log files to temporary storage, causing cluster instability over time. With this update, chunk backup for all storage options has been disabled, resolving the issue. ( LOG-2746 ) Before this update, the Operator was using versions of some APIs that are deprecated and planned for removal in future versions of OpenShift Container Platform. This update moves dependencies to the supported API versions. ( LOG-2656 ) Before this update, multiple ClusterLogForwarder pipelines configured for multiline error detection caused the collector to go into a crashloopbackoff error state. This update fixes the issue where multiple configuration sections had the same unique ID. ( LOG-2241 ) Before this update, the collector could not save non-UTF-8 symbols to the Elasticsearch storage logs. With this update, the collector encodes non-UTF-8 symbols, resolving the issue. ( LOG-2203 ) Before this update, non-latin characters displayed incorrectly in Kibana. With this update, Kibana displays all valid UTF-8 symbols correctly. ( LOG-2784 ) 1.26.3. CVEs CVE-2021-38561 CVE-2022-1012 CVE-2022-1292 CVE-2022-1586 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-30631 CVE-2022-32250 1.27.
Logging 5.4.14 This release includes OpenShift Logging Bug Fix Release 5.4.14 . 1.27.1. Bug fixes None. 1.27.2. CVEs CVE-2022-4304 CVE-2022-4450 CVE-2023-0215 CVE-2023-0286 CVE-2023-0361 CVE-2023-23916 1.28. Logging 5.4.13 This release includes OpenShift Logging Bug Fix Release 5.4.13 . 1.28.1. Bug fixes Before this update, a problem with the Fluentd collector caused it to not capture OAuth login events stored in /var/log/auth-server/audit.log . This led to incomplete collection of login events from the OAuth service. With this update, the Fluentd collector now resolves this issue by capturing all login events from the OAuth service, including those stored in /var/log/auth-server/audit.log , as expected. ( LOG-3731 ) 1.28.2. CVEs CVE-2022-4304 CVE-2022-4450 CVE-2023-0215 CVE-2023-0286 CVE-2023-0767 CVE-2023-23916 1.29. Logging 5.4.12 This release includes OpenShift Logging Bug Fix Release 5.4.12 . 1.29.1. Bug fixes None. 1.29.2. CVEs CVE-2020-10735 CVE-2021-28861 CVE-2022-2873 CVE-2022-4415 CVE-2022-40897 CVE-2022-41222 CVE-2022-41717 CVE-2022-43945 CVE-2022-45061 CVE-2022-48303 1.30. Logging 5.4.11 This release includes OpenShift Logging Bug Fix Release 5.4.11 . 1.30.1. Bug fixes BZ 2099524 BZ 2161274 1.30.2. CVEs CVE-2021-46848 CVE-2022-3821 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 1.31. Logging 5.4.10 This release includes OpenShift Logging Bug Fix Release 5.4.10 . 1.31.1. Bug fixes None. 1.31.2. CVEs CVE-2021-46848 CVE-2022-2056 CVE-2022-2057 CVE-2022-2058 CVE-2022-2519 CVE-2022-2520 CVE-2022-2521 CVE-2022-2867 CVE-2022-2868 CVE-2022-2869 CVE-2022-2953 CVE-2022-2964 CVE-2022-4139 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 1.32. Logging 5.4.9 This release includes OpenShift Logging Bug Fix Release 5.4.9 . 1.32.1. Bug fixes Before this update, the Fluentd collector would warn of unused configuration parameters. This update removes those configuration parameters and their warning messages. ( LOG-3074 ) Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h . With this update, Kibana's OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout , with a default value of 24h . ( LOG-3306 ) 1.32.2. CVEs CVE-2016-3709 CVE-2020-35525 CVE-2020-35527 CVE-2020-36516 CVE-2020-36558 CVE-2021-3640 CVE-2021-30002 CVE-2022-0168 CVE-2022-0561 CVE-2022-0562 CVE-2022-0617 CVE-2022-0854 CVE-2022-0865 CVE-2022-0891 CVE-2022-0908 CVE-2022-0909 CVE-2022-0924 CVE-2022-1016 CVE-2022-1048 CVE-2022-1055 CVE-2022-1184 CVE-2022-1292 CVE-2022-1304 CVE-2022-1355 CVE-2022-1586 CVE-2022-1785 CVE-2022-1852 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2078 CVE-2022-2097 CVE-2022-2509 CVE-2022-2586 CVE-2022-2639 CVE-2022-2938 CVE-2022-3515 CVE-2022-20368 CVE-2022-21499 CVE-2022-21618 CVE-2022-21619 CVE-2022-21624 CVE-2022-21626 CVE-2022-21628 CVE-2022-22624 CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 CVE-2022-22844 CVE-2022-23960 CVE-2022-24448 CVE-2022-25255 CVE-2022-26373 CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 CVE-2022-27404 CVE-2022-27405 CVE-2022-27406 CVE-2022-27950 CVE-2022-28390 CVE-2022-28893 CVE-2022-29581 CVE-2022-30293 CVE-2022-34903 CVE-2022-36946 CVE-2022-37434 CVE-2022-39399 1.33. Logging 5.4.8 This release includes RHSA-2022:7435-OpenShift Logging Bug Fix Release 5.4.8 . 1.33.1. Bug fixes None. 
1.33.2. CVEs CVE-2016-3709 CVE-2020-35525 CVE-2020-35527 CVE-2020-36518 CVE-2022-1304 CVE-2022-2509 CVE-2022-3515 CVE-2022-22624 CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 CVE-2022-30293 CVE-2022-32149 CVE-2022-37434 CVE-2022-40674 CVE-2022-42003 CVE-2022-42004 1.34. Logging 5.4.6 This release includes OpenShift Logging Bug Fix Release 5.4.6 . 1.34.1. Bug fixes Before this update, Fluentd would sometimes not recognize that the Kubernetes platform rotated the log file and would no longer read log messages. This update corrects that by setting the configuration parameter suggested by the upstream development team. ( LOG-2792 ) Before this update, each rollover job created empty indices when the ClusterLogForwarder custom resource had JSON parsing defined. With this update, new indices are not empty. ( LOG-2823 ) Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. ( LOG-3054 ) 1.34.2. CVEs CVE-2015-20107 CVE-2022-0391 CVE-2022-21123 CVE-2022-21125 CVE-2022-21166 CVE-2022-29154 CVE-2022-32206 CVE-2022-32208 CVE-2022-34903 1.35. Logging 5.4.5 This release includes RHSA-2022:6183-OpenShift Logging Bug Fix Release 5.4.5 . 1.35.1. Bug fixes Before this update, the Operator did not ensure that the pod was ready, which caused the cluster to reach an inoperable state during a cluster restart. With this update, the Operator marks new pods as ready before continuing to a new pod during a restart, which resolves the issue. ( LOG-2881 ) Before this update, the addition of multi-line error detection caused internal routing to change and forward records to the wrong destination. With this update, the internal routing is correct. ( LOG-2946 ) Before this update, the Operator could not decode index setting JSON responses with a quoted Boolean value and would result in an error. With this update, the Operator can properly decode this JSON response. ( LOG-3009 ) Before this update, Elasticsearch index templates defined the fields for labels with the wrong types. This change updates those templates to match the expected types forwarded by the log collector. ( LOG-2972 ) 1.35.2. CVEs CVE-2022-1292 CVE-2022-1586 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 CVE-2022-30631 1.36. Logging 5.4.4 This release includes RHBA-2022:5907-OpenShift Logging Bug Fix Release 5.4.4 . 1.36.1. Bug fixes Before this update, non-latin characters displayed incorrectly in Elasticsearch. With this update, Elasticsearch displays all valid UTF-8 symbols correctly. ( LOG-2794 ) Before this update, non-latin characters displayed incorrectly in Fluentd. With this update, Fluentd displays all valid UTF-8 symbols correctly. ( LOG-2657 ) Before this update, the metrics server for the collector attempted to bind to the address using a value exposed by an environment value. This change modifies the configuration to bind to any available interface. ( LOG-2821 ) Before this update, the cluster-logging Operator relied on the cluster to create a secret. This cluster behavior changed in OpenShift Container Platform 4.11, which caused logging deployments to fail. With this update, the cluster-logging Operator resolves the issue by creating the secret if needed. ( LOG-2840 ) 1.36.2. CVEs CVE-2022-21540 CVE-2022-21541 CVE-2022-34169 1.37. 
Logging 5.4.3 This release includes RHSA-2022:5556-OpenShift Logging Bug Fix Release 5.4.3 . 1.37.1. Elasticsearch Operator deprecation notice In logging subsystem 5.4.3, the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. 1.37.2. Bug fixes Before this update, the OpenShift Logging Dashboard showed the number of active primary shards instead of all active shards. With this update, the dashboard displays all active shards. ( LOG-2781 ) Before this update, a bug in a library used by elasticsearch-operator contained a denial of service attack vulnerability. With this update, the library has been updated to a version that does not contain this vulnerability. ( LOG-2816 ) Before this update, when configuring Vector to forward logs to Loki, it was not possible to set a custom bearer token or use the default token if Loki had TLS enabled. With this update, Vector can forward logs to Loki using tokens with TLS enabled. ( LOG-2786 ) Before this update, the Elasticsearch Operator omitted the referencePolicy property of the ImageStream custom resource when selecting an oauth-proxy image. This omission caused the Kibana deployment to fail in specific environments. With this update, using referencePolicy resolves the issue, and the Operator can deploy Kibana successfully. ( LOG-2791 ) Before this update, alerting rules for the ClusterLogForwarder custom resource did not take multiple forward outputs into account. This update resolves the issue. ( LOG-2640 ) Before this update, clusters configured to forward logs to Amazon CloudWatch wrote rejected log files to temporary storage, causing cluster instability over time. With this update, chunk backup for CloudWatch has been disabled, resolving the issue. ( LOG-2768 ) 1.37.3. CVEs Example 1.1. Click to expand CVEs CVE-2020-28915 CVE-2021-40528 CVE-2022-1271 CVE-2022-1621 CVE-2022-1629 CVE-2022-22576 CVE-2022-25313 CVE-2022-25314 CVE-2022-26691 CVE-2022-27666 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-29824 1.38. Logging 5.4.2 This release includes RHBA-2022:4874-OpenShift Logging Bug Fix Release 5.4.2 1.38.1. Bug fixes Before this update, editing the Collector configuration using oc edit was difficult because it had inconsistent use of white-space. This change introduces logic to normalize and format the configuration prior to any updates by the Operator so that it is easy to edit using oc edit . ( LOG-2319 ) Before this update, the FluentdNodeDown alert could not provide instance labels in the message section appropriately. This update resolves the issue by fixing the alert rule to provide instance labels in cases of partial instance failures. ( LOG-2607 ) Before this update, several log levels, such as critical , that were documented as supported by the product were not supported. This update fixes the discrepancy so the documented log levels are now supported by the product. ( LOG-2033 ) 1.38.2. CVEs Example 1.2.
Click to expand CVEs CVE-2018-25032 CVE-2020-0404 CVE-2020-4788 CVE-2020-13974 CVE-2020-19131 CVE-2020-27820 CVE-2021-0941 CVE-2021-3612 CVE-2021-3634 CVE-2021-3669 CVE-2021-3737 CVE-2021-3743 CVE-2021-3744 CVE-2021-3752 CVE-2021-3759 CVE-2021-3764 CVE-2021-3772 CVE-2021-3773 CVE-2021-4002 CVE-2021-4037 CVE-2021-4083 CVE-2021-4157 CVE-2021-4189 CVE-2021-4197 CVE-2021-4203 CVE-2021-20322 CVE-2021-21781 CVE-2021-23222 CVE-2021-26401 CVE-2021-29154 CVE-2021-37159 CVE-2021-41617 CVE-2021-41864 CVE-2021-42739 CVE-2021-43056 CVE-2021-43389 CVE-2021-43976 CVE-2021-44733 CVE-2021-45485 CVE-2021-45486 CVE-2022-0001 CVE-2022-0002 CVE-2022-0286 CVE-2022-0322 CVE-2022-1011 CVE-2022-1271 1.39. Logging 5.4.1 This release includes RHSA-2022:2216-OpenShift Logging Bug Fix Release 5.4.1 . 1.39.1. Bug fixes Before this update, the log file metric exporter only reported logs created while the exporter was running, which resulted in inaccurate log growth data. This update resolves this issue by monitoring /var/log/pods . ( LOG-2442 ) Before this update, the collector would be blocked because it continually tried to use a stale connection when forwarding logs to fluentd forward receivers. With this release, the keepalive_timeout value has been set to 30 seconds ( 30s ) so that the collector recycles the connection and re-attempts to send failed messages within a reasonable amount of time. ( LOG-2534 ) Before this update, an error in the gateway component enforcing tenancy for reading logs limited access to logs with a Kubernetes namespace causing "audit" and some "infrastructure" logs to be unreadable. With this update, the proxy correctly detects users with admin access and allows access to logs without a namespace. ( LOG-2448 ) Before this update, the system:serviceaccount:openshift-monitoring:prometheus-k8s service account had cluster level privileges as a clusterrole and clusterrolebinding . This update restricts the service account` to the openshift-logging namespace with a role and rolebinding. ( LOG-2437 ) Before this update, Linux audit log time parsing relied on an ordinal position of a key/value pair. This update changes the parsing to use a regular expression to find the time entry. ( LOG-2321 ) 1.39.2. CVEs Example 1.3. Click to expand CVEs CVE-2018-25032 CVE-2021-4028 CVE-2021-37136 CVE-2021-37137 CVE-2021-43797 CVE-2022-0778 CVE-2022-1154 CVE-2022-1271 CVE-2022-21426 CVE-2022-21434 CVE-2022-21443 CVE-2022-21476 CVE-2022-21496 CVE-2022-21698 CVE-2022-25636 1.40. Logging 5.4 The following advisories are available for logging 5.4: Logging subsystem for Red Hat OpenShift Release 5.4 1.40.1. Technology Previews Important Vector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.40.2. About Vector Vector is a log collector offered as a tech-preview alternative to the current default collector for the logging subsystem. The following outputs are supported: elasticsearch . An external Elasticsearch instance. The elasticsearch output can use a TLS connection. kafka . A Kafka broker. 
The kafka output can use an unsecured or TLS connection. loki . Loki, a horizontally scalable, highly available, multi-tenant log aggregation system. 1.40.2.1. Enabling Vector Vector is not enabled by default. Use the following steps to enable Vector on your OpenShift Container Platform cluster. Important Vector does not support FIPS Enabled Clusters. Prerequisites OpenShift Container Platform: 4.10 Logging subsystem for Red Hat OpenShift: 5.4 FIPS disabled Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc -n openshift-logging edit ClusterLogging instance

Add a logging.openshift.io/preview-vector-collector: enabled annotation to the ClusterLogging custom resource (CR). Add vector as a collection type to the ClusterLogging custom resource (CR).

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
  annotations:
    logging.openshift.io/preview-vector-collector: enabled
spec:
  collection:
    logs:
      type: "vector"
      vector: {}

Additional resources Vector Documentation Important Loki Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.40.3. About Loki Loki is a horizontally scalable, highly available, multi-tenant log aggregation system currently offered as an alternative to Elasticsearch as a log store for the logging subsystem. Additional resources Loki Documentation 1.40.3.1. Deploying the Lokistack You can use the OpenShift Container Platform web console to install the Loki Operator. Prerequisites OpenShift Container Platform: 4.10 Logging subsystem for Red Hat OpenShift: 5.4 To install the Loki Operator using the OpenShift Container Platform web console: Install the Loki Operator: In the OpenShift Container Platform web console, click Operators → OperatorHub . Choose Loki Operator from the list of available Operators, and click Install . Under Installation Mode , select All namespaces on the cluster . Under Installed Namespace , select openshift-operators-redhat . You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Container Platform metric, which would cause conflicts. Select Enable operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verify that you installed the Loki Operator. Visit the Operators → Installed Operators page and look for "Loki Operator." Ensure that Loki Operator is listed in all the projects whose Status is Succeeded .
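After the Loki Operator is installed, log storage is configured by creating a LokiStack custom resource in the openshift-logging project. The following is only a minimal sketch of what such a resource might look like: the API version, size value, secret name, and storage class shown here are illustrative assumptions and can differ between releases, so consult the Loki Operator documentation for the exact schema that applies to your version.

apiVersion: loki.grafana.com/v1beta1   # assumed API version; check the version shipped with your Operator
kind: LokiStack
metadata:
  name: lokistack-sample               # example name
  namespace: openshift-logging
spec:
  size: 1x.small                       # illustrative sizing value
  storage:
    secret:
      name: lokistack-storage          # hypothetical secret holding object storage credentials
      type: s3                         # object storage type is an assumption for this sketch
  storageClassName: gp2                # replace with a storage class available in your cluster
  tenants:
    mode: openshift-logging

The supported sizes, object storage types, and secret formats are described in the Loki Operator documentation referenced above.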
1.40.4. Bug fixes Before this update, the cluster-logging-operator used cluster scoped roles and bindings to establish permissions for the Prometheus service account to scrape metrics. These permissions were created when deploying the Operator using the console interface but were missing when deploying from the command line. This update fixes the issue by making the roles and bindings namespace-scoped. ( LOG-2286 ) Before this update, a prior change to fix dashboard reconciliation introduced an ownerReferences field to the resource across namespaces. As a result, neither the config map nor the dashboard was created in the namespace. With this update, the removal of the ownerReferences field resolves the issue, and the OpenShift Logging dashboard is available in the console. ( LOG-2163 ) Before this update, changes to the metrics dashboards did not deploy because the cluster-logging-operator did not correctly compare existing and modified config maps that contain the dashboard. With this update, the addition of a unique hash value to object labels resolves the issue. ( LOG-2071 ) Before this update, the OpenShift Logging dashboard did not correctly display the pods and namespaces in the table, which displays the top producing containers collected over the last 24 hours. With this update, the pods and namespaces are displayed correctly. ( LOG-2069 ) Before this update, when the ClusterLogForwarder was set up with Elasticsearch OutputDefault and Elasticsearch outputs did not have structured keys, the generated configuration contained the incorrect values for authentication. This update corrects the secret and certificates used. ( LOG-2056 ) Before this update, the OpenShift Logging dashboard displayed an empty CPU graph because of a reference to an invalid metric. With this update, the correct data point has been selected, resolving the issue. ( LOG-2026 ) Before this update, the Fluentd container image included builder tools that were unnecessary at run time. This update removes those tools from the image. ( LOG-1927 ) Before this update, a name change of the deployed collector in the 5.3 release caused the logging collector to generate the FluentdNodeDown alert. This update resolves the issue by fixing the job name for the Prometheus alert. ( LOG-1918 ) Before this update, the log collector was collecting its own logs due to a refactoring that changed the component name. This led to a potential feedback loop of the collector processing its own log that might result in memory and log message size issues. This update resolves the issue by excluding the collector logs from the collection. ( LOG-1774 ) Before this update, Elasticsearch generated the error Unable to create PersistentVolumeClaim due to forbidden: exceeded quota: infra-storage-quota. if the PVC already existed. With this update, Elasticsearch checks for existing PVCs, resolving the issue. ( LOG-2131 ) Before this update, Elasticsearch was unable to return to the ready state when the elasticsearch-signing secret was removed. With this update, Elasticsearch is able to go back to the ready state after that secret is removed. ( LOG-2171 ) Before this update, the change of the path from which the collector reads container logs caused the collector to forward some records to the wrong indices. With this update, the collector now uses the correct configuration to resolve the issue.
( LOG-2160 ) Before this update, clusters with a large number of namespaces caused Elasticsearch to stop serving requests because the list of namespaces reached the maximum header size limit. With this update, headers only include a list of namespace names, resolving the issue. ( LOG-1899 ) Before this update, the OpenShift Container Platform Logging dashboard showed the number of shards 'x' times larger than the actual value when Elasticsearch had 'x' nodes. This issue occurred because it was printing all primary shards for each Elasticsearch pod and calculating a sum on it, although the output was always for the whole Elasticsearch cluster. With this update, the number of shards is now correctly calculated. ( LOG-2156 ) Before this update, the secrets kibana and kibana-proxy were not recreated if they were deleted manually. With this update, the elasticsearch-operator will watch the resources and automatically recreate them if deleted. ( LOG-2250 ) Before this update, tuning the buffer chunk size could cause the collector to generate a warning about the chunk size exceeding the byte limit for the event stream. With this update, you can also tune the read line limit, resolving the issue. ( LOG-2379 ) Before this update, the logging console link in OpenShift web console was not removed with the ClusterLogging CR. With this update, deleting the CR or uninstalling the Cluster Logging Operator removes the link. ( LOG-2373 ) Before this update, a change to the container logs path caused the collection metric to always be zero with older releases configured with the original path. With this update, the plugin which exposes metrics about collected logs supports reading from either path to resolve the issue. ( LOG-2462 ) 1.40.5. CVEs CVE-2022-0759 BZ-2058404 CVE-2022-21698 BZ-2045880 1.41. Logging 5.3.14 This release includes OpenShift Logging Bug Fix Release 5.3.14 . 1.41.1. Bug fixes Before this update, the log file size map generated by the log-file-metrics-exporter component did not remove entries for deleted files, resulting in increased file size, and process memory. With this update, the log file size map does not contain entries for deleted files. ( LOG-3293 ) 1.41.2. CVEs CVE-2016-3709 CVE-2020-35525 CVE-2020-35527 CVE-2020-36516 CVE-2020-36558 CVE-2021-3640 CVE-2021-30002 CVE-2022-0168 CVE-2022-0561 CVE-2022-0562 CVE-2022-0617 CVE-2022-0854 CVE-2022-0865 CVE-2022-0891 CVE-2022-0908 CVE-2022-0909 CVE-2022-0924 CVE-2022-1016 CVE-2022-1048 CVE-2022-1055 CVE-2022-1184 CVE-2022-1292 CVE-2022-1304 CVE-2022-1355 CVE-2022-1586 CVE-2022-1785 CVE-2022-1852 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2078 CVE-2022-2097 CVE-2022-2509 CVE-2022-2586 CVE-2022-2639 CVE-2022-2938 CVE-2022-3515 CVE-2022-20368 CVE-2022-21499 CVE-2022-21618 CVE-2022-21619 CVE-2022-21624 CVE-2022-21626 CVE-2022-21628 CVE-2022-22624 CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 CVE-2022-22844 CVE-2022-23960 CVE-2022-24448 CVE-2022-25255 CVE-2022-26373 CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 CVE-2022-27404 CVE-2022-27405 CVE-2022-27406 CVE-2022-27950 CVE-2022-28390 CVE-2022-28893 CVE-2022-29581 CVE-2022-30293 CVE-2022-34903 CVE-2022-36946 CVE-2022-37434 CVE-2022-39399 CVE-2022-42898 1.42. Logging 5.3.13 This release includes RHSA-2022:68828-OpenShift Logging Bug Fix Release 5.3.13 . 1.42.1. Bug fixes None. 1.42.2. CVEs Example 1.4. 
Click to expand CVEs CVE-2020-35525 CVE-2020-35527 CVE-2022-0494 CVE-2022-1353 CVE-2022-2509 CVE-2022-2588 CVE-2022-3515 CVE-2022-21618 CVE-2022-21619 CVE-2022-21624 CVE-2022-21626 CVE-2022-21628 CVE-2022-23816 CVE-2022-23825 CVE-2022-29900 CVE-2022-29901 CVE-2022-32149 CVE-2022-37434 CVE-2022-39399 CVE-2022-40674 1.43. Logging 5.3.12 This release includes OpenShift Logging Bug Fix Release 5.3.12 . 1.43.1. Bug fixes None. 1.43.2. CVEs CVE-2015-20107 CVE-2022-0391 CVE-2022-21123 CVE-2022-21125 CVE-2022-21166 CVE-2022-29154 CVE-2022-32206 CVE-2022-32208 CVE-2022-34903 1.44. Logging 5.3.11 This release includes OpenShift Logging Bug Fix Release 5.3.11 . 1.44.1. Bug fixes Before this update, the Operator did not ensure that the pod was ready, which caused the cluster to reach an inoperable state during a cluster restart. With this update, the Operator marks new pods as ready before continuing to a new pod during a restart, which resolves the issue. ( LOG-2871 ) 1.44.2. CVEs CVE-2022-1292 CVE-2022-1586 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 CVE-2022-30631 1.45. Logging 5.3.10 This release includes RHSA-2022:5908-OpenShift Logging Bug Fix Release 5.3.10 . 1.45.1. Bug fixes BZ-2100495 1.45.2. CVEs Example 1.5. Click to expand CVEs CVE-2021-38561 CVE-2021-40528 CVE-2022-1271 CVE-2022-1621 CVE-2022-1629 CVE-2022-21540 CVE-2022-21541 CVE-2022-22576 CVE-2022-25313 CVE-2022-25314 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-29824 CVE-2022-34169 1.46. Logging 5.3.9 This release includes RHBA-2022:5557-OpenShift Logging Bug Fix Release 5.3.9 . 1.46.1. Bug fixes Before this update, the logging collector included a path as a label for the metrics it produced. This path changed frequently and contributed to significant storage changes for the Prometheus server. With this update, the label has been dropped to resolve the issue and reduce storage consumption. ( LOG-2682 ) 1.46.2. CVEs Example 1.6. Click to expand CVEs CVE-2020-28915 CVE-2021-40528 CVE-2022-1271 CVE-2022-1621 CVE-2022-1629 CVE-2022-22576 CVE-2022-25313 CVE-2022-25314 CVE-2022-26691 CVE-2022-27666 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-29824 1.47. Logging 5.3.8 This release includes RHBA-2022:5010-OpenShift Logging Bug Fix Release 5.3.8 1.47.1. Bug fixes (None.) 1.47.2. CVEs Example 1.7. Click to expand CVEs CVE-2018-25032 CVE-2020-0404 CVE-2020-4788 CVE-2020-13974 CVE-2020-19131 CVE-2020-27820 CVE-2021-0941 CVE-2021-3612 CVE-2021-3634 CVE-2021-3669 CVE-2021-3737 CVE-2021-3743 CVE-2021-3744 CVE-2021-3752 CVE-2021-3759 CVE-2021-3764 CVE-2021-3772 CVE-2021-3773 CVE-2021-4002 CVE-2021-4037 CVE-2021-4083 CVE-2021-4157 CVE-2021-4189 CVE-2021-4197 CVE-2021-4203 CVE-2021-20322 CVE-2021-21781 CVE-2021-23222 CVE-2021-26401 CVE-2021-29154 CVE-2021-37159 CVE-2021-41617 CVE-2021-41864 CVE-2021-42739 CVE-2021-43056 CVE-2021-43389 CVE-2021-43976 CVE-2021-44733 CVE-2021-45485 CVE-2021-45486 CVE-2022-0001 CVE-2022-0002 CVE-2022-0286 CVE-2022-0322 CVE-2022-1011 CVE-2022-1271 1.48. OpenShift Logging 5.3.7 This release includes RHSA-2022:2217 OpenShift Logging Bug Fix Release 5.3.7 1.48.1. Bug fixes Before this update, Linux audit log time parsing relied on an ordinal position of key/value pair. This update changes the parsing to utilize a regex to find the time entry. ( LOG-2322 ) Before this update, some log forwarder outputs could re-order logs with the same time-stamp. With this update, a sequence number has been added to the log record to order entries that have matching timestamps. 
( LOG-2334 ) Before this update, clusters with a large number of namespaces caused Elasticsearch to stop serving requests because the list of namespaces reached the maximum header size limit. With this update, headers only include a list of namespace names, resolving the issue. ( LOG-2450 ) Before this update, system:serviceaccount:openshift-monitoring:prometheus-k8s had cluster level privileges as a clusterrole and clusterrolebinding . This update restricts the serviceaccount to the openshift-logging namespace with a role and rolebinding. ( LOG-2481) ) 1.48.2. CVEs Example 1.8. Click to expand CVEs CVE-2018-25032 CVE-2021-4028 CVE-2021-37136 CVE-2021-37137 CVE-2021-43797 CVE-2022-0759 CVE-2022-0778 CVE-2022-1154 CVE-2022-1271 CVE-2022-21426 CVE-2022-21434 CVE-2022-21443 CVE-2022-21476 CVE-2022-21496 CVE-2022-21698 CVE-2022-25636 1.49. OpenShift Logging 5.3.6 This release includes RHBA-2022:1377 OpenShift Logging Bug Fix Release 5.3.6 1.49.1. Bug fixes Before this update, defining a toleration with no key and the existing Operator caused the Operator to be unable to complete an upgrade. With this update, this toleration no longer blocks the upgrade from completing. ( LOG-2126 ) Before this change, it was possible for the collector to generate a warning where the chunk byte limit was exceeding an emitted event. With this change, you can tune the readline limit to resolve the issue as advised by the upstream documentation. ( LOG-2380 ) 1.50. OpenShift Logging 5.3.5 This release includes RHSA-2022:0721 OpenShift Logging Bug Fix Release 5.3.5 1.50.1. Bug fixes Before this update, if you removed OpenShift Logging from OpenShift Container Platform, the web console continued displaying a link to the Logging page. With this update, removing or uninstalling OpenShift Logging also removes that link. ( LOG-2182 ) 1.50.2. CVEs Example 1.9. Click to expand CVEs CVE-2020-28491 CVE-2021-3521 CVE-2021-3872 CVE-2021-3984 CVE-2021-4019 CVE-2021-4122 CVE-2021-4192 CVE-2021-4193 CVE-2022-0552 1.51. OpenShift Logging 5.3.4 This release includes RHBA-2022:0411 OpenShift Logging Bug Fix Release 5.3.4 1.51.1. Bug fixes Before this update, changes to the metrics dashboards had not yet been deployed because the cluster-logging-operator did not correctly compare existing and desired config maps that contained the dashboard. This update fixes the logic by adding a unique hash value to the object labels. ( LOG-2066 ) Before this update, Elasticsearch pods failed to start after updating with FIPS enabled. With this update, Elasticsearch pods start successfully. ( LOG-1974 ) Before this update, elasticsearch generated the error "Unable to create PersistentVolumeClaim due to forbidden: exceeded quota: infra-storage-quota." if the PVC already existed. With this update, elasticsearch checks for existing PVCs, resolving the issue. ( LOG-2127 ) 1.51.2. CVEs Example 1.10. Click to expand CVEs CVE-2021-3521 CVE-2021-3872 CVE-2021-3984 CVE-2021-4019 CVE-2021-4122 CVE-2021-4155 CVE-2021-4192 CVE-2021-4193 CVE-2022-0185 CVE-2022-21248 CVE-2022-21277 CVE-2022-21282 CVE-2022-21283 CVE-2022-21291 CVE-2022-21293 CVE-2022-21294 CVE-2022-21296 CVE-2022-21299 CVE-2022-21305 CVE-2022-21340 CVE-2022-21341 CVE-2022-21360 CVE-2022-21365 CVE-2022-21366 1.52. OpenShift Logging 5.3.3 This release includes RHSA-2022:0227 OpenShift Logging Bug Fix Release 5.3.3 1.52.1. 
Bug fixes Before this update, changes to the metrics dashboards had not yet been deployed because the cluster-logging-operator did not correctly compare existing and desired configmaps containing the dashboard. This update fixes the logic by adding a dashboard unique hash value to the object labels.( LOG-2066 ) This update changes the log4j dependency to 2.17.1 to resolve CVE-2021-44832 .( LOG-2102 ) 1.52.2. CVEs Example 1.11. Click to expand CVEs CVE-2021-27292 BZ-1940613 CVE-2021-44832 BZ-2035951 1.53. OpenShift Logging 5.3.2 This release includes RHSA-2022:0044 OpenShift Logging Bug Fix Release 5.3.2 1.53.1. Bug fixes Before this update, Elasticsearch rejected logs from the Event Router due to a parsing error. This update changes the data model to resolve the parsing error. However, as a result, indices might cause warnings or errors within Kibana. The kubernetes.event.metadata.resourceVersion field causes errors until existing indices are removed or reindexed. If this field is not used in Kibana, you can ignore the error messages. If you have a retention policy that deletes old indices, the policy eventually removes the old indices and stops the error messages. Otherwise, manually reindex to stop the error messages. ( LOG-2087 ) Before this update, the OpenShift Logging Dashboard displayed the wrong pod namespace in the table that displays top producing and collected containers over the last 24 hours. With this update, the OpenShift Logging Dashboard displays the correct pod namespace. ( LOG-2051 ) Before this update, if outputDefaults.elasticsearch.structuredTypeKey in the ClusterLogForwarder custom resource (CR) instance did not have a structured key, the CR replaced the output secret with the default secret used to communicate to the default log store. With this update, the defined output secret is correctly used. ( LOG-2046 ) 1.53.2. CVEs Example 1.12. Click to expand CVEs CVE-2020-36327 BZ-1958999 CVE-2021-45105 BZ-2034067 CVE-2021-3712 CVE-2021-20321 CVE-2021-42574 1.54. OpenShift Logging 5.3.1 This release includes RHSA-2021:5129 OpenShift Logging Bug Fix Release 5.3.1 1.54.1. Bug fixes Before this update, the Fluentd container image included builder tools that were unnecessary at run time. This update removes those tools from the image. ( LOG-1998 ) Before this update, the Logging dashboard displayed an empty CPU graph because of a reference to an invalid metric. With this update, the Logging dashboard displays CPU graphs correctly. ( LOG-1925 ) Before this update, the Elasticsearch Prometheus exporter plugin compiled index-level metrics using a high-cost query that impacted the Elasticsearch node performance. This update implements a lower-cost query that improves performance. ( LOG-1897 ) 1.54.2. CVEs Example 1.13. 
Click to expand CVEs CVE-2021-21409 BZ-1944888 CVE-2021-37136 BZ-2004133 CVE-2021-37137 BZ-2004135 CVE-2021-44228 BZ-2030932 CVE-2018-25009 CVE-2018-25010 CVE-2018-25012 CVE-2018-25013 CVE-2018-25014 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14145 CVE-2020-14155 CVE-2020-16135 CVE-2020-17541 CVE-2020-24370 CVE-2020-35521 CVE-2020-35522 CVE-2020-35523 CVE-2020-35524 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332 CVE-2021-3200 CVE-2021-3426 CVE-2021-3445 CVE-2021-3481 CVE-2021-3572 CVE-2021-3580 CVE-2021-3712 CVE-2021-3800 CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 CVE-2021-20317 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-27645 CVE-2021-28153 CVE-2021-31535 CVE-2021-33560 CVE-2021-33574 CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-42574 CVE-2021-43267 CVE-2021-43527 CVE-2021-45046 1.55. OpenShift Logging 5.3.0 This release includes RHSA-2021:4627 OpenShift Logging Bug Fix Release 5.3.0 1.55.1. New features and enhancements With this update, authorization options for Log Forwarding have been expanded. Outputs may now be configured with SASL, username/password, or TLS. 1.55.2. Bug fixes Before this update, if you forwarded logs using the syslog protocol, serializing a ruby hash encoded key/value pairs to contain a '⇒' character and replaced tabs with "#11". This update fixes the issue so that log messages are correctly serialized as valid JSON. ( LOG-1494 ) Before this update, application logs were not correctly configured to forward to the proper Cloudwatch stream with multi-line error detection enabled. ( LOG-1939 ) Before this update, a name change of the deployed collector in the 5.3 release caused the alert 'fluentnodedown' to generate. ( LOG-1918 ) Before this update, a regression introduced in a prior release configuration caused the collector to flush its buffered messages before shutdown, creating a delay the termination and restart of collector Pods. With this update, fluentd no longer flushes buffers at shutdown, resolving the issue. ( LOG-1735 ) Before this update, a regression introduced in a prior release intentionally disabled JSON message parsing. This update re-enables JSON parsing. It also sets the log entry "level" based on the "level" field in parsed JSON message or by using regex to extract a match from a message field. ( LOG-1199 ) Before this update, the ClusterLogging custom resource (CR) applied the value of the totalLimitSize field to the Fluentd total_limit_size field, even if the required buffer space was not available. With this update, the CR applies the lesser of the two totalLimitSize or 'default' values to the Fluentd total_limit_size field, resolving the issue. ( LOG-1776 ) 1.55.3. Known issues If you forward logs to an external Elasticsearch server and then change a configured value in the pipeline secret, such as the username and password, the Fluentd forwarder loads the new secret but uses the old value to connect to an external Elasticsearch server. This issue happens because the Red Hat OpenShift Logging Operator does not currently monitor secrets for content changes. ( LOG-1652 ) As a workaround, if you change the secret, you can force the Fluentd pods to redeploy by entering: USD oc delete pod -l component=collector 1.55.4. Deprecated and removed features Some features available in releases have been deprecated or removed. 
Deprecated functionality is still included in OpenShift Logging and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 1.55.4.1. Forwarding logs using the legacy Fluentd and legacy syslog methods have been removed In OpenShift Logging 5.3, the legacy methods of forwarding logs to Syslog and Fluentd are removed. Bug fixes and support are provided through the end of the OpenShift Logging 5.2 life cycle. After which, no new feature enhancements are made. Instead, use the following non-legacy methods: Forwarding logs using the Fluentd forward protocol Forwarding logs using the syslog protocol 1.55.4.2. Configuration mechanisms for legacy forwarding methods have been removed In OpenShift Logging 5.3, the legacy configuration mechanism for log forwarding is removed: You cannot forward logs using the legacy Fluentd method and legacy Syslog method. Use the standard log forwarding methods instead. 1.55.5. CVEs Example 1.14. Click to expand CVEs CVE-2018-20673 CVE-2018-25009 CVE-2018-25010 CVE-2018-25012 CVE-2018-25013 CVE-2018-25014 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-14615 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-0427 CVE-2020-10001 CVE-2020-12762 CVE-2020-13435 CVE-2020-14145 CVE-2020-14155 CVE-2020-16135 CVE-2020-17541 CVE-2020-24370 CVE-2020-24502 CVE-2020-24503 CVE-2020-24504 CVE-2020-24586 CVE-2020-24587 CVE-2020-24588 CVE-2020-26139 CVE-2020-26140 CVE-2020-26141 CVE-2020-26143 CVE-2020-26144 CVE-2020-26145 CVE-2020-26146 CVE-2020-26147 CVE-2020-27777 CVE-2020-29368 CVE-2020-29660 CVE-2020-35448 CVE-2020-35521 CVE-2020-35522 CVE-2020-35523 CVE-2020-35524 CVE-2020-36158 CVE-2020-36312 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332 CVE-2020-36386 CVE-2021-0129 CVE-2021-3200 CVE-2021-3348 CVE-2021-3426 CVE-2021-3445 CVE-2021-3481 CVE-2021-3487 CVE-2021-3489 CVE-2021-3564 CVE-2021-3572 CVE-2021-3573 CVE-2021-3580 CVE-2021-3600 CVE-2021-3635 CVE-2021-3659 CVE-2021-3679 CVE-2021-3732 CVE-2021-3778 CVE-2021-3796 CVE-2021-3800 CVE-2021-20194 CVE-2021-20197 CVE-2021-20231 CVE-2021-20232 CVE-2021-20239 CVE-2021-20266 CVE-2021-20284 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-23133 CVE-2021-23840 CVE-2021-23841 CVE-2021-27645 CVE-2021-28153 CVE-2021-28950 CVE-2021-28971 CVE-2021-29155 lCVE-2021-29646 CVE-2021-29650 CVE-2021-31440 CVE-2021-31535 CVE-2021-31829 CVE-2021-31916 CVE-2021-33033 CVE-2021-33194 CVE-2021-33200 CVE-2021-33560 CVE-2021-33574 CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-42574 1.56. Logging 5.2.13 This release includes RHSA-2022:5909-OpenShift Logging Bug Fix Release 5.2.13 . 1.56.1. Bug fixes BZ-2100495 1.56.2. CVEs Example 1.15. Click to expand CVEs CVE-2021-38561 CVE-2021-40528 CVE-2022-1271 CVE-2022-1621 CVE-2022-1629 CVE-2022-21540 CVE-2022-21541 CVE-2022-22576 CVE-2022-25313 CVE-2022-25314 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-29824 CVE-2022-34169 1.57. Logging 5.2.12 This release includes RHBA-2022:5558-OpenShift Logging Bug Fix Release 5.2.12 . 1.57.1. Bug fixes None. 1.57.2. CVEs Example 1.16. Click to expand CVEs CVE-2020-28915 CVE-2021-40528 CVE-2022-1271 CVE-2022-1621 CVE-2022-1629 CVE-2022-22576 CVE-2022-25313 CVE-2022-25314 CVE-2022-26691 CVE-2022-27666 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-29824 1.58. Logging 5.2.11 This release includes RHBA-2022:5012-OpenShift Logging Bug Fix Release 5.2.11 1.58.1. 
Bug fixes Before this update, clusters configured to perform CloudWatch forwarding wrote rejected log files to temporary storage, causing cluster instability over time. With this update, chunk backup for CloudWatch has been disabled, resolving the issue. ( LOG-2635 ) 1.58.2. CVEs Example 1.17. Click to expand CVEs CVE-2018-25032 CVE-2020-0404 CVE-2020-4788 CVE-2020-13974 CVE-2020-19131 CVE-2020-27820 CVE-2021-0941 CVE-2021-3612 CVE-2021-3634 CVE-2021-3669 CVE-2021-3737 CVE-2021-3743 CVE-2021-3744 CVE-2021-3752 CVE-2021-3759 CVE-2021-3764 CVE-2021-3772 CVE-2021-3773 CVE-2021-4002 CVE-2021-4037 CVE-2021-4083 CVE-2021-4157 CVE-2021-4189 CVE-2021-4197 CVE-2021-4203 CVE-2021-20322 CVE-2021-21781 CVE-2021-23222 CVE-2021-26401 CVE-2021-29154 CVE-2021-37159 CVE-2021-41617 CVE-2021-41864 CVE-2021-42739 CVE-2021-43056 CVE-2021-43389 CVE-2021-43976 CVE-2021-44733 CVE-2021-45485 CVE-2021-45486 CVE-2022-0001 CVE-2022-0002 CVE-2022-0286 CVE-2022-0322 CVE-2022-1011 CVE-2022-1271 1.59. OpenShift Logging 5.2.10 This release includes OpenShift Logging Bug Fix Release 5.2.10 1.59.1. Bug fixes Before this update, some log forwarder outputs could re-order logs with the same timestamp. With this update, a sequence number has been added to the log record to order entries that have matching timestamps. ( LOG-2335 ) Before this update, clusters with a large number of namespaces caused Elasticsearch to stop serving requests because the list of namespaces reached the maximum header size limit. With this update, headers only include a list of namespace names, resolving the issue. ( LOG-2475 ) Before this update, system:serviceaccount:openshift-monitoring:prometheus-k8s had cluster level privileges as a clusterrole and clusterrolebinding. This update restricts the serviceaccount to the openshift-logging namespace with a role and rolebinding. ( LOG-2480 ) Before this update, the cluster-logging-operator utilized cluster scoped roles and bindings to establish permissions for the Prometheus service account to scrape metrics. These permissions were only created when deploying the Operator using the console interface and were missing when the Operator was deployed from the command line. This update fixes the issue by making this role and binding namespace scoped. ( LOG-1972 ) 1.59.2. CVEs Example 1.18. Click to expand CVEs CVE-2018-25032 CVE-2021-4028 CVE-2021-37136 CVE-2021-37137 CVE-2021-43797 CVE-2022-0778 CVE-2022-1154 CVE-2022-1271 CVE-2022-21426 CVE-2022-21434 CVE-2022-21443 CVE-2022-21476 CVE-2022-21496 CVE-2022-21698 CVE-2022-25636 1.60. OpenShift Logging 5.2.9 This release includes RHBA-2022:1375 OpenShift Logging Bug Fix Release 5.2.9 1.60.1. Bug fixes Before this update, defining a toleration with no key and the existing Operator caused the Operator to be unable to complete an upgrade. With this update, this toleration no longer blocks the upgrade from completing. ( LOG-2304 ) 1.61. OpenShift Logging 5.2.8 This release includes RHSA-2022:0728 OpenShift Logging Bug Fix Release 5.2.8 1.61.1. Bug fixes Before this update, if you removed OpenShift Logging from OpenShift Container Platform, the web console continued displaying a link to the Logging page. With this update, removing or uninstalling OpenShift Logging also removes that link. ( LOG-2180 ) 1.61.2. CVEs Example 1.19. Click to expand CVEs CVE-2020-28491 BZ-1930423 CVE-2022-0552 BZ-2052539 1.62. OpenShift Logging 5.2.7 This release includes RHBA-2022:0478 OpenShift Logging Bug Fix Release 5.2.7 1.62.1.
Bug fixes Before this update, Elasticsearch pods with FIPS enabled failed to start after updating. With this update, Elasticsearch pods start successfully. ( LOG-2000 ) Before this update, if a persistent volume claim (PVC) already existed, Elasticsearch generated an error, "Unable to create PersistentVolumeClaim due to forbidden: exceeded quota: infra-storage-quota." With this update, Elasticsearch checks for existing PVCs, resolving the issue. ( LOG-2118 ) 1.62.2. CVEs Example 1.20. Click to expand CVEs CVE-2021-3521 CVE-2021-3872 CVE-2021-3984 CVE-2021-4019 CVE-2021-4122 CVE-2021-4155 CVE-2021-4192 CVE-2021-4193 CVE-2022-0185 1.63. OpenShift Logging 5.2.6 This release includes RHSA-2022:0230 OpenShift Logging Bug Fix Release 5.2.6 1.63.1. Bug fixes Before this update, the release did not include a filter change which caused Fluentd to crash. With this update, the missing filter has been corrected. ( LOG-2104 ) This update changes the log4j dependency to 2.17.1 to resolve CVE-2021-44832 .( LOG-2101 ) 1.63.2. CVEs Example 1.21. Click to expand CVEs CVE-2021-27292 BZ-1940613 CVE-2021-44832 BZ-2035951 1.64. OpenShift Logging 5.2.5 This release includes RHSA-2022:0043 OpenShift Logging Bug Fix Release 5.2.5 1.64.1. Bug fixes Before this update, Elasticsearch rejected logs from the Event Router due to a parsing error. This update changes the data model to resolve the parsing error. However, as a result, indices might cause warnings or errors within Kibana. The kubernetes.event.metadata.resourceVersion field causes errors until existing indices are removed or reindexed. If this field is not used in Kibana, you can ignore the error messages. If you have a retention policy that deletes old indices, the policy eventually removes the old indices and stops the error messages. Otherwise, manually reindex to stop the error messages. LOG-2087 ) 1.64.2. CVEs Example 1.22. Click to expand CVEs CVE-2021-3712 CVE-2021-20321 CVE-2021-42574 CVE-2021-45105 1.65. OpenShift Logging 5.2.4 This release includes RHSA-2021:5127 OpenShift Logging Bug Fix Release 5.2.4 1.65.1. Bug fixes Before this update, records shipped via syslog would serialize a ruby hash encoding key/value pairs to contain a '⇒' character, as well as replace tabs with "#11". This update serializes the message correctly as proper JSON. ( LOG-1775 ) Before this update, the Elasticsearch Prometheus exporter plugin compiled index-level metrics using a high-cost query that impacted the Elasticsearch node performance. This update implements a lower-cost query that improves performance. ( LOG-1970 ) Before this update, Elasticsearch sometimes rejected messages when Log Forwarding was configured with multiple outputs. This happened because configuring one of the outputs modified message content to be a single message. With this update, Log Forwarding duplicates the messages for each output so that output-specific processing does not affect the other outputs. ( LOG-1824 ) 1.65.2. CVEs Example 1.23. 
Click to expand CVEs CVE-2018-25009 CVE-2018-25010 CVE-2018-25012 CVE-2018-25013 CVE-2018-25014 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14145 CVE-2020-14155 CVE-2020-16135 CVE-2020-17541 CVE-2020-24370 CVE-2020-35521 CVE-2020-35522 CVE-2020-35523 CVE-2020-35524 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332 CVE-2021-3200 CVE-2021-3426 CVE-2021-3445 CVE-2021-3481 CVE-2021-3572 CVE-2021-3580 CVE-2021-3712 CVE-2021-3800 CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 CVE-2021-20317 CVE-2021-21409 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-27645 CVE-2021-28153 CVE-2021-31535 CVE-2021-33560 CVE-2021-33574 CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-37136 CVE-2021-37137 CVE-2021-42574 CVE-2021-43267 CVE-2021-43527 CVE-2021-44228 CVE-2021-45046 1.66. OpenShift Logging 5.2.3 This release includes RHSA-2021:4032 OpenShift Logging Bug Fix Release 5.2.3 1.66.1. Bug fixes Before this update, some alerts did not include a namespace label. This omission does not comply with the OpenShift Monitoring Team's guidelines for writing alerting rules in OpenShift Container Platform. With this update, all the alerts in Elasticsearch Operator include a namespace label and follow all the guidelines for writing alerting rules in OpenShift Container Platform. ( LOG-1857 ) Before this update, a regression introduced in a prior release intentionally disabled JSON message parsing. This update re-enables JSON parsing. It also sets the log entry level based on the level field in parsed JSON message or by using regex to extract a match from a message field. ( LOG-1759 ) 1.66.2. CVEs Example 1.24. Click to expand CVEs CVE-2021-23369 BZ-1948761 CVE-2021-23383 BZ-1956688 CVE-2018-20673 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 CVE-2020-16135 CVE-2020-24370 CVE-2021-3200 CVE-2021-3426 CVE-2021-3445 CVE-2021-3572 CVE-2021-3580 CVE-2021-3778 CVE-2021-3796 CVE-2021-3800 CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-23840 CVE-2021-23841 CVE-2021-27645 CVE-2021-28153 CVE-2021-33560 CVE-2021-33574 CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 1.67. OpenShift Logging 5.2.2 This release includes RHBA-2021:3747 OpenShift Logging Bug Fix Release 5.2.2 1.67.1. Bug fixes Before this update, the ClusterLogging custom resource (CR) applied the value of the totalLimitSize field to the Fluentd total_limit_size field, even if the required buffer space was not available. With this update, the CR applies the lesser of the two totalLimitSize or 'default' values to the Fluentd total_limit_size field, resolving the issue.( LOG-1738 ) Before this update, a regression introduced in a prior release configuration caused the collector to flush its buffered messages before shutdown, creating a delay to the termination and restart of collector pods. With this update, Fluentd no longer flushes buffers at shutdown, resolving the issue. ( LOG-1739 ) Before this update, an issue in the bundle manifests prevented installation of the Elasticsearch Operator through OLM on OpenShift Container Platform 4.9. With this update, a correction to bundle manifests re-enables installation and upgrade in 4.9.( LOG-1780 ) 1.67.2. CVEs Example 1.25. 
Click to expand CVEs CVE-2020-25648 CVE-2021-22922 CVE-2021-22923 CVE-2021-22924 CVE-2021-36222 CVE-2021-37576 CVE-2021-37750 CVE-2021-38201 1.68. OpenShift Logging 5.2.1 This release includes RHBA-2021:3550 OpenShift Logging Bug Fix Release 5.2.1 1.68.1. Bug fixes Before this update, due to an issue in the release pipeline scripts, the value of the olm.skipRange field remained unchanged at 5.2.0 instead of reflecting the current release number. This update fixes the pipeline scripts to update the value of this field when the release numbers change. ( LOG-1743 ) 1.68.2. CVEs (None) 1.69. OpenShift Logging 5.2.0 This release includes RHBA-2021:3393 OpenShift Logging Bug Fix Release 5.2.0 1.69.1. New features and enhancements With this update, you can forward log data to Amazon CloudWatch, which provides application and infrastructure monitoring. For more information, see Forwarding logs to Amazon CloudWatch . ( LOG-1173 ) With this update, you can forward log data to Loki, a horizontally scalable, highly available, multi-tenant log aggregation system. For more information, see Forwarding logs to Loki . ( LOG-684 ) With this update, if you use the Fluentd forward protocol to forward log data over a TLS-encrypted connection, now you can use a password-encrypted private key file and specify the passphrase in the Cluster Log Forwarder configuration. For more information, see Forwarding logs using the Fluentd forward protocol . ( LOG-1525 ) This enhancement enables you to use a username and password to authenticate a log forwarding connection to an external Elasticsearch instance. For example, if you cannot use mutual TLS (mTLS) because a third-party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. For more information, see Forwarding logs to an external Elasticsearch instance . ( LOG-1022 ) With this update, you can collect OVN network policy audit logs for forwarding to a logging server. ( LOG-1526 ) By default, the data model introduced in OpenShift Container Platform 4.5 gave logs from different namespaces a single index in common. This change made it harder to see which namespaces produced the most logs. The current release adds namespace metrics to the Logging dashboard in the OpenShift Container Platform console. With these metrics, you can see which namespaces produce logs and how many logs each namespace produces for a given timestamp. To see these metrics, open the Administrator perspective in the OpenShift Container Platform web console, and navigate to Observe Dashboards Logging/Elasticsearch . ( LOG-1680 ) The current release, OpenShift Logging 5.2, enables two new metrics: For a given timestamp or duration, you can see the total logs produced or logged by individual containers, and the total logs collected by the collector. These metrics are labeled by namespace, pod, and container name so that you can see how many logs each namespace and pod collects and produces. ( LOG-1213 ) 1.69.2. Bug fixes Before this update, when the OpenShift Elasticsearch Operator created index management cronjobs, it added the POLICY_MAPPING environment variable twice, which caused the apiserver to report the duplication. This update fixes the issue so that the POLICY_MAPPING environment variable is set only once per cronjob, and there is no duplication for the apiserver to report. 
( LOG-1130 ) Before this update, suspending an Elasticsearch cluster to zero nodes did not suspend the index-management cronjobs, which put these cronjobs into maximum backoff. Then, after unsuspending the Elasticsearch cluster, these cronjobs stayed halted due to maximum backoff reached. This update resolves the issue by suspending the cronjobs and the cluster. ( LOG-1268 ) Before this update, in the Logging dashboard in the OpenShift Container Platform console, the list of top 10 log-producing containers was missing the "chart namespace" label and provided the incorrect metric name, fluentd_input_status_total_bytes_logged . With this update, the chart shows the namespace label and the correct metric name, log_logged_bytes_total . ( LOG-1271 ) Before this update, if an index management cronjob terminated with an error, it did not report the error exit code: instead, its job status was "complete." This update resolves the issue by reporting the error exit codes of index management cronjobs that terminate with errors. ( LOG-1273 ) The priorityclasses.v1beta1.scheduling.k8s.io was removed in 1.22 and replaced by priorityclasses.v1.scheduling.k8s.io ( v1beta1 was replaced by v1 ). Before this update, APIRemovedInNextReleaseInUse alerts were generated for priorityclasses because v1beta1 was still present . This update resolves the issue by replacing v1beta1 with v1 . The alert is no longer generated. ( LOG-1385 ) Previously, the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator did not have the annotation that was required for them to appear in the OpenShift Container Platform web console list of Operators that can run in a disconnected environment. This update adds the operators.openshift.io/infrastructure-features: '["Disconnected"]' annotation to these two Operators so that they appear in the list of Operators that run in disconnected environments. ( LOG-1420 ) Before this update, Red Hat OpenShift Logging Operator pods were scheduled on CPU cores that were reserved for customer workloads on performance-optimized single-node clusters. With this update, cluster logging Operator pods are scheduled on the correct CPU cores. ( LOG-1440 ) Before this update, some log entries had unrecognized UTF-8 bytes, which caused Elasticsearch to reject the messages and block the entire buffered payload. With this update, rejected payloads drop the invalid log entries and resubmit the remaining entries to resolve the issue. ( LOG-1499 ) Before this update, the kibana-proxy pod sometimes entered the CrashLoopBackoff state and logged the following message Invalid configuration: cookie_secret must be 16, 24, or 32 bytes to create an AES cipher when pass_access_token == true or cookie_refresh != 0, but is 29 bytes. The exact actual number of bytes could vary. With this update, the generation of the Kibana session secret has been corrected, and the kibana-proxy pod no longer enters a CrashLoopBackoff state due to this error. ( LOG-1446 ) Before this update, the AWS CloudWatch Fluentd plugin logged its AWS API calls to the Fluentd log at all log levels, consuming additional OpenShift Container Platform node resources. With this update, the AWS CloudWatch Fluentd plugin logs AWS API calls only at the "debug" and "trace" log levels. This way, at the default "warn" log level, Fluentd does not consume extra node resources. ( LOG-1071 ) Before this update, the Elasticsearch OpenDistro security plugin caused user index migrations to fail. 
This update resolves the issue by providing a newer version of the plugin. Now, index migrations proceed without errors. ( LOG-1276 ) Before this update, in the Logging dashboard in the OpenShift Container Platform console, the list of top 10 log-producing containers lacked data points. This update resolves the issue, and the dashboard displays all data points. ( LOG-1353 ) Before this update, if you were tuning the performance of the Fluentd log forwarder by adjusting the chunkLimitSize and totalLimitSize values, the Setting queued_chunks_limit_size for each buffer to message reported values that were too low. The current update fixes this issue so that this message reports the correct values. ( LOG-1411 ) Before this update, the Kibana OpenDistro security plugin caused user index migrations to fail. This update resolves the issue by providing a newer version of the plugin. Now, index migrations proceed without errors. ( LOG-1558 ) Before this update, using a namespace input filter prevented logs in that namespace from appearing in other inputs. With this update, logs are sent to all inputs that can accept them. ( LOG-1570 ) Before this update, a missing license file for the viaq/logerr dependency caused license scanners to abort without success. With this update, the viaq/logerr dependency is licensed under Apache 2.0 and the license scanners run successfully. ( LOG-1590 ) Before this update, an incorrect brew tag for curator5 within the elasticsearch-operator-bundle build pipeline caused the pull of an image pinned to a dummy SHA1. With this update, the build pipeline uses the logging-curator5-rhel8 reference for curator5 , enabling index management cronjobs to pull the correct image from registry.redhat.io . ( LOG-1624 ) Before this update, an issue with the ServiceAccount permissions caused errors such as no permissions for [indices:admin/aliases/get] . With this update, a permission fix resolves the issue. ( LOG-1657 ) Before this update, the Custom Resource Definition (CRD) for the Red Hat OpenShift Logging Operator was missing the Loki output type, which caused the admission controller to reject the ClusterLogForwarder custom resource object. With this update, the CRD includes Loki as an output type so that administrators can configure ClusterLogForwarder to send logs to a Loki server. ( LOG-1683 ) Before this update, OpenShift Elasticsearch Operator reconciliation of the ServiceAccounts overwrote third-party-owned fields that contained secrets. This issue caused memory and CPU spikes due to frequent recreation of secrets. This update resolves the issue. Now, the OpenShift Elasticsearch Operator does not overwrite third-party-owned fields. ( LOG-1714 ) Before this update, in the ClusterLogging custom resource (CR) definition, if you specified a flush_interval value but did not set flush_mode to interval , the Red Hat OpenShift Logging Operator generated a Fluentd configuration. However, the Fluentd collector generated an error at runtime. With this update, the Red Hat OpenShift Logging Operator validates the ClusterLogging CR definition and only generates the Fluentd configuration if both fields are specified. ( LOG-1723 ) 1.69.3. Known issues If you forward logs to an external Elasticsearch server and then change a configured value in the pipeline secret, such as the username and password, the Fluentd forwarder loads the new secret but uses the old value to connect to an external Elasticsearch server. 
This issue happens because the Red Hat OpenShift Logging Operator does not currently monitor secrets for content changes. ( LOG-1652 ) As a workaround, if you change the secret, you can force the Fluentd pods to redeploy by entering: USD oc delete pod -l component=collector 1.69.4. Deprecated and removed features Some features available in previous releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Logging and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 1.69.5. Forwarding logs using the legacy Fluentd and legacy syslog methods have been deprecated From OpenShift Container Platform 4.6 to the present, forwarding logs by using the following legacy methods have been deprecated and will be removed in a future release: Forwarding logs using the legacy Fluentd method Forwarding logs using the legacy syslog method Instead, use the following non-legacy methods: Forwarding logs using the Fluentd forward protocol Forwarding logs using the syslog protocol 1.69.6. CVEs Example 1.26. Click to expand CVEs CVE-2021-22922 CVE-2021-22923 CVE-2021-22924 CVE-2021-32740 CVE-2021-36222 CVE-2021-37750

Commands and resource examples referenced in these release notes:

tls.verify_certificate = false
tls.verify_hostname = false

oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}'

oc -n openshift-logging edit ClusterLogging instance

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
  annotations:
    logging.openshift.io/preview-vector-collector: enabled
spec:
  collection:
    logs:
      type: "vector"
      vector: {}

oc delete pod -l component=collector
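To make the username and password authentication enhancement from section 1.69.1 more concrete, here is a minimal sketch of a ClusterLogForwarder output that points at an external Elasticsearch instance; the secret name es-secret, the URL, and the pipeline name are assumptions for illustration, not values taken from these release notes:

# Create the credentials secret; the "username" and "password" keys are what the forwarder reads.
oc -n openshift-logging create secret generic es-secret \
  --from-literal=username=fluentd \
  --from-literal=password='<password>'

# Define an output that uses the secret and a pipeline that sends application logs to it.
oc apply -f - <<'EOF'
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: external-es
    type: elasticsearch
    url: https://elasticsearch.example.com:9200
    secret:
      name: es-secret
  pipelines:
  - name: application-logs
    inputRefs:
    - application
    outputRefs:
    - external-es
EOF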
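Following on from that sketch, the known-issue workaround for LOG-1652 described above might look like this when the credentials in that hypothetical es-secret change:

# Recreate the secret with the new password.
oc -n openshift-logging create secret generic es-secret \
  --from-literal=username=fluentd \
  --from-literal=password='<new-password>' \
  --dry-run=client -o yaml | oc apply -f -

# The Operator does not watch secret contents, so force the collector pods to redeploy.
oc delete pod -n openshift-logging -l component=collector

Deleting the collector pods simply makes the replacement pods mount the updated secret at startup.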
7.55. febootstrap | 7.55. febootstrap 7.55.1. RHBA-2013:0432 - febootstrap bug fix update Updated febootstrap packages that fix one bug are now available for Red Hat Enterprise Linux 6. The febootstrap packages provide a tool to create a basic Red Hat Enterprise Linux or Fedora file system, and build initramfs (initrd.img) or file system images. Bug Fix BZ#803962 The "febootstrap-supermin-helper" program is used when opening a disk image using the libguestfs API, or as part of virt-v2v conversion. Previously, this tool did not always handle the "-u" and "-g" options correctly when the host used an LDAP server to resolve user names and group names. This caused the virt-v2v command to fail when LDAP was in use. With this update, the "febootstrap-supermin-helper" program has been modified to parse the "-u" and "-g" options correctly, so that virt-v2v works as expected in the described scenario. Users of febootstrap are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/febootstrap |
Chapter 16. Updating Red Hat Certificate System To update Certificate System and the operating system it is running on, use the dnf update command. For example:

dnf update

This downloads, verifies, and installs updates for Certificate System as well as operating system packages. You can verify the version number before and after updating packages, to confirm they were successfully installed. Important Updating Certificate System requires the PKI infrastructure to be restarted. We suggest scheduling a maintenance window to take the PKI infrastructure offline and install the update. To optionally download updates without installing, use the --downloadonly option in the above procedure:

dnf update --downloadonly

The downloaded packages are stored in the /var/cache/yum/ directory. A later dnf update will use these packages if they are still the latest versions.
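As a loose illustration of how an update window might look, the following sketch checks the installed version before and after the update; the pki-server package and the pki-tomcat instance name are assumptions about a typical deployment, not requirements from this chapter.

# Record the installed version before the update (package name assumed).
rpm -q pki-server

# Download the packages ahead of the maintenance window, then install them during it.
dnf update --downloadonly
dnf update

# Confirm the new version and restart the PKI instance; "pki-tomcat" is an assumed instance name.
rpm -q pki-server
systemctl restart pki-tomcatd@pki-tomcat.service

Restarting the instance only after the packages are installed keeps the downtime inside the planned maintenance window.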
function::switch_file | function::switch_file Name function::switch_file - switch to the output file Synopsis Arguments None Description This function sends a signal to the stapio process, commanding it to rotate to the output file when output is sent to file(s). | [
"switch_file()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-switch-file |
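As a rough sketch of how this function can be driven from the command line (the interval, output path, and size limits below are arbitrary assumptions rather than part of the reference):

# Write trace output to rotating files, switching to the next file every 300 seconds.
# -S 100,5 caps each output file at 100 MB and keeps at most 5 files.
stap -o /var/log/stap/trace.log -S 100,5 -e 'probe timer.s(300) { switch_file() }'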
Appendix A. Revision History Revision History Revision 6.6.0-4 Thur May 25 2017 John Brier JDG-984: Removed references to unsupported mixed Client-Server and embedded cluster Revision 6.6.0-3 Wed 7 Sep 2016 Christian Huffman Updating for 6.6.1. Revision 6.6.0-2 Wed 24 Feb 2016 Christian Huffman BZ-1310604: Included note on updates to source cluster during rolling upgrades. Revision 6.6.0-1 Tue 16 Feb 2016 Christian Huffman Included chapter on Integration with the Spring Framework. Revision 6.6.0-0 Thu 7 Jan 2016 Christian Huffman Initial draft for 6.6.0. BZ-1296210: Added note on deprecation of MapReduce. BZ-1269542: Included information on @CacheEntryExpired. Updated versions.
Appendix A. Using Your Subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ Streams entries in the JBOSS INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. Revised on 2020-12-02 15:51:49 UTC
Chapter 23. Scheduling APIs | Chapter 23. Scheduling APIs 23.1. Scheduling APIs 23.1.1. PriorityClass [scheduling.k8s.io/v1] Description PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer. Type object 23.2. PriorityClass [scheduling.k8s.io/v1] Description PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer. Type object Required value 23.2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources description string description is an arbitrary string that usually provides guidelines on when this priority class should be used. globalDefault boolean globalDefault specifies whether this PriorityClass should be considered as the default priority for pods that do not have any priority class. Only one PriorityClass can be marked as globalDefault . However, if more than one PriorityClasses exists with their globalDefault field set to true, the smallest value of such global default PriorityClasses will be used as the default priority. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata preemptionPolicy string preemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. Possible enum values: - "Never" means that pod never preempts other pods with lower priority. - "PreemptLowerPriority" means that pod can preempt other pods with lower priority. value integer value represents the integer value of this priority class. This is the actual priority that pods receive when they have the name of this class in their pod spec. 23.2.2. API endpoints The following API endpoints are available: /apis/scheduling.k8s.io/v1/priorityclasses DELETE : delete collection of PriorityClass GET : list or watch objects of kind PriorityClass POST : create a PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses GET : watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. /apis/scheduling.k8s.io/v1/priorityclasses/{name} DELETE : delete a PriorityClass GET : read the specified PriorityClass PATCH : partially update the specified PriorityClass PUT : replace the specified PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} GET : watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 23.2.2.1. /apis/scheduling.k8s.io/v1/priorityclasses Table 23.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PriorityClass Table 23.2. 
Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 23.3. Body parameters Parameter Type Description body DeleteOptions schema Table 23.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PriorityClass Table 23.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 23.6. HTTP responses HTTP code Reponse body 200 - OK PriorityClassList schema 401 - Unauthorized Empty HTTP method POST Description create a PriorityClass Table 23.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.8. Body parameters Parameter Type Description body PriorityClass schema Table 23.9. 
HTTP responses HTTP code Response body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 202 - Accepted PriorityClass schema 401 - Unauthorized Empty 23.2.2.2. /apis/scheduling.k8s.io/v1/watch/priorityclasses Table 23.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the `sendInitialEvents` option is set, we require the `resourceVersionMatch` option to also be set. The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. Table 23.11. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 23.2.2.3. /apis/scheduling.k8s.io/v1/priorityclasses/{name} Table 23.12. Global path parameters Parameter Type Description name string name of the PriorityClass Table 23.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PriorityClass Table 23.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. Zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 23.15. Body parameters Parameter Type Description body DeleteOptions schema Table 23.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PriorityClass Table 23.17. HTTP responses HTTP code Response body 200 - OK PriorityClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PriorityClass Table 23.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 23.19. Body parameters Parameter Type Description body Patch schema Table 23.20. HTTP responses HTTP code Response body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PriorityClass Table 23.21. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.22. Body parameters Parameter Type Description body PriorityClass schema Table 23.23. HTTP responses HTTP code Response body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty 23.2.2.4. /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} Table 23.24. Global path parameters Parameter Type Description name string name of the PriorityClass Table 23.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the `sendInitialEvents` option is set, we require the `resourceVersionMatch` option to also be set. The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. 
- `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 23.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/scheduling-apis-1 |
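The watch endpoints above can be exercised from a shell as a quick sanity check. The following is a minimal sketch, not part of the original reference: it assumes an oc client that is already logged in to the cluster with permission to list PriorityClass objects, and it uses the list-plus-watch pattern that the reference recommends over the deprecated /watch/ path. To stream changes with bookmark events enabled:

$ oc get --raw '/apis/scheduling.k8s.io/v1/priorityclasses?watch=true&allowWatchBookmarks=true'

The equivalent high-level client call is:

$ oc get priorityclasses --watch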
Chapter 13. VolumeSnapshotContent [snapshot.storage.k8s.io/v1] | Chapter 13. VolumeSnapshotContent [snapshot.storage.k8s.io/v1] Description VolumeSnapshotContent represents the actual "on-disk" snapshot object in the underlying storage system Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. status object status represents the current information of a snapshot. 13.1.1. .spec Description spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. Type object Required deletionPolicy driver source volumeSnapshotRef Property Type Description deletionPolicy string deletionPolicy determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. For dynamically provisioned snapshots, this field will automatically be filled in by the CSI snapshotter sidecar with the "DeletionPolicy" field defined in the corresponding VolumeSnapshotClass. For pre-existing snapshots, users MUST specify this field when creating the VolumeSnapshotContent object. Required. driver string driver is the name of the CSI driver used to create the physical snapshot on the underlying storage system. This MUST be the same as the name returned by the CSI GetPluginName() call for that driver. Required. source object source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. sourceVolumeMode string SourceVolumeMode is the mode of the volume whose snapshot is taken. Can be either "Filesystem" or "Block". If not specified, it indicates the source volume's mode is unknown. This field is immutable. This field is an alpha field. volumeSnapshotClassName string name of the VolumeSnapshotClass from which this snapshot was (or will be) created. Note that after provisioning, the VolumeSnapshotClass may be deleted or recreated with different set of values, and as such, should not be referenced post-snapshot creation. volumeSnapshotRef object volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. 
For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. 13.1.2. .spec.source Description source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. Type object Property Type Description snapshotHandle string snapshotHandle specifies the CSI "snapshot_id" of a pre-existing snapshot on the underlying storage system for which a Kubernetes object representation was (or should be) created. This field is immutable. volumeHandle string volumeHandle specifies the CSI "volume_id" of the volume from which a snapshot should be dynamically taken from. This field is immutable. 13.1.3. .spec.volumeSnapshotRef Description volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 13.1.4. .status Description status represents the current information of a snapshot. Type object Property Type Description creationTime integer creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. 
On Unix, the command date +%s%N returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. error object error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. readyToUse boolean readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. restoreSize integer restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. snapshotHandle string snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. 13.1.5. .status.error Description error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. Type object Property Type Description message string message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information. time string time is the timestamp when the error was encountered. 13.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents DELETE : delete collection of VolumeSnapshotContent GET : list objects of kind VolumeSnapshotContent POST : create a VolumeSnapshotContent /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name} DELETE : delete a VolumeSnapshotContent GET : read the specified VolumeSnapshotContent PATCH : partially update the specified VolumeSnapshotContent PUT : replace the specified VolumeSnapshotContent /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name}/status GET : read status of the specified VolumeSnapshotContent PATCH : partially update status of the specified VolumeSnapshotContent PUT : replace status of the specified VolumeSnapshotContent 13.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents HTTP method DELETE Description delete collection of VolumeSnapshotContent Table 13.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshotContent Table 13.2. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotContentList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshotContent Table 13.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.4. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.5. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 202 - Accepted VolumeSnapshotContent schema 401 - Unauthorized Empty 13.2.2. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name} Table 13.6. Global path parameters Parameter Type Description name string name of the VolumeSnapshotContent HTTP method DELETE Description delete a VolumeSnapshotContent Table 13.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshotContent Table 13.9. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshotContent Table 13.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.11. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshotContent Table 13.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.13. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.14. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 401 - Unauthorized Empty 13.2.3. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name}/status Table 13.15. Global path parameters Parameter Type Description name string name of the VolumeSnapshotContent HTTP method GET Description read status of the specified VolumeSnapshotContent Table 13.16. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeSnapshotContent Table 13.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.18. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeSnapshotContent Table 13.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.20. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.21. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/storage_apis/volumesnapshotcontent-snapshot-storage-k8s-io-v1 |
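As an illustration of how the spec fields above fit together, the following sketch registers a pre-existing CSI snapshot as a VolumeSnapshotContent object. It is an assumption-laden example rather than part of the original reference: the object name, namespace, CSI driver name, and snapshot handle are placeholders that must be replaced with values from your storage system. To create the object from a shell with a logged-in oc client:

$ cat <<'EOF' | oc apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: example-snapcontent          # illustrative name
spec:
  deletionPolicy: Retain             # keep the physical snapshot when the bound VolumeSnapshot is deleted
  driver: example.csi.vendor.com     # must match the CSI driver's GetPluginName() value
  source:
    snapshotHandle: snap-0123456789  # CSI snapshot_id of the pre-existing snapshot
  volumeSnapshotRef:
    name: example-snapshot           # VolumeSnapshot expected to bind to this content
    namespace: default
EOF

To check whether the snapshot is reported as usable, read the status field described above:

$ oc get volumesnapshotcontent example-snapcontent -o jsonpath='{.status.readyToUse}'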
Chapter 20. Using the administration events page | Chapter 20. Using the administration events page You can view administration event information in a single interface with Red Hat Advanced Cluster Security for Kubernetes (RHACS). You can use this interface to help you understand and interpret important event details. 20.1. Accessing the event logs in different domains By viewing the administration events page, you can access various event logs in different domains. Procedure In the RHACS platform, go to Platform Configuration Administration Events . 20.2. Administration events page overview The administration events page organizes information in the following groups: Domain : Categorizes events by the specific area or domain within RHACS in which the event occurred. This classification helps organize and understand the context of events. The following domains are included: Authentication General Image Scanning Integrations Resource type : Classifies events based on the resource or component type involved. The following resource types are included: API Token Cluster Image Node Notifier Level : Indicates the severity or importance of an event. The following levels are included: Error Warning Success Info Unknown Event last occurred at : Provides information about the timestamp and date when an event occurred. It helps track the timing of events, which is essential for diagnosing issues and understanding the sequence of actions or incidents. Count : Indicates the number of times a particular event occurred. This number is useful in assessing the frequency of an issue. An event that has occurred multiple times indicates a persistent issue that you need to fix. Each event also gives you an indication of what you need to do to fix the error. 20.3. Getting information about the events in a particular domain By viewing the details of an administration event, you get more information about the events in that particular domain. This enables you to better understand the context and details of the events. Procedure In the Administration Events page, click the domain to view its details. 20.4. Administration event details overview The administration event provides log information that describes the error or event. The logs provide the following information: Context of the event Steps to take to fix the error The administration event page organizes information in the following groups: Resource type : Classifies events based on the resource or component type involved. The following resource types are included: API Token Cluster Image Node Notifier Resource name : Specifies the name of the resource or component to which the event refers. It identifies the specific instance within the domain where the event occurred. Event type : Specifies the source of the event. Central generates log events that correspond to administration events created from log statements. Event ID : A unique identifier composed of alphanumeric characters that is assigned to each event. Event IDs can be useful in identifying, tracking, and managing events over time. Created at : Indicates the timestamp and date when the event was originally created or recorded. Last occurred at : Specifies the timestamp and date when the event last occurred. This tracks the timing of the event, which can be critical for diagnosing and fixing recurring issues. Count : Indicates the number of times a particular event occurred. This number is useful in assessing the frequency of an issue. 
An event that has occurred multiple times indicates a persistent issue that you need to fix. 20.5. Setting the expiration of the administration events By specifying the number of days, you can control when the administration events expire. This is important for managing your events and ensuring that you retain the information for the desired duration. Note By default, administration events are retained for 4 days. The retention period for these events is determined by the time of the last occurrence and not by the time of creation. This means that an event expires and is deleted only if the time of the last occurrence exceeds the specified retention period. Procedure In the RHACS portal, go to Platform Configuration System Configuration . You can configure the following setting for administration events: Administration events retention days : The number of days to retain your administration events. To change this value, click Edit , make your changes, and then click Save . | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/operating/using-the-administration-events-page |
Installation Guide | Installation Guide Red Hat Enterprise Linux 7 Installing Red Hat Enterprise Linux 7 on all architectures Jana Heves Red Hat Customer Content Services [email protected] Vladimir Slavik Red Hat Customer Content Services [email protected] Abstract This manual explains how to boot the Red Hat Enterprise Linux 7 installation program ( Anaconda ) and how to install Red Hat Enterprise Linux 7 on AMD64 and Intel 64 systems, 64-bit ARM systems, 64-bit IBM Power Systems servers, and IBM Z servers. It also covers advanced installation methods such as Kickstart installations, PXE installations, and installations over VNC. Finally, it describes common post-installation tasks and explains how to troubleshoot installation problems. Information on installing Red Hat Enterprise Linux Atomic Host can be found in the Red Hat Enterprise Linux Atomic Host Installation and Configuration Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/index |
Preface | Preface Preface | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_3scale_api_management_with_the_streams_for_apache_kafka_bridge/preface |
Chapter 7. Subscription Management | Chapter 7. Subscription Management 7.1. Subscription Manager String Updates In Red Hat Enterprise Linux 6.4, several strings have been renamed in Subscription Manager: subscribe was renamed to attach auto-subscribe was renamed to auto-attach unsubscribe was renamed to remove consumer was renamed to system or unit Testing Proxy Connection The Proxy Configuration dialog now allows users to test a connection to a proxy after entering a value. Subscribe or Unsubscribe Multiple Entitlements Subscription Manager is now able to subscribe (attach) or unsubscribe (remove) multiple entitlements using their serial numbers at once. Activation Keys Support in the GUI The Subscription Manager graphical user interface now allows you to register a system using an activation key . Activation keys allow users to preconfigure subscriptions for a system before it is registered. Registering Against External Servers Support for the selection of a remote server during the registration of a system is now supported in Subscription Manager. The Subscription Manager user interface provides an option to choose a URL of a server to register against, together with a port and a prefix, during the registration process. Additionally, when registering on the command line, the --serverurl option can be used to specify the server to register against. For more information about this feature, refer to the section Registering, Unregistering, and Reregistering a System in the Subscription Management Guide . Usability Changes in the GUI The Subscription Manager GUI has been enhanced with various changes based on customer feedback. 7.2. Subscription Asset Manager Installation on Offline Systems Subscription Asset Manager is now available as an ISO image and can be obtained from Content Delivery Network and Red Hat Network. It is therefore possible to install Subscription Asset Manager on offline systems. Reduced System Registration Workload It is now possible to configure a kickstart file with instructions to connect to Subscription Asset Manager and to automatically register and subscribe the system. This significantly reduces workloads of registering a large number of systems. Migration Red Hat Enterprise Linux 6.4 provides subscription-manager which includes the rhn-migrate-classic-to-rhsm script. The script has the --serverurl parameter that allows the user to point the system to an existing or on-premise installation of Subscription Asset Manager, and automatically migrates the system to use Subscription Asset Manager for its content. Note For more information about the 1.2 release of Subscription Asset Manager, refer to the Red Hat Subscription Asset Manager 1.2 Release Notes located at: https://access.redhat.com/site/documentation/en-US/Red_Hat_Subscription_Asset_Manager/1.2/html-single/Release_Notes/index.html | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_release_notes/subscription-management |
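A brief sketch of how these pieces combine on the command line; the server URL, organization, and activation key below are placeholders for your own Subscription Asset Manager deployment, not values taken from the original text. To register a system against an on-premise server using a preconfigured activation key:

# subscription-manager register --serverurl=sam.example.com:8443/sam/api --org=example_org --activationkey=example-key

To move a system from RHN Classic to the same server with the migration script mentioned above:

# rhn-migrate-classic-to-rhsm --serverurl=sam.example.com:8443/sam/api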
1.2. Before Setting Up GFS2 | 1.2. Before Setting Up GFS2 Before you install and set up GFS2, note the following key characteristics of your GFS2 file systems: GFS2 nodes Determine which nodes in the cluster will mount the GFS2 file systems. Number of file systems Determine how many GFS2 file systems to create initially. (More file systems can be added later.) File system name Determine a unique name for each file system. The name must be unique for all lock_dlm file systems over the cluster. Each file system name is required in the form of a parameter variable. For example, this book uses file system names mydata1 and mydata2 in some example procedures. Journals Determine the number of journals for your GFS2 file systems. One journal is required for each node that mounts a GFS2 file system. GFS2 allows you to add journals dynamically at a later point as additional servers mount a file system. For information on adding journals to a GFS2 file system, see Section 4.7, "Adding Journals to a File System" . Storage devices and partitions Determine the storage devices and partitions to be used for creating logical volumes (by means of CLVM) in the file systems. Note You may see performance problems with GFS2 when many create and delete operations are issued from more than one node in the same directory at the same time. If this causes performance problems in your system, you should localize file creation and deletions by a node to directories specific to that node as much as possible. For further recommendations on creating, using, and maintaining a GFS2 file system, see Chapter 2, GFS2 Configuration and Operational Considerations . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-ov-preconfig |
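To make the planning items above concrete, here is a minimal sketch that is not part of the original section: it assumes a cluster named mycluster and an existing clustered logical volume at /dev/clustervg/mydata1, both of which are illustrative and must be adjusted to your environment. To create the file system with the lock_dlm protocol, the unique name mydata1, and one journal for each of two mounting nodes:

# mkfs.gfs2 -p lock_dlm -t mycluster:mydata1 -j 2 /dev/clustervg/mydata1

To add a journal later when a third node needs to mount the file system, run gfs2_jadd against the mounted file system (assumed here to be mounted at /mnt/mydata1):

# gfs2_jadd -j 1 /mnt/mydata1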
Chapter 5. Preparing Storage for Red Hat Virtualization | Chapter 5. Preparing Storage for Red Hat Virtualization You need to prepare storage to be used for storage domains in the new environment. A Red Hat Virtualization environment must have at least one data storage domain, but adding more is recommended. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains. You can use one of the following storage types: NFS iSCSI Fibre Channel (FCP) POSIX-compliant file system Local storage Red Hat Gluster Storage 5.1. Preparing NFS Storage Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Enterprise Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts. For information on setting up, configuring, mounting and exporting NFS, see Managing file systems for Red Hat Enterprise Linux 8. Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization. Prerequisites Install the NFS utils package. # dnf install nfs-utils -y To check the enabled versions: # cat /proc/fs/nfsd/versions Enable the following services: # systemctl enable nfs-server # systemctl enable rpcbind Procedure Create the group kvm : # groupadd kvm -g 36 Create the user vdsm in the group kvm : # useradd vdsm -u 36 -g kvm Create the storage directory and modify the access rights. Add the storage directory to /etc/exports with the relevant permissions. # vi /etc/exports # cat /etc/exports /storage *(rw) Restart the following services: # systemctl restart rpcbind # systemctl restart nfs-server To see which exports are available for a specific IP address: # exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world> Note If changes in /etc/exports have been made after starting the services, the exportfs -ra command can be used to reload the changes. After performing all the above stages, the exports directory should be ready and can be tested on a different host to check that it is usable. 5.2. Preparing iSCSI Storage Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time. For information on setting up and configuring iSCSI storage, see Configuring an iSCSI target in Managing storage devices for Red Hat Enterprise Linux 8. Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. 
This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } } 5.3. Preparing FCP Storage Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time. Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage. For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide . Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } } 5.4. Preparing POSIX-compliant File System Storage POSIX file system support allows you to mount file systems using the same mount options that you would normally use when mounting them manually from the command line. This functionality is intended to allow access to storage not exposed using NFS, iSCSI, or FCP. Any POSIX-compliant file system used as a storage domain in Red Hat Virtualization must be a clustered file system, such as Global File System 2 (GFS2), and must support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with Red Hat Virtualization. For information on setting up and configuring POSIX-compliant file system storage, see Red Hat Enterprise Linux Global File System 2 . 
Important Do not mount NFS storage by creating a POSIX-compliant file system storage domain. Always create an NFS storage domain instead. 5.5. Preparing local storage On Red Hat Virtualization Host (RHVH), local storage should always be defined on a file system that is separate from / (root). Use a separate logical volume or disk, to prevent possible loss of data during upgrades. Procedure for Red Hat Enterprise Linux hosts On the host, create the directory to be used for the local storage: # mkdir -p /data/images Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36): # chown 36:36 /data /data/images # chmod 0755 /data /data/images Procedure for Red Hat Virtualization Hosts Create the local storage on a logical volume: Create a local storage directory: # mkdir /data # lvcreate -L USDSIZE rhvh -n data # mkfs.ext4 /dev/mapper/rhvh-data # echo "/dev/mapper/rhvh-data /data ext4 defaults,discard 1 2" >> /etc/fstab # mount /data Mount the new local storage: # mount -a Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36): # chown 36:36 /data /rhvh-data # chmod 0755 /data /rhvh-data 5.6. Preparing Red Hat Gluster Storage For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support . 5.7. Customizing Multipath Configurations for SAN Vendors If your RHV environment is configured to use multipath connections with SANs, you can customize the multipath configuration settings to meet requirements specified by your storage vendor. These customizations can override both the default settings and settings that are specified in /etc/multipath.conf . To override the multipath settings, do not customize /etc/multipath.conf . Because VDSM owns /etc/multipath.conf , installing or upgrading VDSM or Red Hat Virtualization can overwrite this file including any customizations it contains. This overwriting can cause severe storage failures. Instead, you create a file in the /etc/multipath/conf.d directory that contains the settings you want to customize or override. VDSM executes the files in /etc/multipath/conf.d in alphabetical order. So, to control the order of execution, you begin the filename with a number that makes it come last. For example, /etc/multipath/conf.d/90-myfile.conf . To avoid causing severe storage failures, follow these guidelines: Do not modify /etc/multipath.conf . If the file contains user modifications, and the file is overwritten, it can cause unexpected storage problems. Do not override the user_friendly_names and find_multipaths settings. For details, see Recommended Settings for Multipath.conf . Avoid overriding the no_path_retry and polling_interval settings unless a storage vendor specifically requires you to do so. For details, see Recommended Settings for Multipath.conf . Warning Not following these guidelines can cause catastrophic storage errors. Prerequisites VDSM is configured to use the multipath module. To verify this, enter: Procedure Create a new configuration file in the /etc/multipath/conf.d directory. Copy the individual setting you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/<my_device>.conf . Remove any comment marks, edit the setting values, and save your changes. 
Apply the new configuration settings by entering: Note Do not restart the multipathd service. Doing so generates errors in the VDSM logs. Verification steps Test that the new configuration performs as expected on a non-production cluster in a variety of failure scenarios. For example, disable all of the storage connections. Enable one connection at a time and verify that doing so makes the storage domain reachable. Additional resources Recommended Settings for Multipath.conf Red Hat Enterprise Linux DM Multipath Configuring iSCSI Multipathing How do I customize /etc/multipath.conf on my RHVH hypervisors? What values must not change and why? 5.8. Recommended Settings for Multipath.conf Do not override the following settings: user_friendly_names no Device names must be consistent across all hypervisors. For example, /dev/mapper/{WWID} . The default value of this setting, no , prevents the assignment of arbitrary and inconsistent device names such as /dev/mapper/mpath{N} on various hypervisors, which can lead to unpredictable system behavior. Warning Do not change this setting to user_friendly_names yes . User-friendly names are likely to cause unpredictable system behavior or failures, and are not supported. find_multipaths no This setting controls whether RHVH tries to access devices through multipath only if more than one path is available. The current value, no , allows RHV to access devices through multipath even if only one path is available. Warning Do not override this setting. Avoid overriding the following settings unless required by the storage system vendor: no_path_retry 4 This setting controls the number of polling attempts to retry when no paths are available. Before RHV version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. RHV version 4.2 changed this value to 4 so when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the time all paths fail. For more details, see the commit that changed this setting . polling_interval 5 This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner. | [
"dnf install nfs-utils -y",
"cat /proc/fs/nfsd/versions",
"systemctl enable nfs-server systemctl enable rpcbind",
"groupadd kvm -g 36",
"useradd vdsm -u 36 -g kvm",
"mkdir /storage chmod 0755 /storage chown 36:36 /storage/",
"vi /etc/exports cat /etc/exports /storage *(rw)",
"systemctl restart rpcbind systemctl restart nfs-server",
"exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world>",
"cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }",
"cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } }",
"mkdir -p /data/images",
"chown 36:36 /data /data/images chmod 0755 /data /data/images",
"mkdir /data lvcreate -L USDSIZE rhvh -n data mkfs.ext4 /dev/mapper/rhvh-data echo \"/dev/mapper/rhvh-data /data ext4 defaults,discard 1 2\" >> /etc/fstab mount /data",
"mount -a",
"chown 36:36 /data /rhvh-data chmod 0755 /data /rhvh-data",
"vdsm-tool is-configured --module multipath",
"systemctl reload multipathd"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/Preparing_Storage_for_RHV_SM_localDB_deploy |
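To complement the multipath guidance in the preceding entry, the following sketch shows what a vendor-specific drop-in file might look like; the vendor, product, and setting values are purely illustrative and should come from your storage vendor's documentation, overriding only what the vendor requires. Create the file with a name that sorts last so it takes effect after the defaults and is never overwritten by VDSM:

# cat /etc/multipath/conf.d/90-vendor.conf
devices {
    device {
        vendor "EXAMPLE"
        product "ARRAY"
        path_grouping_policy "group_by_prio"
    }
}

Then apply it without restarting the service, as described above:

# systemctl reload multipathd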
Data Grid downloads | Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/hot_rod_java_client_guide/rhdg-downloads_datagrid |
14.7. Additional Resources | 14.7. Additional Resources Refer to the following resources for more information. 14.7.1. Installed Documentation acl man page - Description of ACLs getfacl man page - Discusses how to get file access control lists setfacl man page - Explains how to set file access control lists star man page - Explains more about the star utility and its many options | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/access_control_lists-additional_resources |
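As a small worked example to accompany the man pages listed above (the user name and file path are illustrative), granting a named user read and write access and then inspecting the resulting ACL looks like this:

# setfacl -m u:jsmith:rw /shared/report.txt
# getfacl /shared/report.txt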
Chapter 6. ImageStreamLayers [image.openshift.io/v1] | Chapter 6. ImageStreamLayers [image.openshift.io/v1] Description ImageStreamLayers describes information about the layers referenced by images in this image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required blobs images 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources blobs object blobs is a map of blob name to metadata about the blob. blobs{} object ImageLayerData contains metadata about an image layer. images object images is a map between an image name and the names of the blobs and config that comprise the image. images{} object ImageBlobReferences describes the blob references within an image. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta 6.1.1. .blobs Description blobs is a map of blob name to metadata about the blob. Type object 6.1.2. .blobs{} Description ImageLayerData contains metadata about an image layer. Type object Required size mediaType Property Type Description mediaType string MediaType of the referenced object. size integer Size of the layer in bytes as defined by the underlying store. This field is optional if the necessary information about size is not available. 6.1.3. .images Description images is a map between an image name and the names of the blobs and config that comprise the image. Type object 6.1.4. .images{} Description ImageBlobReferences describes the blob references within an image. Type object Property Type Description config string config, if set, is the blob that contains the image config. Some images do not have separate config blobs and this field will be set to nil if so. imageMissing boolean imageMissing is true if the image is referenced by the image stream but the image object has been deleted from the API by an administrator. When this field is set, layers and config fields may be empty and callers that depend on the image metadata should consider the image to be unavailable for download or viewing. layers array (string) layers is the list of blobs that compose this image, from base layer to top layer. All layers referenced by this array will be defined in the blobs map. Some images may have zero layers. manifests array (string) manifests is the list of other image names that this image points to. For a single architecture image, it is empty. For a multi-arch image, it consists of the digests of single architecture images, such images shouldn't have layers nor config. 6.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/layers GET : read layers of the specified ImageStream 6.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/layers Table 6.1. Global path parameters Parameter Type Description name string name of the ImageStreamLayers namespace string object name and auth scope, such as for teams and projects Table 6.2. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read layers of the specified ImageStream Table 6.3. HTTP responses HTTP code Response body 200 - OK ImageStreamLayers schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/image_apis/imagestreamlayers-image-openshift-io-v1
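Because the layers subresource is read-only, the simplest way to exercise the GET endpoint above is through the raw API path; a minimal sketch, where the namespace myproject and the image stream name myapp are placeholders:
oc get --raw /apis/image.openshift.io/v1/namespaces/myproject/imagestreams/myapp/layers
The response is the ImageStreamLayers object described in this chapter, containing the blobs map and the per-image blob references.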
12.5. Troubleshooting | 12.5. Troubleshooting 12.5.1. Marshalling Troubleshooting In Red Hat JBoss Data Grid, the marshalling layer, and JBoss Marshalling in particular, can produce errors when marshalling or unmarshalling a user object. The exception stack trace contains further information to help you debug the problem. Example 12.1. Exception Stack Trace Messages starting with in object and stack traces are read in the same way: the highest in object message is the innermost one and the outermost in object message is the lowest. The provided example indicates that a java.lang.Object instance within an org.infinispan.commands.write.PutKeyValueCommand instance cannot be serialized because java.lang.Object@b40ec4 is not serializable. However, if the DEBUG or TRACE logging levels are enabled, marshalling exceptions will contain toString() representations of objects in the stack trace. The following is an example that depicts such a scenario: Example 12.2. Exceptions with Logging Levels Enabled Displaying this level of information for unmarshalling exceptions is expensive in terms of resources. However, where possible, JBoss Data Grid displays class type information. The following example depicts such levels of information on display: Example 12.3. Unmarshalling Exceptions In the provided example, an IOException was thrown when an instance of the inner class org.infinispan.marshall.VersionAwareMarshallerTestUSD1 is unmarshalled. In a manner similar to marshalling exceptions, when DEBUG or TRACE logging levels are enabled, the class type's classloader information is provided. An example of this classloader information is as follows: Example 12.4. Classloader Information 12.5.2. Other Marshalling Related Issues Issues and exceptions related to Marshalling can also appear in different contexts, for example during the State transfer with EOFException . During a state transfer, if an EOFException is logged that states that the state receiver has Read past end of file , this can be dealt with depending on whether the state provider encounters an error when generating the state. For example, if the state provider is currently providing a state to a node, when another node requests a state, the state generator log can contain: Example 12.5. State Generator Log In logs, you can also spot exceptions that seem to be related to marshalling. However, the root cause of the exception can be different. The implication of this exception is that the state generator was unable to generate the transaction log, and hence the output it was writing to is now closed. In such a situation, the state receiver will often log an EOFException , displayed as follows, when failing to read the transaction log that was not written by the sender: Example 12.6. EOFException When this error occurs, the state receiver attempts the operation every few seconds until it is successful. In most cases, after the first attempt, the state generator has already finished processing the second node and is fully receptive to the state, as expected. | [
"java.io.NotSerializableException: java.lang.Object at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:857) at org.jboss.marshalling.AbstractMarshaller.writeObject(AbstractMarshaller.java:407) at org.infinispan.marshall.exts.ReplicableCommandExternalizer.writeObject(ReplicableCommandExternalizer.java:54) at org.infinispan.marshall.jboss.ConstantObjectTableUSDExternalizerAdapter.writeObject(ConstantObjectTable.java:267) at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:143) at org.jboss.marshalling.AbstractMarshaller.writeObject(AbstractMarshaller.java:407) at org.infinispan.marshall.jboss.JBossMarshaller.objectToObjectStream(JBossMarshaller.java:167) at org.infinispan.marshall.VersionAwareMarshaller.objectToBuffer(VersionAwareMarshaller.java:92) at org.infinispan.marshall.VersionAwareMarshaller.objectToByteBuffer(VersionAwareMarshaller.java:170) at org.infinispan.marshall.VersionAwareMarshallerTest.testNestedNonSerializable(VersionAwareMarshallerTest.java:415) Caused by: an exception which occurred: in object java.lang.Object@b40ec4 in object org.infinispan.commands.write.PutKeyValueCommand@df661da7 ... Removed 22 stack frames",
"java.io.NotSerializableException: java.lang.Object Caused by: an exception which occurred: in object java.lang.Object@b40ec4 -> toString = java.lang.Object@b40ec4 in object org.infinispan.commands.write.PutKeyValueCommand@df661da7 -> toString = PutKeyValueCommand{key=k, value=java.lang.Object@b40ec4, putIfAbsent=false, lifespanMillis=0, maxIdleTimeMillis=0}",
"java.io.IOException: Injected failue! at org.infinispan.marshall.VersionAwareMarshallerTestUSD1.readExternal(VersionAwareMarshallerTest.java:426) at org.jboss.marshalling.river.RiverUnmarshaller.doReadNewObject(RiverUnmarshaller.java:1172) at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:273) at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:210) at org.jboss.marshalling.AbstractUnmarshaller.readObject(AbstractUnmarshaller.java:85) at org.infinispan.marshall.jboss.JBossMarshaller.objectFromObjectStream(JBossMarshaller.java:210) at org.infinispan.marshall.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:104) at org.infinispan.marshall.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:177) at org.infinispan.marshall.VersionAwareMarshallerTest.testErrorUnmarshalling(VersionAwareMarshallerTest.java:431) Caused by: an exception which occurred: in object of type org.infinispan.marshall.VersionAwareMarshallerTestUSD1",
"java.io.IOException: Injected failue! Caused by: an exception which occurred: in object of type org.infinispan.marshall.VersionAwareMarshallerTestUSD1 -> classloader hierarchy: -> type classloader = sun.misc.LauncherUSDAppClassLoader@198dfaf ->...file:/opt/eclipse/configuration/org.eclipse.osgi/bundles/285/1/.cp/eclipse-testng.jar ->...file:/opt/eclipse/configuration/org.eclipse.osgi/bundles/285/1/.cp/lib/testng-jdk15.jar ->...file:/home/galder/jboss/infinispan/code/trunk/core/target/test-classes/ ->...file:/home/galder/jboss/infinispan/code/trunk/core/target/classes/ ->...file:/home/galder/.m2/repository/org/testng/testng/5.9/testng-5.9-jdk15.jar ->...file:/home/galder/.m2/repository/net/jcip/jcip-annotations/1.0/jcip-annotations-1.0.jar ->...file:/home/galder/.m2/repository/org/easymock/easymockclassextension/2.4/easymockclassextension-2.4.jar ->...file:/home/galder/.m2/repository/org/easymock/easymock/2.4/easymock-2.4.jar ->...file:/home/galder/.m2/repository/cglib/cglib-nodep/2.1_3/cglib-nodep-2.1_3.jar ->...file:/home/galder/.m2/repository/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar ->...file:/home/galder/.m2/repository/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar ->...file:/home/galder/.m2/repository/javax/activation/activation/1.1/activation-1.1.jar ->...file:/home/galder/.m2/repository/jgroups/jgroups/2.8.0.CR1/jgroups-2.8.0.CR1.jar ->...file:/home/galder/.m2/repository/org/jboss/javaee/jboss-transaction-api/1.0.1.GA/jboss-transaction-api-1.0.1.GA.jar ->...file:/home/galder/.m2/repository/org/jboss/marshalling/river/1.2.0.CR4-SNAPSHOT/river-1.2.0.CR4-SNAPSHOT.jar ->...file:/home/galder/.m2/repository/org/jboss/marshalling/marshalling-api/1.2.0.CR4-SNAPSHOT/marshalling-api-1.2.0.CR4-SNAPSHOT.jar ->...file:/home/galder/.m2/repository/org/jboss/jboss-common-core/2.2.14.GA/jboss-common-core-2.2.14.GA.jar ->...file:/home/galder/.m2/repository/org/jboss/logging/jboss-logging-spi/2.0.5.GA/jboss-logging-spi-2.0.5.GA.jar ->...file:/home/galder/.m2/repository/log4j/log4j/1.2.14/log4j-1.2.14.jar ->...file:/home/galder/.m2/repository/com/thoughtworks/xstream/xstream/1.2/xstream-1.2.jar ->...file:/home/galder/.m2/repository/xpp3/xpp3_min/1.1.3.4.O/xpp3_min-1.1.3.4.O.jar ->...file:/home/galder/.m2/repository/com/sun/xml/bind/jaxb-impl/2.1.3/jaxb-impl-2.1.3.jar -> parent classloader = sun.misc.LauncherUSDExtClassLoader@1858610 ->...file:/usr/java/jdk1.5.0_19/jre/lib/ext/localedata.jar ->...file:/usr/java/jdk1.5.0_19/jre/lib/ext/sunpkcs11.jar ->...file:/usr/java/jdk1.5.0_19/jre/lib/ext/sunjce_provider.jar ->...file:/usr/java/jdk1.5.0_19/jre/lib/ext/dnsns.jar ... Removed 22 stack frames",
"2010-12-09 10:26:21,533 20267 ERROR [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (STREAMING_STATE_TRANSFER-sender-1,Infinispan-Cluster,NodeJ-2368:) Caught while responding to state transfer request org.infinispan.statetransfer.StateTransferException: java.util.concurrent.TimeoutException: Could not obtain exclusive processing lock at org.infinispan.statetransfer.StateTransferManagerImpl.generateState(StateTransferManagerImpl.java:175) at org.infinispan.remoting.InboundInvocationHandlerImpl.generateState(InboundInvocationHandlerImpl.java:119) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.getState(JGroupsTransport.java:586) at org.jgroups.blocks.MessageDispatcherUSDProtocolAdapter.handleUpEvent(MessageDispatcher.java:691) at org.jgroups.blocks.MessageDispatcherUSDProtocolAdapter.up(MessageDispatcher.java:772) at org.jgroups.JChannel.up(JChannel.java:1465) at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:954) at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:478) at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFERUSDStateProviderHandler.process(STREAMING_STATE_TRANSFER.java:653) at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFERUSDStateProviderThreadSpawnerUSD1.run(STREAMING_STATE_TRANSFER.java:582) at java.util.concurrent.ThreadPoolExecutorUSDWorker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutorUSDWorker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:680) Caused by: java.util.concurrent.TimeoutException: Could not obtain exclusive processing lock at org.infinispan.remoting.transport.jgroups.JGroupsDistSync.acquireProcessingLock(JGroupsDistSync.java:71) at org.infinispan.statetransfer.StateTransferManagerImpl.generateTransactionLog(StateTransferManagerImpl.java:202) at org.infinispan.statetransfer.StateTransferManagerImpl.generateState(StateTransferManagerImpl.java:165) ... 12 more",
"2010-12-09 10:26:21,535 20269 TRACE [org.infinispan.marshall.VersionAwareMarshaller] (Incoming-2,Infinispan-Cluster,NodeI-38030:) Log exception reported java.io.EOFException: Read past end of file at org.jboss.marshalling.AbstractUnmarshaller.eofOnRead(AbstractUnmarshaller.java:184) at org.jboss.marshalling.AbstractUnmarshaller.readUnsignedByteDirect(AbstractUnmarshaller.java:319) at org.jboss.marshalling.AbstractUnmarshaller.readUnsignedByte(AbstractUnmarshaller.java:280) at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:207) at org.jboss.marshalling.AbstractUnmarshaller.readObject(AbstractUnmarshaller.java:85) at org.infinispan.marshall.jboss.GenericJBossMarshaller.objectFromObjectStream(GenericJBossMarshaller.java:175) at org.infinispan.marshall.VersionAwareMarshaller.objectFromObjectStream(VersionAwareMarshaller.java:184) at org.infinispan.statetransfer.StateTransferManagerImpl.processCommitLog(StateTransferManagerImpl.java:228) at org.infinispan.statetransfer.StateTransferManagerImpl.applyTransactionLog(StateTransferManagerImpl.java:250) at org.infinispan.statetransfer.StateTransferManagerImpl.applyState(StateTransferManagerImpl.java:320) at org.infinispan.remoting.InboundInvocationHandlerImpl.applyState(InboundInvocationHandlerImpl.java:102) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.setState(JGroupsTransport.java:603)"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/sect-troubleshooting |
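The DEBUG and TRACE output referred to above must be enabled for the marshalling categories before it appears in the logs. As a sketch for library mode, assuming the application is configured through a Log4j properties file (the file location and appender setup are outside the scope of this section), add a line such as:
log4j.logger.org.infinispan.marshall=TRACE
The category matches the org.infinispan.marshall classes visible in the stack traces above; server deployments would instead raise the equivalent logger level in their own logging configuration.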
4.195. nfs-utils | 4.195. nfs-utils 4.195.1. RHSA-2011:1534 - Low: nfs-utils security, bug fix, and enhancement update Updated nfs-utils packages that fix two security issues, various bugs, and add one enhancement are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The nfs-utils package provides a daemon for the kernel Network File System (NFS) server, and related tools such as the mount.nfs, umount.nfs, and showmount programs. Security Fixes CVE-2011-2500 A flaw was found in the way nfs-utils performed IP based authentication of mount requests. In configurations where a directory was exported to a group of systems using a DNS wildcard or NIS (Network Information Service) netgroup, an attacker could possibly gain access to other directories exported to a specific host or subnet, bypassing intended access restrictions. CVE-2011-1749 It was found that the mount.nfs tool did not handle certain errors correctly when updating the mtab (mounted file systems table) file. A local attacker could use this flaw to corrupt the mtab file. Bug Fixes BZ# 702273 The function responsible for parsing the /proc/mounts file was not able to handle single quote characters (') in the path name of a mount point entry if the path name contained whitespaces. As a consequence, an NFS-exported file system with such a mount point could not be unmounted. The parsing routine has been modified to parse the entries in the /proc/mounts file properly. All NFS file systems can be now unmounted as expected. BZ# 744657 On an IPv6-ready network, an NFS share could be mounted on the same location twice if one mount failed over from IPv6 to IPv4. This update prevents the failover to IPv4 under such circumstances. BZ# 732673 Prior to this update, NFS IPv6 unmounting failed. This happened because the umount command failed to find the respective mount address in the /proc/mounts file as it was expecting the mount address to be in brackets; however, the mount command saves the addresses without brackets. With this update, the brackets are stripped during the unmount process and the unmount process succeeds. BZ# 723780 Prior to this update, the system returned a misleading error message when an NFS mount failed due to TCP Wrappers constrictions on the server. With this update, the system returns the "mount.nfs: access denied by server while mounting" error message. BZ# 723438 The showmount command caused the rpc.mountd daemon to terminate unexpectedly with a segmentation fault. This happened because showmount requested a list of clients that have performed an NFS mount recently from the mount link list with an RPC (Remote Procedure Call) message sent to the daemon. However, the mount link list was not initialized correctly. With this update, the mount link list is initialized correctly and the problem no longer occurs. BZ# 731693 Mounting failed if no NFS version ("nfsvers") was defined. Also, the system returned no error message when the NFS version was specified incorrectly. With this update, the system returns the following error in such cases: "mount.nfs: invalid mount option was specified." BZ# 726112 The "showmount -e" command returned only the first client that imported a directory. This occurred due to an incorrect filtering of group names of clients. 
This bug has been fixed and the command returns all hosts that import the directory. BZ# 697359 The nfs-utils manual pages did not contain a description of the "-n" command-line option. This update adds the information to the rpc.svcgssd(8) man page. BZ# 720479 Due to an incorrect library order at link time, building nfs-utils from the source package resulted in a non-functional rpc.svcgssd daemon. This update reorders libgssglue in the spec file and the daemon works as expected in this scenario. BZ# 747400 Prior to this update, the rpcdebug tool run with the "pnfs" flag failed over to "nfs". This update adds the pNFS and FSCache debugging option and the problem no longer occurs. BZ# 729001 The debuginfo file for the rpcdebug binary was missing in the debuginfo package because the spec file defined the installation of the rpcdebug tool with the "-s" parameter. The parameter caused the binary to be stripped of debugging information on installation. With this update, the spec file was modified and the debuginfo file is now available in the debuginfo package. BZ# 692702 The rpc.idmapd daemon occasionally failed to start because the /var/lib/nfs/rpc_pipefs/ directory was not mounted on the daemon startup. With this update, the startup script checks if the directory is mounted. Enhancement BZ# 715078 This update adds details about exports to specific IPv6 addresses or subnets to the exports(5) manual page. Users of nfs-utils are advised to upgrade to these updated packages, which contain backported patches to resolve these issues and add this enhancement. After installing this update, the nfs service will be restarted automatically. 4.195.2. RHBA-2012:0673 - nfs-utils bug fix update Updated nfs-utils packages that fix one bug are now available for Red Hat Enterprise Linux 6. The nfs-utils packages provide a daemon for the kernel Network File System (NFS) server and related tools, which provides better performance than the traditional Linux NFS server used by most users. These packages also contain the mount.nfs, umount.nfs, and showmount programs. Bug Fix BZ# 812450 Previously, the nfsd daemon was started before the mountd daemon. However, nfsd uses mountd to validate file handles. Therefore, if an existing NFS client sent requests to the NFS server when nfsd was started, the client received the ESTALE error, causing client applications to fail. This update changes the startup order of the daemons: the mountd daemon is now started first so that it can be correctly used by nfsd, and the client no longer receives the ESTALE error in this scenario. All users of nfs-utils are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/nfs-utils
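To confirm which nfs-utils build is installed and to exercise the showmount behaviour covered by BZ#723438 and BZ#726112, something like the following can be used (the server name is a placeholder):
rpm -q nfs-utils
showmount -e nfs-server.example.com
The second command queries the remote mountd for its export list; with the fixed packages the access list shown for each export is complete rather than cut off after the first entry, and, as described in the errata text above, rpc.mountd no longer crashes while servicing showmount requests.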
Chapter 6. Known issues | Chapter 6. Known issues Sometimes a Cryostat release might contain an issue or issues that Red Hat acknowledges and might fix at a later stage during the product's development. Review each known issue for its description and its resolution. Subviews such as /recordings/create are not accessible when visited directly in URL paths Description The Cryostat web console is not accessible from a web browser when you attempt to access the console through a URL path that includes a subview such as /topology/create-custom-target , /rules/create , or /recordings/create . For example, if you enter a URL path such as https:// my_cryostat_domain /recordings/create , the Cryostat console shows a blank page. Workaround Do not specify subviews in URL paths. For example, if you want to access https:// my_cryostat_domain /recordings/create , enter a URL path of https:// my_cryostat_domain /recordings in your web browser, and then click Create in the Cryostat web console. Active Recordings table fails to update when restarting a recording with the replace=always parameter Description If a client sends a request that includes the replace=always parameter to recreate an existing recording, the Active Recordings table in the Cryostat web console is not updated to show details of the new recording. Even though a Recording created notification is displayed, the new recording does not automatically appear in the Active Recordings table. Workaround Reload the Active Recordings page or navigate away from and then back to the current page. The Active Recordings table then correctly displays the new recording. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/release_notes_for_the_red_hat_build_of_cryostat_3.0/cryostat-2-4-known-issues_cryostat
7.229. wireless-tools | 7.229. wireless-tools 7.229.1. RHBA-2015:1386 - wireless-tools bug fix update Updated wireless-tools packages that fix one bug are now available for Red Hat Enterprise Linux 6. The wireless-tools packages contain tools used to manipulate the Wireless Extensions. The Wireless Extension is an interface that allows the user to set Wireless LAN specific parameters and to get statistics for wireless networking equipment. Bug Fix BZ# 857920 In an environment with a large number of wireless access points, using the wicd connection manager or the network-manager tool to connect to a wireless network previously failed. With this update, the buffer limit of the "iwlist scan" command has been adjusted not to exceed the maximum iwlist buffer amount, which prevents this problem from occurring. Users of wireless-tools are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-wireless-tools |
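The fix above concerns the buffer used for scan results, which can be exercised directly from the command line; a small sketch, where wlan0 is a placeholder interface name:
iwlist wlan0 scan | grep -c ESSID
In a dense environment the scan should now return every access point that was found, instead of failing once the result set outgrows the previous buffer limit; the grep simply counts the ESSID lines as a rough tally of access points.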
5.202. mt-st | 5.202. mt-st 5.202.1. RHBA-2012:1409 - mt-st bug fix update Updated mt-st packages that fix one bug are now available for Red Hat Enterprise Linux 6. The mt-st package contains the mt and st tape drive management programs. Mt (for magnetic tape drives) and st (for SCSI tape devices) can control rewinding, ejecting, skipping files and blocks and more. Bug Fix BZ# 820245 Prior to this update, the stinit init script did not support standard actions like "status" or "restart". As a consequence, an error code was returned. This update modifies the underlying code to use all standard actions. All users of mt-st are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/mt-st
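With the updated package the init script responds to the standard service actions, and the mt utility can be used to confirm that a tape device answers; the device node below is a placeholder:
service stinit status
mt -f /dev/st0 status
The first command now reports the script status instead of returning an error code, and the second prints the drive and tape status for the given SCSI tape device.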
Chapter 2. BrokerTemplateInstance [template.openshift.io/v1] | Chapter 2. BrokerTemplateInstance [template.openshift.io/v1] Description BrokerTemplateInstance holds the service broker-related state associated with a TemplateInstance. BrokerTemplateInstance is part of an experimental API. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object BrokerTemplateInstanceSpec describes the state of a BrokerTemplateInstance. 2.1.1. .spec Description BrokerTemplateInstanceSpec describes the state of a BrokerTemplateInstance. Type object Required templateInstance secret Property Type Description bindingIDs array (string) bindingids is a list of 'binding_id's provided during successive bind calls to the template service broker. secret ObjectReference secret is a reference to a Secret object residing in a namespace, containing the necessary template parameters. templateInstance ObjectReference templateinstance is a reference to a TemplateInstance object residing in a namespace. 2.2. API endpoints The following API endpoints are available: /apis/template.openshift.io/v1/brokertemplateinstances DELETE : delete collection of BrokerTemplateInstance GET : list or watch objects of kind BrokerTemplateInstance POST : create a BrokerTemplateInstance /apis/template.openshift.io/v1/watch/brokertemplateinstances GET : watch individual changes to a list of BrokerTemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/brokertemplateinstances/{name} DELETE : delete a BrokerTemplateInstance GET : read the specified BrokerTemplateInstance PATCH : partially update the specified BrokerTemplateInstance PUT : replace the specified BrokerTemplateInstance /apis/template.openshift.io/v1/watch/brokertemplateinstances/{name} GET : watch changes to an object of kind BrokerTemplateInstance. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/template.openshift.io/v1/brokertemplateinstances Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of BrokerTemplateInstance Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind BrokerTemplateInstance Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK BrokerTemplateInstanceList schema 401 - Unauthorized Empty HTTP method POST Description create a BrokerTemplateInstance Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body BrokerTemplateInstance schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK BrokerTemplateInstance schema 201 - Created BrokerTemplateInstance schema 202 - Accepted BrokerTemplateInstance schema 401 - Unauthorized Empty 2.2.2. /apis/template.openshift.io/v1/watch/brokertemplateinstances Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of BrokerTemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/template.openshift.io/v1/brokertemplateinstances/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the BrokerTemplateInstance Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a BrokerTemplateInstance Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BrokerTemplateInstance Table 2.17. HTTP responses HTTP code Reponse body 200 - OK BrokerTemplateInstance schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BrokerTemplateInstance Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Reponse body 200 - OK BrokerTemplateInstance schema 201 - Created BrokerTemplateInstance schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BrokerTemplateInstance Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body BrokerTemplateInstance schema Table 2.23. HTTP responses HTTP code Reponse body 200 - OK BrokerTemplateInstance schema 201 - Created BrokerTemplateInstance schema 401 - Unauthorized Empty 2.2.4. /apis/template.openshift.io/v1/watch/brokertemplateinstances/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the BrokerTemplateInstance Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind BrokerTemplateInstance. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/template_apis/brokertemplateinstance-template-openshift-io-v1 |
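Because BrokerTemplateInstance is cluster-scoped (the paths above carry no namespace segment), it can be listed either with the usual client verbs or through the raw API path; a minimal sketch, assuming a sufficiently privileged account:
oc get brokertemplateinstances
oc get --raw /apis/template.openshift.io/v1/brokertemplateinstances
Both calls return the BrokerTemplateInstanceList described above; individual objects can then be read, patched, or deleted through the per-name endpoints in section 2.2.3.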
Part V. Troubleshoot | Part V. Troubleshoot | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/troubleshoot |
6.3. Starting ricci | 6.3. Starting ricci In order to create and distribute cluster configuration files on the nodes of the cluster, the ricci service must be running on each node. Before starting ricci , you should ensure that you have configured your system as follows: The IP ports on your cluster nodes should be enabled for ricci . For information on enabling IP ports on cluster nodes, see Section 3.3.1, "Enabling IP Ports on Cluster Nodes" . The ricci service is installed on all nodes in the cluster and assigned a ricci password, as described in Section 3.13, "Considerations for ricci " . After ricci has been installed and configured on each node, start the ricci service on each node: | [
"service ricci start Starting ricci: [ OK ]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-start-ricci-ccs-ca |
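The procedure above assumes that the ricci password and IP ports have already been configured. A minimal sketch of those preparation steps on a Red Hat Enterprise Linux 6 node follows; it assumes the default ricci TCP port 11111 and the default iptables filter table, and the password value you choose is your own:

# Set the password that luci and ccs use to authenticate to ricci
passwd ricci
# Make ricci start automatically at boot
chkconfig ricci on
# Open the ricci agent port and persist the rule
iptables -I INPUT -m state --state NEW -p tcp --dport 11111 -j ACCEPT
service iptables save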
Chapter 5. Hardening Infrastructure and Virtualization This section contains component-specific advice and information. 5.1. Vulnerability Awareness Your operating procedures should have a plan to learn about new vulnerabilities and security updates. Hardware and software vendors typically announce the existence of vulnerabilities, and could offer workarounds and patches to address these. Red Hat Product Security maintains sites to help you stay aware of security updates: https://www.redhat.com/mailman/listinfo/rhsa-announce https://www.redhat.com/wapps/ugc/protected/notif.html https://access.redhat.com/security/security-updates/#/?q=openstack&p=1&sort=portal_publication_date%20desc&rows=10&portal_advisory_type=Security%20Advisory&documentKind=PortalProduct Note In addition to tracking updates, you will need to ensure your processes and deployments are designed in a way that can accommodate the installation of regular security updates. For kernel updates, this would require rebooting the Compute and management nodes. Instance security updates should also be strongly considered when designing these processes, and hosted glance images should also be periodically updated to ensure that freshly created instances get the latest updates. 5.2. Network Time Protocol You need to ensure that systems within your Red Hat OpenStack Platform cluster have accurate and consistent timestamps between systems. Red Hat OpenStack Platform on RHEL8 supports Chrony for time management. For more information, see Using the Chrony suite to configure NTP . 5.2.1. Why consistent time is important Consistent time throughout your organization is important for both operational and security needs: Identifying a security event Consistent timekeeping helps you correlate timestamps for events on affected systems so that you can understand the sequence of events. Authentication and security systems Security systems can be sensitive to time skew, for example: A Kerberos-based authentication system might refuse to authenticate clients that are affected by seconds of clock skew. Transport layer security (TLS) certificates depend on a valid source of time. A client to server TLS connection fails if the difference between client and server system times exceeds the Valid From date range. Red Hat OpenStack Platform services Some core OpenStack services are especially dependent on accurate timekeeping, including High Availability (HA) and Ceph. 5.2.2. NTP design Network time protocol (NTP) is organized in a hierarchical design. Each layer is called a stratum. At the top of the hierarchy are stratum 0 devices such as atomic clocks. In the NTP hierarchy, stratum 0 devices provide reference for publicly available stratum 1 and stratum 2 NTP time servers. Do not connect your data center clients directly to publicly available NTP stratum 1 or 2 servers. The number of direct connections would put unnecessary strain on the public NTP resources. Instead, allocate a dedicated time server in your data center, and connect the clients to that dedicated server. Configure instances to receive time from your dedicated time servers, not the host on which they reside. Note Service containers running within the Red Hat OpenStack Platform environment still receive time from the host on which they reside. 5.2.3. Configuring NTP in Red Hat OpenStack Platform Configure NTP on the undercloud and overcloud nodes using heat.
To configure the undercloud with NTP, use the undercloud_ntp_servers parameter in undercloud.conf before you run the openstack undercloud install command. For undercloud minions, use the minion_ntp_servers parameter. For more information, see Director Configuration Parameters . To configure the overcloud with NTP, use the following parameters as an example: For more information on network timekeeping parameters, see Time Parameters in the Overcloud Parameters guide. 5.3. Compute This section describes security considerations for Compute (nova). 5.3.1. Hypervisors in OpenStack When you evaluate a hypervisor platform, consider the supportability of the hardware on which the hypervisor will run. Additionally, consider the additional features available in the hardware and how those features are supported by the hypervisor you chose as part of the OpenStack deployment. To that end, hypervisors each have their own hardware compatibility lists (HCLs). When selecting compatible hardware, it is important to know in advance which hardware-based virtualization technologies are important from a security perspective. 5.3.1.1. Hypervisor versus bare metal It is important to recognize the difference between using Linux containers or bare metal systems versus using a hypervisor like KVM. Specifically, the focus of this security guide is largely based on having a hypervisor and virtualization platform. However, should your implementation require the use of a bare metal or containerized environment, you must pay attention to the particular differences in regard to deployment of that environment. For bare metal, make sure the node has been properly sanitized of data prior to re-provisioning and decommissioning. In addition, before reusing a node, you must provide assurances that the hardware has not been tampered with or otherwise compromised. For more information, see https://docs.openstack.org/ironic/queens/admin/cleaning.html 5.3.1.2. Hypervisor memory optimization Certain hypervisors use memory optimization techniques that overcommit memory to guest virtual machines. This is a useful feature that allows you to deploy very dense compute clusters. One approach to this technique is through deduplication or sharing of memory pages: When two virtual machines have identical data in memory, there are advantages to having them reference the same memory. Typically this is performed through Copy-On-Write (COW) mechanisms, such as kernel same-page merging (KSM). These mechanisms are vulnerable to attack: Memory deduplication systems are vulnerable to side-channel attacks. In academic studies, attackers were able to identify software packages and versions running on neighboring virtual machines as well as software downloads and other sensitive information through analyzing memory access times on the attacker VM. Consequently, one VM can infer something about the state of another, which might not be appropriate for multi-project environments where not all projects are trusted or share the same levels of trust. More importantly, row-hammer type attacks have been demonstrated against KSM to enact cross-VM modification of executable memory. This means that a hostile instance can gain code-execution access to other instances on the same Compute host. Deployers should disable KSM if they require strong project separation (as with public clouds and some private clouds): To disable KSM, refer to Deactivating KSM . 5.3.2. Virtualization 5.3.2.1.
Physical Hardware (PCI Passthrough) PCI passthrough allows an instance to have direct access to a piece of hardware on the node. For example, this could be used to allow instances to access video cards or GPUs offering the compute unified device architecture (CUDA) for high performance computation. This feature carries two types of security risks: direct memory access and hardware infection. Direct memory access (DMA) is a feature that permits certain hardware devices to access arbitrary physical memory addresses in the host computer. Often video cards have this capability. However, an instance should not be given arbitrary physical memory access because this would give it full view of both the host system and other instances running on the same node. Hardware vendors use an input/output memory management unit (IOMMU) to manage DMA access in these situations. You should confirm that the hypervisor is configured to use this hardware feature. A hardware infection occurs when an instance makes a malicious modification to the firmware or some other part of a device. As this device is used by other instances or the host OS, the malicious code can spread into those systems. The end result is that one instance can run code outside of its security zone. This is a significant breach as it is harder to reset the state of physical hardware than virtual hardware, and can lead to additional exposure such as access to the management network. Due to the risk and complexities associated with PCI passthrough, it should be disabled by default. If enabled for a specific need, you will need to have appropriate processes in place to help ensure the hardware is clean before reuse. 5.3.2.2. Virtual Hardware (QEMU) When running a virtual machine, virtual hardware is a software layer that provides the hardware interface for the virtual machine. Instances use this functionality to provide network, storage, video, and other devices that might be needed. With this in mind, most instances in your environment will exclusively use virtual hardware, with a minority that will require direct hardware access. It is a good idea to only provision the hardware required. For example, it is unnecessary to provision a CD drive if you do not need it. Confirm that your iptables have the default policy configured to filter network traffic, and consider examining the existing rule set to understand each rule and determine if the policy needs to be expanded upon. Mandatory access controls limit the impact of an attempted attack by restricting the privileges of the QEMU process to only what is needed. On Red Hat OpenStack Platform, SELinux is configured to run each QEMU process under a separate security context. SELinux policies have been pre-configured for Red Hat OpenStack Platform services. OpenStack's SELinux policies intend to help protect hypervisor hosts and virtual machines against two primary threat vectors: Hypervisor threats - A compromised application running within a virtual machine attacks the hypervisor to access underlying resources. For example, when a virtual machine is able to access the hypervisor OS, physical devices, or other applications. This threat vector represents considerable risk as a compromise on a hypervisor can infect the physical hardware as well as expose other virtual machines and network segments. Virtual Machine (multi-project) threats - A compromised application running within a VM attacks the hypervisor to access or control another virtual machine and its resources.
This is a threat vector unique to virtualization and represents considerable risk as a multitude of virtual machine file images could be compromised due to vulnerability in a single application. This virtual network attack is a major concern as the administrative techniques for protecting real networks do not directly apply to the virtual environment. Each KVM-based virtual machine is a process which is labeled by SELinux, effectively establishing a security boundary around each virtual machine. This security boundary is monitored and enforced by the Linux kernel, restricting the virtual machine's access to resources outside of its boundary, such as host machine data files or other VMs. Red Hat's SELinux-based isolation is provided regardless of the guest operating system running inside the virtual machine. Linux or Windows VMs can be used. 5.3.2.3. Labels and Categories KVM-based virtual machine instances are labelled with their own SELinux data type, known as svirt_image_t . Kernel level protections prevent unauthorized system processes, such as malware, from manipulating the virtual machine image files on disk. When virtual machines are powered off, images are stored as svirt_image_t as shown below: The svirt_image_t label uniquely identifies image files on disk, allowing for the SELinux policy to restrict access. When a KVM-based compute image is powered on, SELinux appends a random numerical identifier to the image. SELinux is capable of assigning numeric identifiers to a maximum of 524,288 virtual machines per hypervisor node, however most OpenStack deployments are highly unlikely to encounter this limitation. This example shows the SELinux category identifier: 5.3.2.4. SELinux users and roles SELinux manages user roles. These can be viewed through the -Z flag, or with the semanage command. On the hypervisor, only administrators should be able to access the system, and should have an appropriate context around both the administrative users and any other users that are on the system. 5.3.2.5. Containerized services Certain services, such as nova, glance, and keystone, now run within containers. This approach helps improve your security posture by making it easier to apply updates to services. Running each service in its own container also improves isolation between the services that coexist on the same bare metal. This can be helpful in reducing the attack surface should any one service be vulnerable to attack, by preventing easy access to adjacent services. Note Any paths on the host machine that are mounted into the containers can be used as mount points to transfer data between container and host, if they are configured as ro/rw. If you intend to update any configuration files, there are certain administration practices to consider, given that containerized services are ephemeral: Do not update any configuration file you might find on the physical node's host operating system, for example, /etc/cinder/cinder.conf . This is because the containerized service does not reference this file. Do not update the configuration file running within the container. This is because any changes are lost once you restart the container. Instead, if you need to add any changes to containerized services, you will need to update the configuration file that is used to seed the container. These files are generated during the initial deployment, by puppet, and contain sensitive data important to the running of the cloud, and should be treated accordingly. 
These are stored within /var/lib/config-data/puppet-generated/ . For example: keystone: /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf cinder: /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf nova: /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf Any changes made to these files will be applied once the container is restarted. 5.3.3. Hardening Compute Deployments One of the main security concerns with any OpenStack deployment is the security and controls around sensitive files, such as the /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf file. This configuration file contains many sensitive options including configuration details and service passwords. All such sensitive files should be given strict file level permissions, and monitored for changes through file integrity monitoring (FIM) tools, such as AIDE. These utilities will take a hash of the target file in a known good state, and then periodically take a new hash of the file and compare it to the known good hash. An alert can be created if it was found to have been modified unexpectedly. The permissions of a file can be examined by moving into the directory the file is contained in and running the ls -lh command. This will show the permissions, owner, and group that have access to the file, as well as other information such as the last time the file was modified and when it was created. The /var/lib/nova directory holds information about the instances on a given Compute node. This directory should be considered sensitive, with strictly enforced file permissions. In addition, it should be backed up regularly as it contains information and metadata for the instances associated with that host. If your deployment does not require full virtual machine backups, consider excluding the /var/lib/nova/instances directory as it will be as large as the combined space of each instance running on that node. If your deployment does require full VM backups, you will need to ensure this directory is backed up successfully. Note Data stored in the storage subsystem (for example, Ceph) being used for Block Storage (cinder) volumes should also be considered sensitive, as full virtual machine images can be retrieved from the storage subsystem if network or logical access allows this, potentially bypassing OpenStack controls. 5.3.4. Mitigating hardware vulnerabilities OpenStack runs on physical server hardware, which inherently presents its own security challenges. This chapter presents approaches to mitigating hardware-based threats and vulnerabilities. 5.3.4.1. Hardening PCI passthrough PCI passthrough allows you to give an instance direct access to certain physical hardware installed on the host. This can arise for a number of Network Function Virtualization (NFV) use cases. However, there are some security practices to consider: If using PCI passthrough, consider deploying hardware that supports interrupt remapping. Otherwise, you would need to enable the allow_unsafe_interrupts setting, which might leave the Compute node vulnerable to interrupt injection attacks from a malicious instance. For more information, see the Networking Guide: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/networking_guide/#review-the-allow_unsafe_interrupts-setting 5.3.4.2. Security harden management consoles Many server vendors include a separate management console that enable a remote session to your server. 
Consider reviewing the practices prescribed by the vendor to security harden this point of access. 5.3.4.3. Firmware updates Physical servers use complex firmware to enable and operate server hardware and lights-out management cards, which can have their own security vulnerabilities, potentially allowing system access and interruption. To address these, hardware vendors will issue firmware updates, which are installed separately from operating system updates. You will need an operational security process that retrieves, tests, and implements these updates on a regular schedule, noting that firmware updates often require a reboot of physical hosts to become effective. 5.4. Block Storage OpenStack Block Storage (cinder) is a service that provides software (services and libraries) to self-service manage persistent block-level storage devices. This creates on-demand access to Block Storage resources for use with Compute (nova) instances. This creates software-defined storage through abstraction by virtualizing pools of block storage to a variety of back-end storage devices which can be either software implementations or traditional hardware storage products. The primary functions of this is to manage the creation, attachment, and detachment of the block devices. The consumer requires no knowledge of the type of back-end storage equipment or where it is located. Compute instances store and retrieve block storage using industry-standard storage protocols such as iSCSI, ATA over Ethernet, or Fibre-Channel. These resources are managed and configured using OpenStack native standard HTTP RESTful API. 5.4.1. Volume Wiping There are multiple ways to wipe a block storage device. The traditional approach is to set the lvm_type to thin, and then use the volume_clear parameter. Alternatively, if the volume encryption feature is used, then volume wiping is not necessary if the volume encryption key is deleted. Note Previously, lvm_type=default was used to signify a wipe. While this method still works, lvm_type=default is not recommended for setting secure delete. The volume_clear parameter can accept either zero or shred as arguments. zero will write a single pass of zeroes to the device. The shred operation will write three passes of predetermined bit patterns. 5.4.2. Hardening Block Storage This section contains practical advice to harden the security of OpenStack Block Storage. 5.4.2.1. Set user/group ownership of config files to root/cinder Configuration files contain critical parameters and information required for smooth functioning of the component. If an unprivileged user, either intentionally or accidentally, modifies or deletes any of the parameters or the file itself then it would cause severe availability issues resulting in a denial of service to the other end users. Thus user ownership of such critical configuration files must be set to root and group ownership must be set to cinder . Check that the user and group ownership of these config files is set to root and cinder respectively, with these commands: 5.4.2.2. Set strict permissions for configuration files Check that the permissions for the following files are set to 640 or stricter. 5.4.2.3. Use keystone for authentication In /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf , check that the value of auth_strategy under the [DEFAULT] section is set to keystone and not noauth . 5.4.2.4. 
Enable TLS for authentication In /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf , check that the value of www_authenticate_uri under the [keystone_authtoken] section is set to an Identity API endpoint that starts with https:// , and the value of the parameter insecure also under [keystone_authtoken] is set to False . 5.4.2.5. Ensure Block Storage uses TLS to communicate with Compute In cinder.conf , check that the value of glance_api_servers under the [DEFAULT] section is set to a value that starts with https:// , and the value of the parameter glance_api_insecure is set to False . 5.4.2.6. Ensure NAS devices used for NFS are operating in a hardened environment The Block Storage service (cinder) supports an NFS driver that works differently than a traditional block storage driver. The NFS driver does not actually allow an instance to access a storage device at the block level. Instead, files are created on an NFS share and mapped to instances, which emulates a block device. The Block Storage service supports secure configuration for such files by controlling the file permissions when cinder volumes are created. Cinder configuration can also control whether file operations are run as the root user or the current Red Hat OpenStack Platform process user. Note There are several director heat parameters that control whether an NFS back end or a NetApp NFS Block Storage back end supports a NetApp feature called NAS secure: CinderNetappNasSecureFileOperations CinderNetappNasSecureFilePermissions CinderNasSecureFileOperations CinderNasSecureFilePermissions Red Hat does not recommend that you enable the feature, because it interferes with normal volume operations. Director disables the feature by default, and Red Hat OpenStack Platform does not support it. Note NAS devices integrated into the Block Storage service through the use of vendor-specific drivers should be considered sensitive and should be deployed in hardened, isolated environments. Any breach of these devices can lead to access or modification to instance data. Review whether the value of nas_secure_file_permissions in the [DEFAULT] section of the cinder.conf file is set to auto . When the nas_secure_file_permissions parameter is set to auto , during startup, the Block Storage service detects whether there are existing cinder volumes: If there are no existing volumes, cinder sets the option to True and uses secure file permissions. If cinder detects existing volumes, cinder sets the option to False and uses the insecure method of handling file permissions. Review whether the nas_secure_file_operations parameter in the [DEFAULT] section in the cinder.conf file is set to auto . When the nas_secure_file_operations parameter is set to auto , during startup, the Block Storage service detects whether there are existing cinder volumes: If there are no existing volumes, cinder sets the option to True and does not run as the root user. If cinder detects existing volumes, cinder sets the option to False and uses the current method of running operations as the root user. Note For new installations, the Block Storage service creates a marker file so that on subsequent restarts the Block Storage service remembers the original determination. 5.4.2.7. Set the max size for the body of a request If the maximum body size per request is not defined, the attacker can craft an arbitrary OSAPI request of large size, causing the service to crash and finally resulting in a Denial Of Service attack. 
Assigning the maximum value ensures that any malicious oversized request gets blocked ensuring continued availability of the service. Review whether osapi_max_request_body_size under the [DEFAULT] section in cinder.conf is set to 114688 , or if max_request_body_size under the [oslo_middleware] section in cinder.conf is set to 114688 . 5.4.2.8. Enable volume encryption Unencrypted volume data makes volume-hosting platforms especially high-value targets for attackers, as it allows the attacker to read the data for many different VMs. In addition, the physical storage medium could be stolen, remounted, and accessed from a different machine. Encrypting volume data and volume backups can help mitigate these risks and provides defense-in-depth to volume-hosting platforms. Block Storage (cinder) is able to encrypt volume data before it is written to disk, so consider enabling volume encryption, and using Barbican for private key storage. 5.5. Networking The OpenStack Networking service (neutron) enables the end-user or project to define and consume networking resources. OpenStack Networking provides a project-facing API for defining network connectivity and IP addressing for instances in the cloud, in addition to orchestrating the network configuration. With the transition to an API-centric networking service, cloud architects and administrators should take into consideration good practices to secure physical and virtual network infrastructure and services. OpenStack Networking was designed with a plug-in architecture that provides extensibility of the API through open source community or third-party services. As you evaluate your architectural design requirements, it is important to determine what features are available in OpenStack Networking core services, any additional services that are provided by third-party products, and what supplemental services are required to be implemented in the physical infrastructure. This section is a high-level overview of what processes and good practices should be considered when implementing OpenStack Networking. 5.5.1. Networking architecture OpenStack Networking is a standalone service that deploys multiple processes across a number of nodes. These processes interact with each other and other OpenStack services. The main process of the OpenStack Networking service is neutron-server, a Python daemon that exposes the OpenStack Networking API and passes project requests to a suite of plug-ins for additional processing. The OpenStack Networking components are: Neutron server ( neutron-server and neutron-*-plugin ) - The neutron-server service runs on the Controller node to service the Networking API and its extensions (or plugins). It also enforces the network model and IP addressing of each port. The neutron-server requires direct access to a persistent database. Agents have indirect access to the database through neutron-server, with which they communicate using AMQP (Advanced Message Queuing Protocol). Neutron database - The database is the centralized source of neutron information, with the API recording all transactions in the database. This allows multiple Neutron servers to share the same database cluster, which keeps them all in sync, and allows persistence of network configuration topology. Plugin agent ( neutron-*-agent ) - Runs on each compute node and networking node (together with the L3 and DHCP agents) to manage local virtual switch (vswitch) configuration. The enabled plug-in determines which agents are enabled. 
These services require message queue access and depending on the plug-in being used, access to external network controllers or SDN implementations. Some plug-ins, like OpenDaylight(ODL) and Open Virtual Network (OVN), do not require any python agents on compute nodes, requiring only an enabled Neutron plug-in for integration. DHCP agent ( neutron-dhcp-agent ) - Provides DHCP services to project networks. This agent is the same across all plug-ins and is responsible for maintaining DHCP configuration. The neutron-dhcp-agent requires message queue access. Optional depending on plug-in. Metadata agent ( neutron-metadata-agent , neutron-ns-metadata-proxy ) - Provides metadata services used to apply instance operating system configuration and user-supplied initialisation scripts ('userdata'). The implementation requires the neutron-ns-metadata-proxy running in the L3 or DHCP agent namespace to intercept metadata API requests sent by cloud-init to be proxied to the metadata agent. L3 agent ( neutron-l3-agent ) - Provides L3/NAT forwarding for external network access of VMs on project networks. Requires message queue access. Optional depending on plug-in. Network provider services (SDN server/services) - Provides additional networking services to project networks. These SDN services might interact with neutron-server, neutron-plugin, and plugin-agents through communication channels such as REST APIs. The following diagram shows an architectural and networking flow diagram of the OpenStack Networking components: Note that this approach changes significantly when Distributed Virtual Routing (DVR) and Layer-3 High Availability (L3HA) are used. These modes change the security landscape of neutron, since L3HA implements VRRP between routers. The deployment needs to be correctly sized and hardened to help mitigate DoS attacks against routers, and local-network traffic between routers must be treated as sensitive, to help address the threat of VRRP spoofing. DVR moves networking components (such as routing) to the Compute nodes, while still requiring network nodes. As a result, the Compute nodes require access to and from public networks, increasing their exposure and requiring additional security consideration for customers, as they will need to make sure firewall rules and security model support this approach. 5.5.1.1. Neutron service placement on physical servers This section describes a standard architecture that includes a controller node, a network node, and a set of compute nodes for running instances. To establish network connectivity for physical servers, a typical neutron deployment has up to four distinct physical data center networks: Management network - Used for internal communication between OpenStack Components. The IP addresses on this network should be reachable only within the data center and is considered the Management Security zone. By default, the Management network role is performed by the Internal API network. Guest network(s) - Used for VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the OpenStack Networking plug-in in use and the network configuration choices of the virtual networks made by the project. This network is considered the Guest Security zone. External network - Used to provide VMs with Internet access in some deployment scenarios. The IP addresses on this network should be reachable by anyone on the Internet. This network is considered to be in the Public Security zone. 
This network is provided by the neutron External network(s). These neutron VLANs are hosted on the external bridge. They are not created by Red Hat OpenStack Platform director, but are created by neutron in post-deployment. Public API network - Exposes all OpenStack APIs, including the OpenStack Networking API, to projects. The IP addresses on this network should be reachable by anyone on the Internet. This might be the same network as the external network, as it is possible to create a subnet for the external network that uses IP allocation ranges smaller than the full range of IP addresses in an IP block. This network is considered to be in the Public Security zone. It is recommended you segment this traffic into separate zones. See the section for more information. 5.5.2. Use security zones It is recommended that you use the concept of security zones to keep critical systems separate from each other. In a practical sense, this means isolating network traffic using VLANs and firewall rules. This should be done with granular detail, and the result should be that only the services that need to connect to neutron are able to do so. In the following diagram, you can see that zones have been created to separate certain components: Dashboard: Accessible to public network and management network. Keystone: Accessible to management network. Compute node: Accessible to management network and Compute instances. Network node: Accessible to management network, Compute instances, and possibly public network depending upon neutron-plugin in use. SDN service node: Management services, Compute instances, and possibly public depending upon product used and configuration. . 5.5.3. Networking Services In the initial architectural phases of designing your OpenStack Network infrastructure it is important to ensure appropriate expertise is available to assist with the design of the physical networking infrastructure, to identify proper security controls and auditing mechanisms. OpenStack Networking adds a layer of virtualized network services which gives projects the capability to architect their own virtual networks. Currently, these virtualized services are not as mature as their traditional networking counterparts. Consider the current state of these virtualized services before adopting them as it dictates what controls you might have to implement at the virtualized and traditional network boundaries. 5.5.3.1. L2 isolation using VLANs and tunneling OpenStack Networking can employ two different mechanisms for traffic segregation on a per project/network combination: VLANs (IEEE 802.1Q tagging) or L2 tunnels using VXLAN or GRE encapsulation. The scope and scale of your OpenStack deployment determines which method you should use for traffic segregation or isolation. 5.5.3.2. VLANs VLANs are realized as packets on a specific physical network containing IEEE 802.1Q headers with a specific VLAN ID (VID) field value. VLAN networks sharing the same physical network are isolated from each other at L2, and can even have overlapping IP address spaces. Each distinct physical network supporting VLAN networks is treated as a separate VLAN trunk, with a distinct space of VID values. Valid VID values are 1 through 4094. VLAN configuration complexity depends on your OpenStack design requirements. To allow OpenStack Networking to more efficiently use VLANs, you must allocate a VLAN range (one for each project) and turn each Compute node physical switch port into a VLAN trunk port. 
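As a sketch of the VLAN range allocation just described, a director-based deployment typically passes the range through a heat environment file, and the equivalent ML2 plug-in option is network_vlan_ranges in ml2_conf.ini. The physical network name and VID range below are placeholder values for illustration, not recommendations from this guide:

parameter_defaults:
  # one physical network, with VIDs 1200-1600 reserved for project VLAN networks
  NeutronNetworkVLANRanges: 'datacentre:1200:1600'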
Note If you intend for your network to support more than 4094 projects, an L2 tunneling configuration is recommended over VLANs. 5.5.3.3. L2 tunneling Network tunneling encapsulates each project/network combination with a unique "tunnel-id" that is used to identify the network traffic belonging to that combination. The project's L2 network connectivity is independent of physical locality or underlying network design. By encapsulating traffic inside IP packets, that traffic can cross Layer-3 boundaries, removing the need for pre-configured VLANs and VLAN trunking. Tunneling adds a layer of obfuscation to network data traffic, reducing the visibility of individual project traffic from a monitoring point of view. OpenStack Networking currently supports both GRE and VXLAN encapsulation. The choice of technology to provide L2 isolation is dependent upon the scope and size of project networks that will be created in your deployment. 5.5.3.4. Network services The choice of project network isolation affects how the network security and control boundary is implemented for project services. The following additional network services are either available or currently under development to enhance the security posture of the OpenStack network architecture. 5.5.3.5. Access control lists Compute supports project network traffic access controls through use of the OpenStack Networking service. Security groups allow administrators and projects the ability to specify the type of traffic, and direction (ingress/egress) that is allowed to pass through a virtual interface port. Security group rules are stateful L2-L4 traffic filters. 5.5.4. L3 routing and NAT OpenStack Networking routers can connect multiple L2 networks, and can also provide a gateway that connects one or more private L2 networks to a shared external network, such as a public network for access to the Internet. The L3 router provides basic Network Address Translation (SNAT and DNAT) capabilities on gateway ports that uplink the router to external networks. This router SNATs (Source NAT) all egress traffic by default, and supports floating IPs, which create a static one-to-one bidirectional mapping from a public IP on the external network to a private IP on one of the other subnets attached to the router. Floating IPs (through DNAT) provide external inbound connectivity to instances, and can be moved from one instance to another. Consider using per-project L3 routing and Floating IPs for more granular connectivity of project instances. Special consideration should be given to instances connected to public networks or using Floating IPs. Usage of carefully considered security groups is recommended to filter access to only services which need to be exposed externally. 5.5.5. Quality of Service (QoS) By default, Quality of Service (QoS) policies and rules are managed by the cloud administrator, which results in projects being unable to create specific QoS rules, or to attach specific policies to ports. In some use cases, such as some telecommunications applications, the administrator might trust the projects and therefore let them create and attach their own policies to ports. This can be done by modifying the policy.json file. From Red Hat OpenStack Platform 12, neutron supports bandwidth-limiting QoS rules for both ingress and egress traffic.
This QoS rule is named QosBandwidthLimitRule and it accepts two non-negative integers measured in kilobits per second: max-kbps : bandwidth max-burst-kbps : burst buffer The QoSBandwidthLimitRule has been implemented in the neutron Open vSwitch, Linux bridge and SR-IOV drivers. However, for SR-IOV drivers, the max-burst-kbps value is not used, and is ignored if set. The QoS rule QosDscpMarkingRule was added in the Red Hat OpenStack Platform 10 (Newton) release. This rule marks the Differentiated Service Code Point (DSCP) value in the type of service header on IPv4 (RFC 2474) and traffic class header on IPv6 on all traffic leaving a virtual machine, where the rule is applied. This is a 6-bit header with 21 valid values that denote the drop priority of a packet as it crosses networks should it meet congestion. It can also be used by firewalls to match valid or invalid traffic against its access control list. 5.5.5.1. Load balancing The OpenStack Load-balancing service (Octavia) provides a load balancing-as-a-service (LBaaS) implementation for Red Hat OpenStack platform director installations. To achieve load balancing, Octavia supports enabling multiple provider drivers. The reference provider driver (Amphora provider driver) is an open-source, scalable, and highly available load balancing provider. It accomplishes its delivery of load balancing services by managing a fleet of virtual machines- collectively known as amphorae- which it spins up on demand. For more information about the Load-balancing service, see Load Balancing-as-a-Service (LBaaS) with Octavia in the Networking Guide. 5.5.6. Hardening the Networking Service This section discusses OpenStack Networking configuration good practices as they apply to project network security within your OpenStack deployment. 5.5.6.1. Restrict bind address of the API server: neutron-server To restrict the interface or IP address on which the OpenStack Networking API service binds a network socket for incoming client connections, specify the bind_host and bind_port in the /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf file: 5.5.6.2. Restrict DB and RPC communication of the OpenStack Networking services Various components of the OpenStack Networking services use either the messaging queue or database connections to communicate with other components in OpenStack Networking. Note It is recommended that you follow the guidelines provided in Section 13.3, "Queue authentication and access control" for all components which require RPC communication. 5.5.6.3. Project network services workflow OpenStack Networking provides users self-service configuration of network resources. It is important that cloud architects and operators evaluate their design use cases in providing users the ability to create, update, and destroy available network resources. 5.5.6.4. Networking resource policy engine A policy engine and its configuration file ( policy.json ) within OpenStack Networking provides a method to provide finer grained authorization of users on project networking methods and objects. The OpenStack Networking policy definitions affect network availability, network security and overall OpenStack security. Cloud architects and operators should carefully evaluate their policy towards user and project access to administration of network resources. Note It is important to review the default networking resource policy, as this policy can be modified to suit your security posture. 
If your deployment of OpenStack provides multiple external access points into different security zones it is important that you limit the project's ability to attach multiple vNICs to multiple external access points - this would bridge these security zones and could lead to unforeseen security compromise. You can help mitigate this risk by using the host aggregates functionality provided by Compute, or by splitting the project instances into multiple projects with different virtual network configurations. For more information on host aggregates, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/instances_and_images_guide/ch-manage_instances#section-manage-host-aggregates . 5.5.6.5. Security groups A security group is a collection of security group rules. Security groups and their rules allow administrators and projects the ability to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a virtual interface port. When a virtual interface port is created in OpenStack Networking it is associated with a security group. Rules can be added to the default security group in order to change the behavior on a per-deployment basis. When using the Compute API to modify security groups, the updated security group applies to all virtual interface ports on an instance. This is due to the Compute security group APIs being instance-based rather than port-based, as found in neutron. 5.5.6.6. Quotas Quotas provide the ability to limit the number of network resources available to projects. You can enforce default quotas for all projects. To review the quota options, see /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf . OpenStack Networking also supports per-project quotas limit through a quota extension API. To enable per-project quotas, you must set the quota_driver option in /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf . For example: 5.5.6.7. Mitigate ARP spoofing OpenStack Networking has a built-in feature to help mitigate the threat of ARP spoofing for instances. This should not be disabled unless careful consideration is given to the resulting risks. 5.5.6.8. Set the user/group ownership of config files to root/neutron Configuration files contain critical parameters and information required for smooth functioning of the component. If an unprivileged user, either intentionally or accidentally modifies or deletes any of the parameters or the file itself then it would cause severe availability issues causing a denial of service to the other end users. Thus user ownership of such critical configuration files must be set to root and group ownership must be set to neutron. Ensure the user and group ownership of the following files is set to root and neutron respectively. Note that the exact file path might vary for containerized services: 5.5.6.9. Set Strict Permissions for Configuration Files Check that the permissions for the following files are set to 640 or stricter. Note that the exact file path might vary for containerized services: 5.5.6.10. Use Keystone for Authentication In /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf check that the value of auth_strategy under the [DEFAULT] section is set to keystone and not noauth or noauth2 . 5.5.6.10.1. 
Use a Secure Protocol for Authentication In /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf check that the value of www_authenticate_uri under the [keystone_authtoken] section is set to an Identity API endpoint that starts with https:// , and the value of the parameter insecure also under [keystone_authtoken] is set to False . 5.5.6.10.2. Enable TLS on Neutron API Server In /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf , ensure the parameter use_ssl under the [DEFAULT] section is set to True . | [
"parameter_defaults: TimeZone: 'US/Central' NtpServer: ['ntpserver01.example.com']",
"system_u:object_r:svirt_image_t:SystemLow image1 system_u:object_r:svirt_image_t:SystemLow image2 system_u:object_r:svirt_image_t:SystemLow image3 system_u:object_r:svirt_image_t:SystemLow image4",
"system_u:object_r:svirt_image_t:s0:c87,c520 image1 system_u:object_r:svirt_image_t:s0:419,c172 image2",
"stat -L -c \"%U %G\" /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf | egrep \"root cinder\" stat -L -c \"%U %G\" /var/lib/config-data/puppet-generated/cinder/etc/cinder/api-paste.ini | egrep \"root cinder\" stat -L -c \"%U %G\" /var/lib/config-data/puppet-generated/cinder/etc/cinder/policy.json | egrep \"root cinder\" stat -L -c \"%U %G\" /var/lib/config-data/puppet-generated/cinder/etc/cinder/rootwrap.conf | egrep \"root cinder\"",
"stat -L -c \"%a\" /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf stat -L -c \"%a\" /var/lib/config-data/puppet-generated/cinder/etc/cinder/api-paste.ini stat -L -c \"%a\" /var/lib/config-data/puppet-generated/cinder/etc/cinder/policy.json stat -L -c \"%a\" /var/lib/config-data/puppet-generated/cinder/etc/cinder/rootwrap.conf",
"Address to bind the API server bind_host = IP ADDRESS OF SERVER Port the bind the API server to bind_port = 9696",
"quota_driver = neutron.db.quota_db.DbQuotaDriver",
"stat -L -c \"%U %G\" /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf | egrep \"root neutron\" stat -L -c \"%U %G\" /var/lib/config-data/puppet-generated/neutron/etc/neutron/api-paste.ini | egrep \"root neutron\" stat -L -c \"%U %G\" /var/lib/config-data/puppet-generated/neutron/etc/neutron/policy.json | egrep \"root neutron\" stat -L -c \"%U %G\" /var/lib/config-data/puppet-generated/neutron/etc/neutron/rootwrap.conf | egrep \"root neutron\"",
"stat -L -c \"%a\" /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf stat -L -c \"%a\" /var/lib/config-data/puppet-generated/neutron/etc/neutron/api-paste.ini stat -L -c \"%a\" /var/lib/config-data/puppet-generated/neutron/etc/neutron/policy.json stat -L -c \"%a\" /var/lib/config-data/puppet-generated/neutron/etc/neutron/rootwrap.conf"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/security_and_hardening_guide/hardening_infrastructure_and_virtualization |
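A brief sketch of the file-permission review and AIDE-based integrity monitoring recommended in the hardening sections above. The database paths shown are the usual defaults on Red Hat Enterprise Linux; adjust them to match your aide.conf, and treat the monitored file as one example among the sensitive configuration files listed earlier:

# Review ownership and mode of a sensitive configuration file
ls -lh /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf
# Build the initial baseline of file hashes, then activate it
aide --init
cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
# Re-run periodically (for example from cron) to report unexpected changes
aide --check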
Replacing devices | Replacing devices Red Hat OpenShift Data Foundation 4.13 Instructions for safely replacing operational or failed devices Red Hat Storage Documentation Team Abstract This document explains how to safely replace storage devices for Red Hat OpenShift Data Foundation. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/replacing_devices/index |
Chapter 1. Introduction to Hammer | Chapter 1. Introduction to Hammer Hammer is a powerful command-line tool provided with Red Hat Satellite 6. You can use Hammer to configure and manage a Red Hat Satellite Server either through CLI commands or automation in shell scripts. Hammer also provides an interactive shell. Hammer compared to Satellite web UI Compared to navigating the web UI, using Hammer can result in much faster interaction with the Satellite Server, as common shell features such as environment variables and aliases are at your disposal. You can also incorporate Hammer commands into reusable scripts for automating tasks of various complexity. Output from Hammer commands can be redirected to other tools, which allows for integration with your existing environment. You can issue Hammer commands directly on the base operating system running Red Hat Satellite. Access to Satellite Server's base operating system is required to issue Hammer commands, which can limit the number of potential users compared to the web UI. Although the parity between Hammer and the web UI is almost complete, the web UI has development priority and can be ahead especially for newly introduced features. Hammer compared to Satellite API For many tasks, both Hammer and Satellite API are equally applicable. Hammer can be used as a human friendly interface to Satellite API, for example to test responses to API calls before applying them in a script (use the -d option to inspect API calls issued by Hammer, for example hammer -d organization list ). Changes in the API are automatically reflected in Hammer, while scripts using the API directly have to be updated manually. In the background, each Hammer command first establishes a binding to the API, then sends a request. This can have performance implications when executing a large number of Hammer commands in sequence. In contrast, a script communicating directly with the API establishes the binding only once. See the API Guide for more information. 1.1. Getting Help View the full list of hammer options and subcommands by executing: Use --help to inspect any subcommand, for example: You can search the help output using grep , or redirect it to a text viewer, for example: 1.2. Authentication A Satellite user must prove their identity to Red Hat Satellite when entering hammer commands. Hammer commands can be run manually or automatically. In either case, hammer requires Satellite credentials for authentication. There are three methods of hammer authentication: Hammer authentication session Storing credentials in the hammer configuration file Providing credentials with each hammer command The hammer configuration file method is recommended when running commands automatically. For example, running Satellite maintenance commands from a cron job. When running commands manually, Red Hat recommends using the hammer authentication session and providing credentials with each command. 1.2.1. Hammer Authentication Session The hammer authentication session is a cache that stores your credentials, and you have to provide them only once, at the beginning of the session. This method is suited to running several hammer commands in succession, for example a script containing hammer commands. In this scenario, you enter your Satellite credentials once, and the script runs as expected. By using the hammer authentication session, you avoid storing your credentials in the script itself and in the ~/.hammer/cli.modules.d/foreman.yml hammer configuration file. 
See the instructions on how to use the sessions: To enable sessions, add :use_sessions: true to the ~/.hammer/cli.modules.d/foreman.yml file: Note that if you enable sessions, credentials stored in the configuration file will be ignored. To start a session, enter the following command: You are prompted for your Satellite credentials, and logged in. You will not be prompted for the credentials again until your session expires. The default length of a session is 60 minutes. You can change the time to suit your preference. For example, to change it to 30 minutes, enter the following command: To see the current status of the session, enter the following command: To end the session, enter the following command: 1.2.2. Hammer Configuration File If you ran the Satellite installation with --foreman-initial-admin-username and --foreman-initial-admin-password options, credentials you entered are stored in the ~/.hammer/cli.modules.d/foreman.yml configuration file, and hammer does not prompt for your credentials. You can also add your credentials to the ~/.hammer/cli.modules.d/foreman.yml configuration file manually: Important Use only spaces for indentation in hammer configuration files. Do not use tabs for indentation in hammer configuration files. 1.2.3. Command Line If you do not have your Satellite credentials saved in the ~/.hammer/cli.modules.d/foreman.yml configuration file, hammer prompts you for them each time you enter a command. You can specify your credentials when executing a command as follows: Note Examples in this guide assume that you have saved credentials in the configuration file, or are using a hammer authentication session. 1.3. Using Standalone Hammer You can install hammer on a host running Red Hat Enterprise Linux 8 or Red Hat Enterprise Linux 7 that has no Satellite Server installed, and use it to connect the host to a remote Satellite. Prerequisites Ensure that you register the host to Satellite Server or Capsule Server. For more information, see Registering Hosts in Managing Hosts . Ensure that you synchronize the following repositories on Satellite Server or Capsule Server. For more information, see Synchronizing Repositories in Managing Content . On Red Hat Enterprise Linux 8: rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms satellite-utils-6.11-for-rhel-8-x86_64-rpms On Red Hat Enterprise Linux 7: rhel-7-server-rpms rhel-7-server-satellite-utils-6.11-rpms rhel-server-rhscl-7-rpms Procedure On a host, complete the following steps to install hammer : Enable the required repositories: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 7: If your host is running Red Hat Enterprise Linux 8, enable the Satellite Utils module: Install hammer : On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 7: Edit the :host: entry in the /etc/hammer/cli.modules.d/foreman.yml file to include the Satellite IP address or FQDN. 1.4. Setting a Default Organization and Location Many hammer commands are organization specific. You can set a default organization and location for hammer commands so that you do not have to specify them every time with the --organization and --location options. Specifying a default organization is useful when you mostly manage a single organization, as it makes your commands shorter. However, when you switch to a different organization, you must use hammer with the --organization option to specify it. 
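For illustration, the difference looks roughly like this once a default organization has been set by the procedure below; the organization name is a placeholder:

hammer host list --organization "Example Org"   # without a default, the option must be given on each call
hammer host list                                # with a default organization set, the option can be omitted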
Procedure To set a default organization and location, complete the following steps: To set a default organization, enter the following command: You can find the name of your organization with the hammer organization list command. Optional: To set a default location, enter the following command: You can find the name of your location with the hammer location list command. To verify the currently specified default settings, enter the following command: 1.5. Configuring Hammer The default location for global hammer configuration is: /etc/hammer/cli_config.yml for general hammer settings /etc/hammer/cli.modules.d/ for CLI module configuration files You can set user specific directives for hammer (in ~/.hammer/cli_config.yml ) as well as for CLI modules (in respective .yml files under ~/.hammer/cli.modules.d/ ). To see the order in which configuration files are loaded, as well as versions of loaded modules, use: Note Loading configuration for many CLI modules can slow down the execution of hammer commands. In such a case, consider disabling CLI modules that are not regularly used. Apart from saving credentials as described in Section 1.2, "Authentication" , you can set several other options in the ~/.hammer/ configuration directory. For example, you can change the default log level and set log rotation with the following directives in ~/.hammer/cli_config.yml . These directives affect only the current user and are not applied globally. Similarly, you can configure user interface settings. For example, set the number of entries displayed per request in the Hammer output by changing the following line: This setting is an equivalent of the --per-page Hammer option. 1.6. Configuring Hammer Logging You can set hammer to log debugging information for various Satellite components. You can set debug or normal configuration options for all Satellite components. Note After changing hammer's logging behavior, you must restart Satellite services. To set debug level for all components, use the following command: To set production level logging, use the following command: To list the currently recognized components, that you can set logging for: To list all available logging options: 1.7. Invoking the Hammer Shell You can issue hammer commands through the interactive shell. To invoke the shell, issue the following command: In the shell, you can enter sub-commands directly without typing "hammer", which can be useful for testing commands before using them in a script. To exit the shell, type exit or press Ctrl + D . 1.8. Generating Formatted Output You can modify the default formatting of the output of hammer commands to simplify the processing of this output by other command line tools and applications. For example, to list organizations in a CSV format with a custom separator (in this case a semicolon), use the following command: Output in CSV format is useful for example when you need to parse IDs and use them in a for loop. Several other formatting options are available with the --output option: Replace output_format with one of: table - generates output in the form of a human readable table (default). base - generates output in the form of key-value pairs. yaml - generates output in the YAML format. csv - generates output in the Comma Separated Values format. To define a custom separator, use the --csv and --csv-separator options instead. json - generates output in the JavaScript Object Notation format. silent - suppresses the output. 1.9. 
Hiding Header Output from Hammer Commands When you use any hammer command, you have the option of hiding headers from the output. Hiding the headers is useful if you want to pipe the output or use it in custom scripts. To hide the header output, add the --no-headers option to any hammer command. 1.10. Using JSON for Complex Parameters JSON is the preferred way to describe complex parameters. An example of JSON-formatted content appears below: 1.11. Troubleshooting with Hammer You can use the hammer ping command to check the status of core Satellite services. Together with the satellite-maintain service status command, this can help you to diagnose and troubleshoot Satellite issues. If all services are running as expected, the output looks as follows: | [
"USD hammer --help",
"USD hammer organization --help",
"USD hammer | less",
":foreman: :use_sessions: true",
"hammer auth login",
"hammer settings set --name idle_timeout --value 30 Setting [idle_timeout] updated to [30]",
"hammer auth status",
"hammer auth logout",
":foreman: :username: ' username ' :password: ' password '",
"USD hammer -u username -p password subcommands",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-utils-6.11-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-satellite-utils-6.11-rpms --enable=rhel-server-rhscl-7-rpms",
"dnf module enable satellite-utils:el8",
"dnf install rubygem-hammer_cli_katello",
"yum install tfm-rubygem-hammer_cli_katello",
":host: 'https:// satellite.example.com '",
"hammer defaults add --param-name organization --param-value \"Your_Organization\"",
"hammer defaults add --param-name location --param-value \"Your_Location\"",
"hammer defaults list",
"hammer -d --version",
":log_level: 'warning' :log_size: 5 #in MB",
":per_page: 30",
"satellite-maintain service restart",
"hammer admin logging --all --level-debug satellite-maintain service restart",
"hammer admin logging --all --level-production satellite-maintain service restart",
"hammer admin logging --list",
"hammer admin logging --help Usage: hammer admin logging [OPTIONS]",
"hammer shell",
"hammer --csv --csv-separator \";\" organization list",
"hammer --output output_format organization list",
"hammer compute-profile values create --compute-profile-id 22 --compute-resource-id 1 --compute-attributes= '{ \"cpus\": 2, \"corespersocket\": 2, \"memory_mb\": 4096, \"firmware\": \"efi\", \"resource_pool\": \"Resources\", \"cluster\": \"Example_Cluster\", \"guest_id\": \"rhel8\", \"path\": \"/Datacenters/EXAMPLE/vm/\", \"hardware_version\": \"Default\", \"memoryHotAddEnabled\": 0, \"cpuHotAddEnabled\": 0, \"add_cdrom\": 0, \"boot_order\": [ \"disk\", \"network\" ], \"scsi_controllers\":[ { \"type\": \"ParaVirtualSCSIController\", \"key\":1000 }, { \"type\": \"ParaVirtualSCSIController\", \"key\":1001 }it ] }'",
"hammer ping candlepin: Status: ok Server Response: Duration: 22ms candlepin_auth: Status: ok Server Response: Duration: 17ms pulp: Status: ok Server Response: Duration: 41ms pulp_auth: Status: ok Server Response: Duration: 23ms foreman_tasks: Status: ok Server Response: Duration: 33ms"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/hammer_cli_guide/chap-cli_guide-introduction_to_hammer |
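Building on the formatted-output and header options covered in this chapter, a common pattern is to parse IDs from CSV output and loop over them. The following sketch is only an illustration; it assumes saved credentials or an active hammer session, and the column layout depends on your Satellite version.
# iterate over organization IDs parsed from headerless CSV output
for id in $(hammer --no-headers --csv organization list | cut -d, -f1); do
  hammer organization info --id "$id"
done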
Chapter 2. Installing RHEL image builder | Chapter 2. Installing RHEL image builder Before using RHEL image builder, you must install it. 2.1. RHEL image builder system requirements The host that runs RHEL image builder must meet the following requirements: Table 2.1. RHEL image builder system requirements Parameter Minimal Required Value System type A dedicated host or virtual machine. Note that RHEL image builder is not supported in containers, including Red Hat Universal Base Images (UBI). Processor 2 cores Memory 4 GiB Disk space 20 GiB of free space in the /var/cache/ filesystem Access privileges root Network Internet connectivity to the Red Hat Content Delivery Network (CDN). Note If you do not have internet connectivity, use RHEL image builder in isolated networks. For that, you must override the default repositories to point to your local repositories so that you do not connect to the Red Hat Content Delivery Network (CDN). Ensure that you have your content mirrored internally or use Red Hat Satellite. Additional resources Configuring RHEL image builder repositories Provisioning to Satellite using a Red Hat image builder image 2.2. Installing RHEL image builder Install RHEL image builder to have access to all the osbuild-composer package functionalities. Prerequisites You are logged in to the RHEL host on which you want to install RHEL image builder. The host is subscribed to Red Hat Subscription Manager (RHSM) or Red Hat Satellite. You have enabled the BaseOS and AppStream repositories to be able to install the RHEL image builder packages. Procedure Install RHEL image builder and other necessary packages: osbuild-composer - A service to build customized RHEL operating system images. composer-cli - This package enables access to the CLI interface. cockpit-composer - This package enables access to the Web UI interface. The web console is installed as a dependency of the cockpit-composer package. Enable and start the RHEL image builder socket: If you want to use RHEL image builder in the web console, enable and start it. The osbuild-composer and cockpit services start automatically on first access. Load the shell configuration script so that the autocomplete feature for the composer-cli command starts working immediately without logging out and in: Verification Verify that the installation works by running composer-cli : Troubleshooting You can use the system journal to track RHEL image builder activities. Additionally, you can find the log messages in the file. To find the journal output for traceback, run the following commands: To show the local worker, such as osbuild-worker@.service, a template service that can start multiple service instances: To show the running services: | [
"dnf install osbuild-composer composer-cli cockpit-composer",
"systemctl enable --now osbuild-composer.socket",
"systemctl enable --now cockpit.socket",
"source /etc/bash_completion.d/composer-cli",
"composer-cli status show",
"journalctl | grep osbuild",
"journalctl -u osbuild-worker*",
"journalctl -u osbuild-composer.service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_a_customized_rhel_system_image/installing-composer_composing-a-customized-rhel-system-image |
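As a quick post-installation check that complements the verification and troubleshooting steps above, the following sketch confirms that the RHEL image builder socket is active, that the API responds, and then follows the composer log. It assumes the packages were installed as described in this chapter.
systemctl status osbuild-composer.socket     # should report the socket as active
composer-cli status show                     # verifies that the osbuild-composer API responds
journalctl -u osbuild-composer.service -f    # follow composer log messages while you work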
Chapter 41. JSLT Action | Chapter 41. JSLT Action Apply a JSLT query or transformation on JSON. 41.1. Configuration Options The following table summarizes the configuration options available for the jslt-action Kamelet: Property Name Description Type Default Example template * Template The inline template for JSLT Transformation string "file://template.json" Note Fields marked with an asterisk (*) are mandatory. 41.2. Dependencies At runtime, the jslt-action Kamelet relies upon the presence of the following dependencies: camel:jslt camel:kamelet 41.3. Usage This section describes how you can use the jslt-action . 41.3.1. Knative Action You can use the jslt-action Kamelet as an intermediate step in a Knative binding. jslt-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: {"foo" : "bar"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jslt-action properties: template: "file://template.json" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 41.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you are connected to. 41.3.1.2. Procedure for using the cluster CLI Save the jslt-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f jslt-action-binding.yaml 41.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step jslt-action -p "step-0.template=file://template.json" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. If the template points to a file that is not in the current directory, and if file:// or classpath:// is used, supply the transformation using the secret or the configmap. To view examples, see with secret and with configmap . For details about necessary traits, see Mount trait and JVM classpath trait . 41.3.2. Kafka Action You can use the jslt-action Kamelet as an intermediate step in a Kafka binding. jslt-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: {"foo" : "bar"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jslt-action properties: template: "file://template.json" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 41.3.2.1. Prerequisites Ensure that you have installed the AMQ Streams operator in your OpenShift cluster and create a topic named my-topic in the current namespace. Also, you must have "Red Hat Integration - Camel K" installed into the OpenShift cluster you are connected to. 41.3.2.2. Procedure for using the cluster CLI Save the jslt-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f jslt-action-binding.yaml 41.3.2.3. 
Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step jslt-action -p "step-0.template=file://template.json" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 41.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/blob/main/jslt-action.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: {\"foo\" : \"bar\"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jslt-action properties: template: \"file://template.json\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f jslt-action-binding.yaml",
"kamel bind timer-source?message=Hello --step jslt-action -p \"step-0.template=file://template.json\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jslt-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: {\"foo\" : \"bar\"} steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jslt-action properties: template: \"file://template.json\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f jslt-action-binding.yaml",
"kamel bind timer-source?message=Hello --step jslt-action -p \"step-0.template=file://template.json\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/jslt-action |
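To make the template handling above more concrete, the following sketch writes a minimal JSLT template and packages it in a ConfigMap so that it can be mounted for the binding. The template content and the ConfigMap name are illustrative assumptions only; wiring the ConfigMap into the integration is done through the Mount trait and the secret/configmap examples referenced in this chapter.
# a minimal JSLT template that reshapes {"foo": "bar"} into {"greeting": "bar"}
cat > template.json <<'EOF'
{
  "greeting": .foo
}
EOF
# package the template so it can be mounted into the integration (the name is an example)
oc create configmap jslt-template --from-file=template.json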
4.354. xorg-x11-drv-mga | 4.354. xorg-x11-drv-mga 4.354.1. RHBA-2011:1620 - xorg-x11-drv-mga bug fix and enhancement update Updated xorg-x11-drv-mga packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-mga packages provide a video driver for Matrox G-series chipsets for the X.Org implementation of the X Window System. The xorg-x11-drv-mga packages have been upgraded to upstream version 1.4.13, which provides a number of bug fixes over the previous version. (BZ# 713858 ) Bug Fixes BZ# 713388 Previously, the MGA driver rendered the image incorrectly on big-endian architectures, including PowerPC and 64-bit PowerPC. Consequently, the display showed altered colors. With this update, the colors are displayed correctly in the described scenario. BZ# 745080 Previously, the MGA driver caused a shift in the video screen. Consequently, the screen became corrupted and the windows were pushed around randomly. This update modifies the code so that the MGA driver no longer causes problems for the video screen. Enhancement BZ# 526104 This update adds support for ServerEngines Pilot III to the xorg-x11-drv-mga packages. All users of the Xorg x11 MGA driver are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/xorg-x11-drv-mga |
Chapter 8. Conditional policies in Red Hat Developer Hub | Chapter 8. Conditional policies in Red Hat Developer Hub The permission framework in Red Hat Developer Hub provides conditions, supported by the RBAC backend plugin ( backstage-plugin-rbac-backend ). The conditions work as content filters for the Developer Hub resources that are provided by the RBAC backend plugin. The RBAC backend API stores conditions assigned to roles in the database. When you request to access the frontend resources, the RBAC backend API searches for the corresponding conditions and delegates them to the appropriate plugin using its plugin ID. If you are assigned to multiple roles with different conditions, then the RBAC backend merges the conditions using the anyOf criteria. Conditional criteria A condition in Developer Hub is a simple condition with a rule and parameters. However, a condition can also contain a parameter or an array of parameters combined by conditional criteria. The supported conditional criteria includes: allOf : Ensures that all conditions within the array must be true for the combined condition to be satisfied. anyOf : Ensures that at least one of the conditions within the array must be true for the combined condition to be satisfied. not : Ensures that the condition within it must not be true for the combined condition to be satisfied. Conditional object The plugin specifies the parameters supported for conditions. You can access the conditional object schema from the RBAC API endpoint to understand how to construct a conditional JSON object, which is then used by the RBAC backend plugin API. A conditional object contains the following parameters: Table 8.1. Conditional object parameters Parameter Type Description result String Always has the value CONDITIONAL roleEntityRef String String entity reference to the RBAC role, such as role:default/dev pluginId String Corresponding plugin ID, such as catalog permissionMapping String array Array permission actions, such as ['read', 'update', 'delete'] resourceType String Resource type provided by the plugin, such as catalog-entity conditions JSON Condition JSON with parameters or array parameters joined by criteria Conditional policy aliases The RBAC backend plugin ( backstage-plugin-rbac-backend ) supports the use of aliases in conditional policy rule parameters. The conditional policy aliases are dynamically replaced with the corresponding values during policy evaluation. Each alias in conditional policy is prefixed with a USD sign indicating its special function. The supported conditional aliases include: USDcurrentUser : This alias is replaced with the user entity reference for the user who requests access to the resource. For example, if user Tom from the default namespace requests access, USDcurrentUser becomes user:default/tom . Example conditional policy object with USDcurrentUser alias { "result": "CONDITIONAL", "roleEntityRef": "role:default/developer", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["delete"], "conditions": { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["USDcurrentUser"] } } } USDownerRefs : This alias is replaced with ownership references, usually as an array that includes the user entity reference and the user's parent group entity reference. For example, for user Tom from team-a, USDownerRefs becomes ['user:default/tom', 'group:default/team-a'] . 
Example conditional policy object with USDownerRefs alias { "result": "CONDITIONAL", "roleEntityRef": "role:default/developer", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["delete"], "conditions": { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["USDownerRefs"] } } } 8.1. Conditional policies reference You can access API endpoints for conditional policies in Red Hat Developer Hub. For example, to retrieve the available conditional rules, which can help you define these policies, you can access the GET [api/plugins/condition-rules] endpoint. The api/plugins/condition-rules returns the condition parameters schemas, for example: [ { "pluginId": "catalog", "rules": [ { "name": "HAS_ANNOTATION", "description": "Allow entities with the specified annotation", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "annotation": { "type": "string", "description": "Name of the annotation to match on" }, "value": { "type": "string", "description": "Value of the annotation to match on" } }, "required": [ "annotation" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "HAS_LABEL", "description": "Allow entities with the specified label", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "label": { "type": "string", "description": "Name of the label to match on" } }, "required": [ "label" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "HAS_METADATA", "description": "Allow entities with the specified metadata subfield", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "key": { "type": "string", "description": "Property within the entities metadata to match on" }, "value": { "type": "string", "description": "Value of the given property to match on" } }, "required": [ "key" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "HAS_SPEC", "description": "Allow entities with the specified spec subfield", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "key": { "type": "string", "description": "Property within the entities spec to match on" }, "value": { "type": "string", "description": "Value of the given property to match on" } }, "required": [ "key" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "IS_ENTITY_KIND", "description": "Allow entities matching a specified kind", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "kinds": { "type": "array", "items": { "type": "string" }, "description": "List of kinds to match at least one of" } }, "required": [ "kinds" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "IS_ENTITY_OWNER", "description": "Allow entities owned by a specified claim", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "claims": { "type": "array", "items": { "type": "string" }, "description": "List of claims to match at least one on within ownedBy" } }, "required": [ "claims" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } } ] } ... <another plugin condition parameter schemas> ] The RBAC backend API constructs a condition JSON object based on the condition schema. 8.1.1. 
Examples of conditional policies In Red Hat Developer Hub, you can define conditional policies with or without criteria. You can use the following examples to define the conditions based on your use case: A condition without criteria Consider a condition without criteria displaying catalogs only if user is a member of the owner group. To add this condition, you can use the catalog plugin schema IS_ENTITY_OWNER as follows: Example condition without criteria { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } } In the example, the only conditional parameter used is claims , which contains a list of user or group entity references. You can apply the example condition to the RBAC REST API by adding additional parameters as follows: { "result": "CONDITIONAL", "roleEntityRef": "role:default/test", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["read"], "conditions": { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } } } A condition with criteria Consider a condition with criteria, which displays catalogs only if user is a member of owner group OR displays list of all catalog user groups. To add the criteria, you can add another rule as IS_ENTITY_KIND in the condition as follows: Example condition with criteria { "anyOf": [ { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } }, { "rule": "IS_ENTITY_KIND", "resourceType": "catalog-entity", "params": { "kinds": ["Group"] } } ] } Note Running conditions in parallel during creation is not supported. Therefore, consider defining nested conditional policies based on the available criteria. Example of nested conditions { "anyOf": [ { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } }, { "rule": "IS_ENTITY_KIND", "resourceType": "catalog-entity", "params": { "kinds": ["Group"] } } ], "not": { "rule": "IS_ENTITY_KIND", "resourceType": "catalog-entity", "params": { "kinds": ["Api"] } } } You can apply the example condition to the RBAC REST API by adding additional parameters as follows: { "result": "CONDITIONAL", "roleEntityRef": "role:default/test", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["read"], "conditions": { "anyOf": [ { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } }, { "rule": "IS_ENTITY_KIND", "resourceType": "catalog-entity", "params": { "kinds": ["Group"] } } ] } } The following examples can be used with Developer Hub plugins. These examples can help you determine how to define conditional policies: Conditional policy defined for Keycloak plugin { "result": "CONDITIONAL", "roleEntityRef": "role:default/developer", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["update", "delete"], "conditions": { "not": { "rule": "HAS_ANNOTATION", "resourceType": "catalog-entity", "params": { "annotation": "keycloak.org/realm", "value": "<YOUR_REALM>" } } } } The example of Keycloak plugin prevents users in the role:default/developer from updating or deleting users that are ingested into the catalog from the Keycloak plugin. Note In the example, the annotation keycloak.org/realm requires the value of <YOUR_REALM> . 
Conditional policy defined for Quay plugin { "result": "CONDITIONAL", "roleEntityRef": "role:default/developer", "pluginId": "scaffolder", "resourceType": "scaffolder-action", "permissionMapping": ["use"], "conditions": { "not": { "rule": "HAS_ACTION_ID", "resourceType": "scaffolder-action", "params": { "actionId": "quay:create-repository" } } } } The example of Quay plugin prevents the role role:default/developer from using the Quay scaffolder action. Note that permissionMapping contains use , signifying that scaffolder-action resource type permission does not have a permission policy. For more information about permissions in Red Hat Developer Hub, see Chapter 7, Permission policies reference . | [
"{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"delete\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"USDcurrentUser\"] } } }",
"{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"delete\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"USDownerRefs\"] } } }",
"[ { \"pluginId\": \"catalog\", \"rules\": [ { \"name\": \"HAS_ANNOTATION\", \"description\": \"Allow entities with the specified annotation\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"annotation\": { \"type\": \"string\", \"description\": \"Name of the annotation to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the annotation to match on\" } }, \"required\": [ \"annotation\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_LABEL\", \"description\": \"Allow entities with the specified label\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"label\": { \"type\": \"string\", \"description\": \"Name of the label to match on\" } }, \"required\": [ \"label\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_METADATA\", \"description\": \"Allow entities with the specified metadata subfield\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"key\": { \"type\": \"string\", \"description\": \"Property within the entities metadata to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the given property to match on\" } }, \"required\": [ \"key\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_SPEC\", \"description\": \"Allow entities with the specified spec subfield\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"key\": { \"type\": \"string\", \"description\": \"Property within the entities spec to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the given property to match on\" } }, \"required\": [ \"key\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"IS_ENTITY_KIND\", \"description\": \"Allow entities matching a specified kind\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"kinds\": { \"type\": \"array\", \"items\": { \"type\": \"string\" }, \"description\": \"List of kinds to match at least one of\" } }, \"required\": [ \"kinds\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"IS_ENTITY_OWNER\", \"description\": \"Allow entities owned by a specified claim\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"claims\": { \"type\": \"array\", \"items\": { \"type\": \"string\" }, \"description\": \"List of claims to match at least one on within ownedBy\" } }, \"required\": [ \"claims\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } } ] } ... <another plugin condition parameter schemas> ]",
"{ \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }",
"{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } } }",
"{ \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ] }",
"{ \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ], \"not\": { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Api\"] } } }",
"{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ] } }",
"{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"update\", \"delete\"], \"conditions\": { \"not\": { \"rule\": \"HAS_ANNOTATION\", \"resourceType\": \"catalog-entity\", \"params\": { \"annotation\": \"keycloak.org/realm\", \"value\": \"<YOUR_REALM>\" } } } }",
"{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"scaffolder\", \"resourceType\": \"scaffolder-action\", \"permissionMapping\": [\"use\"], \"conditions\": { \"not\": { \"rule\": \"HAS_ACTION_ID\", \"resourceType\": \"scaffolder-action\", \"params\": { \"actionId\": \"quay:create-repository\" } } } }"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/authorization/con-rbac-conditional-policies-rhdh_title-authorization |
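As a practical complement to the reference above, the documented condition-rules endpoint can be queried directly to discover which rules and parameters each plugin supports. The following sketch is only an illustration: the Developer Hub base URL and bearer token are placeholders, and the exact path prefix under which the RBAC backend exposes the endpoint can vary between installations, so verify it against your deployment.
# RHDH_URL and RHDH_TOKEN are placeholders for your Developer Hub URL and a valid token
curl -s -H "Authorization: Bearer $RHDH_TOKEN" "$RHDH_URL/api/plugins/condition-rules"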
Chapter 2. Upgrade requirements | Chapter 2. Upgrade requirements You must upgrade your custom resources to use API version v1beta2 before upgrading to AMQ Streams version 1.8. The v1beta2 API version for all custom resources was introduced with AMQ Streams 1.7. For AMQ Streams 1.8, v1alpha1 and v1beta1 API versions were removed from all AMQ Streams custom resources apart from KafkaTopic and KafkaUser . Upgrade of the custom resources to v1beta2 prepares AMQ Streams for a move to Kubernetes CRD v1 , which is required for Kubernetes v1.22. If you are upgrading from an AMQ Streams version prior to version 1.7: Upgrade to AMQ Streams 1.7 Convert the custom resources to v1beta2 Upgrade to AMQ Streams 1.8 See Deploying and upgrading AMQ Streams . 2.1. Upgrading custom resources to the v1beta2 version To support the upgrade of custom resources to v1beta2 , AMQ Streams provides an API conversion tool , which you can download from the AMQ Streams download site. You perform the custom resource upgrades in two steps. Step one: Convert the format of custom resources Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways: Converting the YAML files that describe the configuration for AMQ Streams custom resources Converting AMQ Streams custom resources directly in the cluster Alternatively, you can manually convert each custom resource into a format applicable to v1beta2 . Instructions for manually converting custom resources are included in the documentation. Step two: Upgrade CRDs to v1beta2 Using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually. For full instructions, see Upgrading AMQ Streams . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_amq_streams_1.8_on_openshift/features-upgrade-str |
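As an illustration of the two steps above, a conversion run with the downloaded API conversion tool might look like the following sketch. The file name is a placeholder, and the convert-file subcommand and its flags are assumptions based on the tool's typical usage; check bin/api-conversion.sh --help and the upgrade instructions for the exact syntax before running it.
# Step one: convert the YAML for a custom resource to a v1beta2-compatible format (file names are examples)
bin/api-conversion.sh convert-file --file my-kafka.yaml --output my-kafka-v1beta2.yaml
# Step two: set v1beta2 as the storage API version in the CRDs (this step cannot be done manually)
bin/api-conversion.sh crd-upgrade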
Appendix A. KDE Plasma Workspaces | Appendix A. KDE Plasma Workspaces As an alternative to the default GNOME desktop environment, Red Hat Enterprise Linux 7 provides version 4 of KDE Plasma Workspaces (previously known as K Desktop Environment) to match different work styles and preferences. Refer to the Red Hat Enterprise Linux 7 Installation Guide for instructions on setting KDE Plasma Workspaces as the default desktop during the installation process or on changing your current desktop environment to KDE Plasma Workspaces . For more information on KDE Plasma Workspaces , see its upstream websites, such as https://www.kde.org/ and https://docs.kde.org/ . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/kde-plasma-workspace |
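For reference, on an already installed Red Hat Enterprise Linux 7 system the KDE environment is typically added as sketched below; the exact environment group name should be confirmed with yum grouplist, and selecting KDE Plasma Workspaces as the session is then done from the login screen or as described in the Installation Guide referenced above.
yum grouplist                                  # confirm the exact name of the KDE environment group
yum groupinstall "KDE Plasma Workspaces"       # install the KDE desktop environment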
Chapter 3. Manually creating IAM for GCP | Chapter 3. Manually creating IAM for GCP In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster. 3.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. If you prefer not to store an administrator-level credential secret in the cluster kube-system project, you can choose one of the following options when installing OpenShift Container Platform: Use manual mode with GCP Workload Identity : You can use the CCO utility ( ccoctl ) to configure the cluster to use manual mode with GCP Workload Identity. When the CCO utility is used to configure the cluster for GCP Workload Identity, it signs service account tokens that provide short-term, limited-privilege security credentials to components. Note This credentials strategy is supported for only new OpenShift Container Platform clusters and must be configured during installation. You cannot reconfigure an existing cluster that uses a different credentials strategy to use this feature. Manage cloud credentials manually : You can set the credentialsMode parameter for the CCO to Manual to manage cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Remove the administrator-level credential secret after installing OpenShift Container Platform with mint mode : If you are using the CCO with the credentialsMode parameter set to Mint , you can remove or rotate the administrator-level credential after installing OpenShift Container Platform. Mint mode is the default configuration for the CCO. This option requires the presence of the administrator-level credential during an installation. The administrator-level credential is used during the installation to mint other credentials with some permissions granted. The original credential secret is not stored in the cluster permanently. Note Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked. Additional resources Using manual mode with GCP Workload Identity Rotating or removing cloud provider credentials For a detailed description of all available CCO credential modes and their supported platforms, see About the Cloud Credential Operator . 3.2. Manually create IAM The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. 
Procedure Change to the directory that contains the installation program and create the install-config.yaml file by running the following command: USD openshift-install create install-config --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 This line is added to set the credentialsMode parameter to Manual . Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use by running the following command: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on by running the following command: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 \ --credentials-requests \ --cloud=gcp This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component-secret> namespace: <component-namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-set: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects. 
To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command: USD grep "release.openshift.io/feature-set" * Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade From the directory that contains the installation program, proceed with your cluster creation: USD openshift-install create cluster --dir <installation_directory> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. Additional resources Updating a cluster using the web console Updating a cluster using the CLI 3.3. Mint mode Mint mode is the default Cloud Credential Operator (CCO) credentials mode for OpenShift Container Platform on platforms that support it. In this mode, the CCO uses the provided administrator-level cloud credential to run the cluster. Mint mode is supported for AWS and GCP. In mint mode, the admin credential is stored in the kube-system namespace and then used by the CCO to process the CredentialsRequest objects in the cluster and create users for each with specific permissions. The benefits of mint mode include: Each cluster component has only the permissions it requires Automatic, ongoing reconciliation for cloud credentials, including additional credentials or permissions that might be required for upgrades One drawback is that mint mode requires admin credential storage in a cluster kube-system secret. 3.4. Mint mode with removal or rotation of the administrator-level credential Currently, this mode is only supported on AWS and GCP. In this mode, a user installs OpenShift Container Platform with an administrator-level credential just like the normal mint mode. However, this process removes the administrator-level credential secret from the cluster post-installation. The administrator can have the Cloud Credential Operator make its own request for a read-only credential that allows it to verify if all CredentialsRequest objects have their required permissions; thus, the administrator-level credential is not required unless something needs to be changed. After the associated credential is removed, it can be deleted or deactivated on the underlying cloud, if desired. Note Prior to a non-z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked. The administrator-level credential is not stored in the cluster permanently. Following these steps still requires the administrator-level credential in the cluster for brief periods of time. It also requires manually reinstating the secret with administrator-level credentials for each upgrade. 3.5. Next steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on GCP with default options on installer-provisioned infrastructure Install a cluster with cloud customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure | [
"openshift-install create install-config --dir <installation_directory>",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=gcp",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component-secret> namespace: <component-namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"grep \"release.openshift.io/feature-set\" *",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade",
"openshift-install create cluster --dir <installation_directory>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_gcp/manually-creating-iam-gcp |
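To illustrate the Secret objects described in this chapter, the following sketch base64-encodes a GCP service account key and writes one secret manifest into the installer's manifests directory. The key file path, secret name, and namespace are placeholders mirroring the sample objects above; use the values from the corresponding CredentialsRequest object.
# encode the service account key for one component (path is an example)
SA_B64=$(base64 -w0 gcp-service-account.json)
# write the manifest next to the other installer manifests (names are placeholders)
cat > <installation_directory>/manifests/openshift-component-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: <component-secret>
  namespace: <component-namespace>
data:
  service_account.json: ${SA_B64}
EOF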
Chapter 41. Infinispan | Chapter 41. Infinispan Both producer and consumer are supported. This component allows you to interact with the Infinispan distributed data grid / cache using the Hot Rod protocol. Infinispan is an extremely scalable, highly available key/value data store and data grid platform written in Java. 41.1. Dependencies When using infinispan with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-infinispan-starter</artifactId> </dependency> 41.2. URI format The producer allows sending messages to a remote cache using the HotRod protocol. The consumer allows listening for events from a remote cache using the HotRod protocol. 41.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 41.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, URLs for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 41.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type-safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for URLs, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 41.4. Component Options The Infinispan component supports 26 options, which are listed below. Name Description Default Type configuration (common) Component configuration. InfinispanRemoteConfiguration hosts (common) Specifies the host of the cache on Infinispan instance. String queryBuilder (common) Specifies the query builder. InfinispanQueryBuilder secure (common) Define if we are connecting to a secured Infinispan instance. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean customListener (consumer) Returns the custom listener in use, if provided. InfinispanRemoteCustomListener eventTypes (consumer) Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CLIENT_CACHE_ENTRY_CREATED, CLIENT_CACHE_ENTRY_MODIFIED, CLIENT_CACHE_ENTRY_REMOVED, CLIENT_CACHE_ENTRY_EXPIRED, CLIENT_CACHE_FAILOVER.
String defaultValue (producer) Set a specific default value for some producer operations. Object key (producer) Set a specific key for producer operations. Object lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean oldValue (producer) Set a specific old value for some producer operations. Object operation (producer) The operation to perform. Enum values: PUT PUTASYNC PUTALL PUTALLASYNC PUTIFABSENT PUTIFABSENTASYNC GET GETORDEFAULT CONTAINSKEY CONTAINSVALUE REMOVE REMOVEASYNC REPLACE REPLACEASYNC SIZE CLEAR CLEARASYNC QUERY STATS COMPUTE COMPUTEASYNC PUT InfinispanOperation value (producer) Set a specific value for producer operations. Object password ( security) Define the password to access the infinispan instance. String saslMechanism ( security) Define the SASL Mechanism to access the infinispan instance. String securityRealm ( security) Define the security realm to access the infinispan instance. String securityServerName ( security) Define the security server name to access the infinispan instance. String username ( security) Define the username to access the infinispan instance. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean cacheContainer (advanced) Autowired Specifies the cache Container to connect. RemoteCacheManager cacheContainerConfiguration (advanced) Autowired The CacheContainer configuration. Used if the cacheContainer is not defined. Configuration configurationProperties (advanced) Implementation specific properties for the CacheManager. Map configurationUri (advanced) An implementation specific URI for the CacheManager. String flags (advanced) A comma separated list of org.infinispan.client.hotrod.Flag to be applied by default on each cache invocation. String remappingFunction (advanced) Set a specific remappingFunction to use in a compute operation. BiFunction resultHeader (advanced) Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String 41.5. Endpoint Options The Infinispan endpoint is configured using URI syntax: with the following path and query parameters: 41.5.1. Path Parameters (1 parameters) Name Description Default Type cacheName (common) Required The name of the cache to use. Use current to use the existing cache name from the currently configured cached manager. 
Or use default for the default cache manager name. String 41.5.2. Query Parameters (26 parameters) Name Description Default Type hosts (common) Specifies the host of the cache on Infinispan instance. String queryBuilder (common) Specifies the query builder. InfinispanQueryBuilder secure (common) Define if we are connecting to a secured Infinispan instance. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean customListener (consumer) Returns the custom listener in use, if provided. InfinispanRemoteCustomListener eventTypes (consumer) Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CLIENT_CACHE_ENTRY_CREATED, CLIENT_CACHE_ENTRY_MODIFIED, CLIENT_CACHE_ENTRY_REMOVED, CLIENT_CACHE_ENTRY_EXPIRED, CLIENT_CACHE_FAILOVER. String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern defaultValue (producer) Set a specific default value for some producer operations. Object key (producer) Set a specific key for producer operations. Object lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean oldValue (producer) Set a specific old value for some producer operations. Object operation (producer) The operation to perform. Enum values: PUT PUTASYNC PUTALL PUTALLASYNC PUTIFABSENT PUTIFABSENTASYNC GET GETORDEFAULT CONTAINSKEY CONTAINSVALUE REMOVE REMOVEASYNC REPLACE REPLACEASYNC SIZE CLEAR CLEARASYNC QUERY STATS COMPUTE COMPUTEASYNC PUT InfinispanOperation value (producer) Set a specific value for producer operations. Object password ( security) Define the password to access the infinispan instance. String saslMechanism ( security) Define the SASL Mechanism to access the infinispan instance. String securityRealm ( security) Define the security realm to access the infinispan instance. String securityServerName ( security) Define the security server name to access the infinispan instance. String username ( security) Define the username to access the infinispan instance. String cacheContainer (advanced) Autowired Specifies the cache Container to connect. RemoteCacheManager cacheContainerConfiguration (advanced) Autowired The CacheContainer configuration. 
Used if the cacheContainer is not defined. Configuration configurationProperties (advanced) Implementation specific properties for the CacheManager. Map configurationUri (advanced) An implementation specific URI for the CacheManager. String flags (advanced) A comma separated list of org.infinispan.client.hotrod.Flag to be applied by default on each cache invocation. String remappingFunction (advanced) Set a specific remappingFunction to use in a compute operation. BiFunction resultHeader (advanced) Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String 41.6. Camel Operations This section lists all available operations, along with their header information. Table 41.1. Table 1. Put Operations Operation Name Description InfinispanOperation.PUT Puts a key/value pair in the cache, optionally with expiration InfinispanOperation.PUTASYNC Asynchronously puts a key/value pair in the cache, optionally with expiration InfinispanOperation.PUTIFABSENT Puts a key/value pair in the cache if it did not exist, optionally with expiration InfinispanOperation.PUTIFABSENTASYNC Asynchronously puts a key/value pair in the cache if it did not exist, optionally with expiration Required Headers : CamelInfinispanKey CamelInfinispanValue Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Result Header : CamelInfinispanOperationResult Table 41.2. Table 2. Put All Operations Operation Name Description InfinispanOperation.PUTALL Adds multiple entries to a cache, optionally with expiration CamelInfinispanOperation.PUTALLASYNC Asynchronously adds multiple entries to a cache, optionally with expiration Required Headers : CamelInfinispanMap Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Table 41.3. Table 3. Get Operations Operation Name Description InfinispanOperation.GET Retrieves the value associated with a specific key from the cache InfinispanOperation.GETORDEFAULT Retrieves the value, or default value, associated with a specific key from the cache Required Headers : CamelInfinispanKey Table 41.4. Table 4. Contains Key Operation Operation Name Description InfinispanOperation.CONTAINSKEY Determines whether a cache contains a specific key Required Headers CamelInfinispanKey Result Header CamelInfinispanOperationResult Table 41.5. Table 5. Contains Value Operation Operation Name Description InfinispanOperation.CONTAINSVALUE Determines whether a cache contains a specific value Required Headers : CamelInfinispanKey Table 41.6. Table 6. Remove Operations Operation Name Description InfinispanOperation.REMOVE Removes an entry from a cache, optionally only if the value matches a given one InfinispanOperation.REMOVEASYNC Asynchronously removes an entry from a cache, optionally only if the value matches a given one Required Headers : CamelInfinispanKey Optional Headers : CamelInfinispanValue Result Header : CamelInfinispanOperationResult Table 41.7. Table 7. 
Replace Operations Operation Name Description InfinispanOperation.REPLACE Conditionally replaces an entry in the cache, optionally with expiration InfinispanOperation.REPLACEASYNC Asynchronously conditionally replaces an entry in the cache, optionally with expiration Required Headers : CamelInfinispanKey CamelInfinispanValue CamelInfinispanOldValue Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Result Header : CamelInfinispanOperationResult Table 41.8. Table 8. Clear Operations Operation Name Description InfinispanOperation.CLEAR Clears the cache InfinispanOperation.CLEARASYNC Asynchronously clears the cache Table 41.9. Table 9. Size Operation Operation Name Description InfinispanOperation.SIZE Returns the number of entries in the cache Result Header CamelInfinispanOperationResult Table 41.10. Table 10. Stats Operation Operation Name Description InfinispanOperation.STATS Returns statistics about the cache Result Header : CamelInfinispanOperationResult Table 41.11. Table 11. Query Operation Operation Name Description InfinispanOperation.QUERY Executes a query on the cache Required Headers : CamelInfinispanQueryBuilder Result Header : CamelInfinispanOperationResult Note Write methods like put(key, value) and remove(key) do not return the value by default. 41.7. Message Headers Name Default Value Type Context Description CamelInfinispanCacheName null String Shared The cache participating in the operation or event. CamelInfinispanOperation PUT InfinispanOperation Producer The operation to perform. CamelInfinispanMap null Map Producer A Map to use in case of CamelInfinispanOperationPutAll operation CamelInfinispanKey null Object Shared The key to perform the operation to or the key generating the event. CamelInfinispanValue null Object Producer The value to use for the operation. CamelInfinispanEventType null String Consumer The type of the received event. CamelInfinispanLifespanTime null long Producer The Lifespan time of a value inside the cache. Negative values are interpreted as infinity. CamelInfinispanTimeUnit null String Producer The Time Unit of an entry Lifespan Time. CamelInfinispanMaxIdleTime null long Producer The maximum amount of time an entry is allowed to be idle for before it is considered as expired. CamelInfinispanMaxIdleTimeUnit null String Producer The Time Unit of an entry Max Idle Time. CamelInfinispanQueryBuilder null InfinispanQueryBuilder Producer The QueryBuilde to use for QUERY command, if not present the command defaults to InifinispanConfiguration's one CamelInfinispanOperationResultHeader null String Producer Store the operation result in a header instead of the message body 41.8. 
Examples Put a key/value into a named cache: from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) (1) .setHeader(InfinispanConstants.KEY).constant("123") (2) .to("infinispan:myCacheName?cacheContainer=#cacheContainer"); (3) Where, 1 - Set the operation to perform 2 - Set the key used to identify the element in the cache 3 - Use the configured cache manager cacheContainer from the registry to put an element to the cache named myCacheName It is possible to configure the lifetime and/or the idle time before the entry expires and gets evicted from the cache, for example: from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant("123") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) (1) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT).constant(TimeUnit.MILLISECONDS.toString()) (2) .to("infinispan:myCacheName"); where, 1 - Set the lifespan of the entry 2 - Set the time unit for the lifespan Queries from("direct:start") .setHeader(InfinispanConstants.OPERATION, InfinispanConstants.QUERY) .setHeader(InfinispanConstants.QUERY_BUILDER, new InfinispanQueryBuilder() { @Override public Query build(QueryFactory<Query> qf) { return qf.from(User.class).having("name").like("%abc%").build(); } }) .to("infinispan:myCacheName?cacheContainer=#cacheManager"); Note The .proto descriptors for domain objects must be registered with the remote Data Grid server, see Remote Query Example in the official Infinispan documentation. Custom Listeners from("infinispan://?cacheContainer=#cacheManager&customListener=#myCustomListener") .to("mock:result"); The instance of myCustomListener must exist and Camel should be able to look it up from the Registry . Users are encouraged to extend the org.apache.camel.component.infinispan.remote.InfinispanRemoteCustomListener class and annotate the resulting class with @ClientListener , which can be found in the package org.infinispan.client.hotrod.annotation . 41.9. Using the Infinispan based idempotent repository In this section we will use the Infinispan based idempotent repository. Java Example InfinispanRemoteConfiguration conf = new InfinispanRemoteConfiguration(); (1) conf.setHosts("localhost:11222"); InfinispanRemoteIdempotentRepository repo = new InfinispanRemoteIdempotentRepository("idempotent"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from("direct:start") .idempotentConsumer(header("MessageID"), repo) (3) .to("mock:result"); } }); where, 1 - Configure the cache 2 - Configure the repository bean 3 - Set the repository to the route XML Example <bean id="infinispanRepo" class="org.apache.camel.component.infinispan.remote.InfinispanRemoteIdempotentRepository" destroy-method="stop"> <constructor-arg value="idempotent"/> (1) <property name="configuration"> (2) <bean class="org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration"> <property name="hosts" value="localhost:11222"/> </bean> </property> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start" /> <idempotentConsumer messageIdRepositoryRef="infinispanRepo"> (3) <header>MessageID</header> <to uri="mock:result" /> </idempotentConsumer> </route> </camelContext> where, 1 - Set the name of the cache that will be used by the repository 2 - Configure the repository bean 3 - Set the repository to the route 41.10.
Using the Infinispan based aggregation repository In this section we will use the Infinispan based aggregation repository. Java Example InfinispanRemoteConfiguration conf = new InfinispanRemoteConfiguration(); (1) conf.setHosts("localhost:11222"); InfinispanRemoteAggregationRepository repo = new InfinispanRemoteAggregationRepository(); (2) repo.setCacheName("aggregation"); repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from("direct:start") .aggregate(header("MessageID")) .completionSize(3) .aggregationRepository(repo) (3) .aggregationStrategyRef("myStrategy") .to("mock:result"); } }); where, 1 - Configure the cache 2 - Create the repository bean 3 - Set the repository to the route XML Example <bean id="infinispanRepo" class="org.apache.camel.component.infinispan.remote.InfinispanRemoteAggregationRepository" destroy-method="stop"> <constructor-arg value="aggregation"/> (1) <property name="configuration"> (2) <bean class="org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration"> <property name="hosts" value="localhost:11222"/> </bean> </property> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start" /> <aggregate strategyRef="myStrategy" completionSize="3" aggregationRepositoryRef="infinispanRepo"> (3) <correlationExpression> <header>MessageID</header> </correlationExpression> <to uri="mock:result"/> </aggregate> </route> </camelContext> where, 1 - Set the name of the cache that will be used by the repository 2 - Configure the repository bean 3 - Set the repository to the route Note With the release of Infinispan 11, it is required to set the encoding configuration on any cache created. This is critical for consuming events too. For more information have a look at Data Encoding and MediaTypes in the official Infinispan documentation. 41.11. Spring Boot Auto-Configuration The component supports 23 options, which are listed below. Name Description Default Type camel.component.infinispan.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.infinispan.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.infinispan.cache-container Specifies the cache Container to connect. The option is a org.infinispan.client.hotrod.RemoteCacheManager type. RemoteCacheManager camel.component.infinispan.cache-container-configuration The CacheContainer configuration. Used if the cacheContainer is not defined. The option is a org.infinispan.client.hotrod.configuration.Configuration type. Configuration camel.component.infinispan.configuration Component configuration. The option is a org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration type.
InfinispanRemoteConfiguration camel.component.infinispan.configuration-properties Implementation specific properties for the CacheManager. Map camel.component.infinispan.configuration-uri An implementation specific URI for the CacheManager. String camel.component.infinispan.custom-listener Returns the custom listener in use, if provided. The option is a org.apache.camel.component.infinispan.remote.InfinispanRemoteCustomListener type. InfinispanRemoteCustomListener camel.component.infinispan.enabled Whether to enable auto configuration of the infinispan component. This is enabled by default. Boolean camel.component.infinispan.event-types Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CLIENT_CACHE_ENTRY_CREATED, CLIENT_CACHE_ENTRY_MODIFIED, CLIENT_CACHE_ENTRY_REMOVED, CLIENT_CACHE_ENTRY_EXPIRED, CLIENT_CACHE_FAILOVER. String camel.component.infinispan.flags A comma separated list of org.infinispan.client.hotrod.Flag to be applied by default on each cache invocation. String camel.component.infinispan.hosts Specifies the host of the cache on Infinispan instance. String camel.component.infinispan.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.infinispan.operation The operation to perform. InfinispanOperation camel.component.infinispan.password Define the password to access the infinispan instance. String camel.component.infinispan.query-builder Specifies the query builder. The option is a org.apache.camel.component.infinispan.InfinispanQueryBuilder type. InfinispanQueryBuilder camel.component.infinispan.remapping-function Set a specific remappingFunction to use in a compute operation. The option is a java.util.function.BiFunction type. BiFunction camel.component.infinispan.result-header Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String camel.component.infinispan.sasl-mechanism Define the SASL Mechanism to access the infinispan instance. String camel.component.infinispan.secure Define if we are connecting to a secured Infinispan instance. false Boolean camel.component.infinispan.security-realm Define the security realm to access the infinispan instance. String camel.component.infinispan.security-server-name Define the security server name to access the infinispan instance. String camel.component.infinispan.username Define the username to access the infinispan instance. String | [
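As a quick orientation for Spring Boot users, the following application.properties sketch shows how a few of the options above are typically set; the host, credentials, SASL mechanism, and realm values are placeholders for illustration, not defaults.
# Illustrative application.properties entries (values are placeholders)
camel.component.infinispan.hosts=localhost:11222
camel.component.infinispan.secure=true
camel.component.infinispan.username=admin
camel.component.infinispan.password=changeme
camel.component.infinispan.sasl-mechanism=DIGEST-MD5
camel.component.infinispan.security-realm=default
With component-level properties like these in place, routes can refer to infinispan:cacheName endpoints without repeating the connection details in every URI.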
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-infinispan-starter</artifactId> </dependency>",
"infinispan://cacheName?[options]",
"infinispan:cacheName",
"from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) (1) .setHeader(InfinispanConstants.KEY).constant(\"123\") (2) .to(\"infinispan:myCacheName&cacheContainer=#cacheContainer\"); (3)",
"from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant(\"123\") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) (1) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT.constant(TimeUnit.MILLISECONDS.toString()) (2) .to(\"infinispan:myCacheName\");",
"from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION, InfinispanConstants.QUERY) .setHeader(InfinispanConstants.QUERY_BUILDER, new InfinispanQueryBuilder() { @Override public Query build(QueryFactory<Query> qf) { return qf.from(User.class).having(\"name\").like(\"%abc%\").build(); } }) .to(\"infinispan:myCacheName?cacheContainer=#cacheManager\") ;",
"from(\"infinispan://?cacheContainer=#cacheManager&customListener=#myCustomListener\") .to(\"mock:result\");",
"InfinispanRemoteConfiguration conf = new InfinispanRemoteConfiguration(); (1) conf.setHosts(\"localhost:1122\") InfinispanRemoteIdempotentRepository repo = new InfinispanRemoteIdempotentRepository(\"idempotent\"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from(\"direct:start\") .idempotentConsumer(header(\"MessageID\"), repo) (3) .to(\"mock:result\"); } });",
"<bean id=\"infinispanRepo\" class=\"org.apache.camel.component.infinispan.remote.InfinispanRemoteIdempotentRepository\" destroy-method=\"stop\"> <constructor-arg value=\"idempotent\"/> (1) <property name=\"configuration\"> (2) <bean class=\"org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration\"> <property name=\"hosts\" value=\"localhost:11222\"/> </bean> </property> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\" /> <idempotentConsumer messageIdRepositoryRef=\"infinispanRepo\"> (3) <header>MessageID</header> <to uri=\"mock:result\" /> </idempotentConsumer> </route> </camelContext>",
"InfinispanRemoteConfiguration conf = new InfinispanRemoteConfiguration(); (1) conf.setHosts(\"localhost:1122\") InfinispanRemoteAggregationRepository repo = new InfinispanRemoteAggregationRepository(); (2) repo.setCacheName(\"aggregation\"); repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from(\"direct:start\") .aggregate(header(\"MessageID\")) .completionSize(3) .aggregationRepository(repo) (3) .aggregationStrategyRef(\"myStrategy\") .to(\"mock:result\"); } });",
"<bean id=\"infinispanRepo\" class=\"org.apache.camel.component.infinispan.remote.InfinispanRemoteAggregationRepository\" destroy-method=\"stop\"> <constructor-arg value=\"aggregation\"/> (1) <property name=\"configuration\"> (2) <bean class=\"org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration\"> <property name=\"hosts\" value=\"localhost:11222\"/> </bean> </property> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\" /> <aggregate strategyRef=\"myStrategy\" completionSize=\"3\" aggregationRepositoryRef=\"infinispanRepo\"> (3) <correlationExpression> <header>MessageID</header> </correlationExpression> <to uri=\"mock:result\"/> </aggregate> </route> </camelContext>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-infinispan-component-starter |
Chapter 6. Managing storage devices in the web console | Chapter 6. Managing storage devices in the web console You can use the the web console to configure physical and virtual storage devices. This chapter provides instructions for these devices: Mounted NFS Logical Volumes RAID VDO 6.1. Prerequisites The the web console has been installed. For details, see Installing the web console . 6.2. Managing NFS mounts in the web console The the web console enables you to mount remote directories using the Network File System (NFS) protocol. NFS makes it possible to reach and mount remote directories located on the network and work with the files as if the directory was located on your physical drive. Prerequisites NFS server name or IP address. Path to the directory on the remote server. 6.2.1. Connecting NFS mounts in the web console The following steps aim to help you with connecting a remote directory to your file system using NFS. Prerequisites NFS server name or IP address. Path to the directory on the remote server. Procedure Log in to the web console. For details, see Logging in to the web console . Click Storage . Click + in the NFS mounts section. In the New NFS Mount dialog box, enter the server or IP address of the remote server. In the Path on Server field, enter the path to the directory you want to mount. In the Local Mount Point field, enter the path where you want to find the directory in your local system. Select Mount at boot . This ensures that the directory will be reachable also after the restart of the local system. Optionally, select Mount read only if you do not want to change the content. Click Add . At this point, you can open the mounted directory and verify that the content is accessible. To troubleshoot the connection, you can adjust it with the Custom Mount Options . 6.2.2. Customizing NFS mount options in the web console The following section provides you with information on how to edit an existing NFS mount and shows you where to add custom mount options. Custom mount options can help you to troubleshoot the connection or change parameters of the NFS mount such as changing timeout limits or configuring authentication. Prerequisites NFS mount added. Procedure Log in to the web console. For details, see Logging in to the web console . Click Storage . Click on the NFS mount you want to adjust. If the remote directory is mounted, click Unmount . The directory must not be mounted during the custom mount options configuration. Otherwise the web console does not save the configuration and this will cause an error. Click Edit . In the NFS Mount dialog box, select Custom mount option . Enter mount options separated by a comma. For example: nfsvers=4 - the NFS protocol version number soft - type of recovery after an NFS request times out sec=krb5 - files on the NFS server can be secured by Kerberos authentication. Both the NFS client and server have to support Kerberos authentication. For a complete list of the NFS mount options, enter man nfs in the command line. Click Apply . Click Mount . Now you can open the mounted directory and verify that the content is accessible. 6.2.3. Related information For more details on NFS, see the Network File System (NFS) . 6.3. Managing Redundant Arrays of Independent Disks in the web console Redundant Arrays of Independent Disks (RAID) represents a way how to arrange more disks into one storage. 
RAID protects data stored in the disks against disk failure with the following data distribution strategies: Mirroring - data are copied to two different locations. If one disk fails, you have a copy and your data is not lost. Striping - data are evenly distributed among disks. Level of protection depends on the RAID level. The RHEL web console supports the following RAID levels: RAID 0 (Stripe) RAID 1 (Mirror) RAID 4 (Dedicated parity) RAID 5 (Distributed parity) RAID 6 (Double Distributed Parity) RAID 10 (Stripe of Mirrors) For more details, see RAID Levels and Linear Support . Before you can use disks in RAID, you need to: Create a RAID. Format it with file system. Mount the RAID to the server. 6.3.1. Prerequisites The the web console is running and accessible. For details, see Installing the web console . 6.3.2. Creating RAID in the web console This procedure aims to help you with configuring RAID in the web console. Prerequisites Physical disks connected to the system. Each RAID level requires different amount of disks. Procedure Open the web console. Click Storage . Click the + icon in the RAID Devices box. In the Create RAID Device dialog box, enter a name for a new RAID. In the RAID Level drop-down list, select a level of RAID you want to use. For detailed description of RAID levels supported on the RHEL 7 system, see RAID Levels and Linear Support . In the Chunk Size drop-down list, leave the predefined value as it is. The Chunk Size value specifies how large is each block for data writing. If the chunk size is 512 KiB, the system writes the first 512 KiB to the first disk, the second 512 KiB is written to the second disk, and the third chunk will be written to the third disk. If you have three disks in your RAID, the fourth 512 KiB will be written to the first disk again. Select disks you want to use for RAID. Click Create . In the Storage section, you can see the new RAID in the RAID devices box and format it. Now you have the following options how to format and mount the new RAID in the web console: Formatting RAID Creating partitions on partition table Creating a volume group on top of RAID 6.3.3. Formatting RAID in the web console This section describes formatting procedure of the new software RAID device which is created in the RHEL web interface. Prerequisites Physical disks are connected and visible by RHEL 7. RAID is created. Consider the file system which will be used for the RAID. Consider creating of a partitioning table. Procedure Open the RHEL web console. Click Storage . In the RAID devices box, choose the RAID you want to format by clicking on it. In the RAID details screen, scroll down to the Content part. Click to the newly created RAID. Click the Format button. In the Erase drop-down list, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. Advantage of this option is speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole disk with zeros. This option is slower because the program has to go through the whole disk. Use this option if the RAID includes any data and you need to rewrite it. In the Type drop-down list, select a XFS file system, if you do not have another strong preference. Enter a name of the file system. In the Mounting drop down list, select Custom . The Default option does not ensure that the file system will be mounted on the boot. In the Mount Point field, add the mount path. Select Mount at boot . Click the Format button. 
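For comparison with the console steps above, a roughly equivalent command-line flow is sketched below; the device names /dev/sdb, /dev/sdc, and /dev/sdd, the RAID level, and the mount point are assumptions for illustration only.
# Create a RAID 5 array from three disks (assumed device names)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# Format the array with XFS and mount it
mkfs.xfs /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid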
Formatting can take several minutes depending on the used formatting options and size of RAID. After successful finish, you can see the details of the formatted RAID on the Filesystem tab. To use the RAID, click Mount . At this point, the system uses mounted and formatted RAID. 6.3.4. Using the web console for creating a partition table on RAID RAID requires formatting as any other storage device. You have two options: Format the RAID device without partitions Create a partition table with partitions This section describes formatting RAID with the partition table on the new software RAID device created in the RHEL web interface. Prerequisites Physical disks are connected and visible by RHEL 7. RAID is created. Consider the file system used for the RAID. Consider creating a partitioning table. Procedure Open the RHEL web console. Click Storage . In the RAID devices box, select the RAID you want to edit. In the RAID details screen, scroll down to the Content part. Click to the newly created RAID. Click the Create partition table button. In the Erase drop-down list, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. Advantage of this option is speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole RAID with zeros. This option is slower because the program has to go through the whole RAID. Use this option if RAID includes any data and you need to rewrite it. In the Partitioning drop-down list, select: Compatible with modern system and hard disks > 2TB (GPT) - GUID Partition Table is a modern recommended partitioning system for large RAIDs with more than four partitions. Compatible with all systems and devices (MBR) - Master Boot Record works with disks up to 2 TB in size. MBR also support four primary partitions max. Click Format . At this point, the partitioning table has been created and you can create partitions. For creating partitions, see Using the web console for creating partitions on RAID . 6.3.5. Using the web console for creating partitions on RAID This section describes creating a partition in the existing partition table. Prerequisites Partition table is created. For details, see Using the web console for creating partitions on RAID . Procedure Open the web console. Click Storage . In the RAID devices box, click to the RAID you want to edit. In the RAID details screen, scroll down to the Content part. Click to the newly created RAID. Click Create Partition . In the Create partition dialog box, set up the size of the first partition. In the Erase drop-down list, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. Advantage of this option is speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole RAID with zeros. This option is slower because the program have to go through the whole RAID. Use this option if RAID includes any data and you need to rewrite it. In the Type drop-down list, select a XFS file system, if you do not have another strong preference. For details about the XFS file system, see The XFS file system . Enter any name for the file system. Do not use spaces in the name. In the Mounting drop down list, select Custom . The Default option does not ensure that the file system will be mounted on the boot. In the Mount Point field, add the mount path. Select Mount at boot . Click Create partition . Formatting can take several minutes depending on used formatting options and size of RAID. 
After successful finish, you can continue with creating other partitions. At this point, the system uses mounted and formatted RAID. 6.3.6. Using the web console for creating a volume group on top of RAID This section shows you how to build a volume group from software RAID. Prerequisites RAID device, which is not formatted and mounted. Procedure Open the RHEL web console. Click Storage . Click the + icon in the Volume Groups box. In the Create Volume Group dialog box, enter a name for the new volume group. In the Disks list, select a RAID device. If you do not see the RAID in the list, unmount the RAID from the system. The RAID device must not be used by the RHEL system. Click Create . The new volume group has been created and you can continue with creating a logical volume. For details, see Creating logical volumes in the web console . 6.4. Using the web console for configuring LVM logical volumes Red Hat Enterprise Linux 7 supports the LVM logical volume manager. When you install a Red Hat Enterprise Linux 7, it will be installed on LVM automatically created during the installation. The screenshot shows you a clean installation of the RHEL system with two logical volumes in the the web console automatically created during the installation. To find out more about logical volumes, follow the sections describing: What is logical volume manager and when to use it. What are volume groups and how to create them. What are logical volumes and how to create them. How to format logical volumes. How to resize logical volumes. 6.4.1. Prerequisites Physical drives, RAID devices, or any other type of block device from which you can create the logical volume. 6.4.2. Logical Volume Manager in the web console The web console provides a graphical interface to create LVM volume groups and logical volumes. Volume groups create a layer between physical and logical volumes. It makes you possible to add or remove physical volumes without influencing logical volume itself. Volume groups appear as one drive with capacity consisting of capacities of all physical drives included in the group. You can join physical drives into volume groups in the web console. Logical volumes act as a single physical drive and it is built on top of a volume group in your system. Main advantages of logical volumes are: Better flexibility than the partitioning system used on your physical drive. Ability to connect more physical drives into one volume. Possibility of expanding (growing) or reducing (shrinking) capacity of the volume on-line, without restart. Ability to create snapshots. Additional resources For details, see Logical volume manager administration . 6.4.3. Creating volume groups in the web console The following describes creating volume groups from one or more physical drives or other storage devices. Logical volumes are created from volume groups. Each volume group can include multiple logical volumes. For details, see Volume groups . Prerequisites Physical drives or other types of storage devices from which you want to create volume groups. Procedure Log in to the web console. Click Storage . Click the + icon in the Volume Groups box. In the Name field, enter a name of a group without spaces. Select the drives you want to combine to create the volume group. It might happen that you cannot see devices as you expected. The RHEL web console displays only unused block devices. 
Used devices means, for example: Devices formatted with a file system Physical volumes in another volume group Physical volumes being a member of another software RAID device If you do not see the device, format it to be empty and unused. Click Create . The web console adds the volume group in the Volume Groups section. After clicking the group, you can create logical volumes that are allocated from that volume group. 6.4.4. Creating logical volumes in the web console The following steps describe how to create LVM logical volumes. Prerequisites Volume group created. For details, see Creating volume groups in the web console . Procedure Log in to the web console. Click Storage . Click the volume group in which you want to create logical volumes. Click Create new Logical Volume . In the Name field, enter a name for the new logical volume without spaces. In the Purpose drop down menu, select Block device for filesystems . This configuration enables you to create a logical volume with the maximum volume size which is equal to the sum of the capacities of all drives included in the volume group. Define the size of the logical volume. Consider: How much space the system using this logical volume will need. How many logical volumes you want to create. You do not have to use the whole space. If necessary, you can grow the logical volume later. Click Create . To verify the settings, click your logical volume and check the details. At this stage, the logical volume has been created and you need to create and mount a file system with the formatting process. 6.4.5. Formatting logical volumes in the web console Logical volumes act as physical drives. To use them, you need to format them with a file system. Warning Formatting logical volumes will erase all data on the volume. The file system you select determines the configuration parameters you can use for logical volumes. For example, some the XFS file system does not support shrinking volumes. For details, see Resizing logical volumes in the web console . The following steps describe the procedure to format logical volumes. Prerequisites Logical volume created. For details, see Creating volume groups in the web console . Procedure Log in to the RHEL web console. Click Storage . Click the volume group in which the logical volume is placed. Click the logical volume. Click on the Unrecognized Data tab. Click Format . In the Erase drop down menu, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. Advantage of this option is speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole disk with zeros. This option is slower because the program have to go through the whole disk. Use this option if the disk includes any data and you need to overwrite it. In the Type drop down menu, select a file system: XFS file system supports large logical volumes, switching physical drives online without outage, and growing an existing file system. Leave this file system selected if you do not have a different strong preference. XFS does not support reducing the size of a volume formatted with an XFS file system ext4 file system supports: Logical volumes Switching physical drives online without outage Growing a file system Shrinking a file system You can also select a version with the LUKS (Linux Unified Key Setup) encryption, which allows you to encrypt the volume with a passphrase. In the Name field, enter the logical volume name. In the Mounting drop down menu, select Custom . 
The Default option does not ensure that the file system will be mounted on the boot. In the Mount Point field, add the mount path. Select Mount at boot . Click Format . Formatting can take several minutes depending on the volume size and which formatting options are selected. after the formatting has completed successfully, you can see the details of the formatted logical volume on the Filesystem tab. To use the logical volume, click Mount . At this point, the system can use mounted and formatted logical volume. 6.4.6. Resizing logical volumes in the web console This section describes how to resize logical volumes. You can extend or even reduce logical volumes. Whether you can resize a logical volume depends on which file system you are using. Most file systems enable you to extend (grow) the volume online (without outage). You can also reduce (shrink) the size of logical volumes, if the logical volume contains a file system which supports shrinking. It should be available, for example, in the ext3/ext4 file systems. Warning You cannot reduce volumes that contains GFS2 or XFS filesystem. Prerequisites Existing logical volume containing a file system which supports resizing logical volumes. Procedure The following steps provide the procedure for growing a logical volume without taking the volume offline: Log in to the RHEL web console. Click Storage . Click the volume group in which the logical volume is placed. Click the logical volume. On the Volume tab, click Grow . In the Grow Logical Volume dialog box, adjust volume space. Click Grow . LVM grows the logical volume without system outage. 6.4.7. Related information For more details on creating logical volumes, see Configuring and managing logical volumes . 6.5. Using the web console for configuring thin logical volumes Thinly-provisioned logical volumes enables you to allocate more space for designated applications or servers than how much space logical volumes actually contain. For details, see Thinly-provisioned logical volumes (thin volumes) . The following sections describe: Creating pools for the thinly provisioned logical volumes. Creating thin logical volumes. Formatting thin logical volumes. 6.5.1. Prerequisites Physical drives or other types of storage devices from which you want to create volume groups. 6.5.2. Creating pools for thin logical volumes in the web console The following steps show you how to create a pool for thinly provisioned volumes: Prerequisites Volume group created . Procedure Log in to the web console. Click Storage . Click the volume group in which you want to create thin volumes. Click Create new Logical Volume . In the Name field, enter a name for the new pool of thin volumes without spaces. In the Purpose drop down menu, select Pool for thinly provisioned volumes . This configuration enables you to create the thin volume. Define the size of the pool of thin volumes. Consider: How many thin volumes you will need in this pool? What is the expected size of each thin volume? You do not have to use the whole space. If necessary, you can grow the pool later. Click Create . The pool for thin volumes has been created and you can add thin volumes. 6.5.3. Creating thin logical volumes in the web console The following text describes creating a thin logical volume in the pool. The pool can include multiple thin volumes and each thin volume can be as large as the pool for thin volumes itself. Important Using thin volumes requires regular checkup of actual free physical space of the logical volume. 
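One way to perform that regular checkup from a terminal is the lvs report shown below; the volume group name vg_data is an assumed example.
# Report how full each thin pool and thin volume is (volume group name assumed)
lvs -o lv_name,lv_size,data_percent,metadata_percent vg_data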
Prerequisites Pool for thin volumes created. For details, see Creating volume groups in the web console . Procedure Log in to the web console. Click Storage . Click the volume group in which you want to create thin volumes. Click the desired pool. Click Create Thin Volume . In the Create Thin Volume dialog box, enter a name for the thin volume without spaces. Define the size of the thin volume. Click Create . At this stage, the thin logical volume has been created and you need to format it. 6.5.4. Formatting logical volumes in the web console Logical volumes act as physical drives. To use them, you need to format them with a file system. Warning Formatting logical volumes will erase all data on the volume. The file system you select determines the configuration parameters you can use for logical volumes. For example, some the XFS file system does not support shrinking volumes. For details, see Resizing logical volumes in the web console . The following steps describe the procedure to format logical volumes. Prerequisites Logical volume created. For details, see Creating volume groups in the web console . Procedure Log in to the RHEL web console. Click Storage . Click the volume group in which the logical volume is placed. Click the logical volume. Click on the Unrecognized Data tab. Click Format . In the Erase drop down menu, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. Advantage of this option is speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole disk with zeros. This option is slower because the program have to go through the whole disk. Use this option if the disk includes any data and you need to overwrite it. In the Type drop down menu, select a file system: XFS file system supports large logical volumes, switching physical drives online without outage, and growing an existing file system. Leave this file system selected if you do not have a different strong preference. XFS does not support reducing the size of a volume formatted with an XFS file system ext4 file system supports: Logical volumes Switching physical drives online without outage Growing a file system Shrinking a file system You can also select a version with the LUKS (Linux Unified Key Setup) encryption, which allows you to encrypt the volume with a passphrase. In the Name field, enter the logical volume name. In the Mounting drop down menu, select Custom . The Default option does not ensure that the file system will be mounted on the boot. In the Mount Point field, add the mount path. Select Mount at boot . Click Format . Formatting can take several minutes depending on the volume size and which formatting options are selected. after the formatting has completed successfully, you can see the details of the formatted logical volume on the Filesystem tab. To use the logical volume, click Mount . At this point, the system can use mounted and formatted logical volume. 6.6. Using the web console for changing physical drives in volume groups The following text describes how to change the drive in a volume group using the the web console. The change of physical drives consists of the following procedures: Adding physical drives from logical volumes. Removing physical drives from logical volumes. 6.6.1. Prerequisites A new physical drive for replacing the old or broken one. The configuration expects that physical drives are organized in a volume group. 6.6.2. 
Adding physical drives to volume groups in the web console The web console enables you to add a new physical drive or other type of volume to the existing logical volume. Prerequisites A volume group must be created. A new drive connected to the machine. Procedure Log in to the web console. Click Storage . In the Volume Groups box, click the volume group in which you want to add a physical volume. In the Physical Volumes box, click the + icon. In the Add Disks dialog box, select the preferred drive and click Add . As a result, the web console adds the physical volume. You can see it in the Physical Volumes section, and the logical volume can immediately start to write on the drive. 6.6.3. Removing physical drives from volume groups in the web console If a logical volume includes multiple physical drives, you can remove one of the physical drives online. The system moves automatically all data from the drive to be removed to other drives during the removal process. Notice that it can take some time. The web console also verifies, if there is enough space for removing the physical drive. Prerequisites A volume group with more than one physical drive connected. Procedure The following steps describe how to remove a drive from the volume group without causing outage in the RHEL web console. Log in to the RHEL web console. Click Storage . Click the volume group in which you have the logical volume. In the Physical Volumes section, locate the preferred volume. Click the - icon. The RHEL web console verifies, if the logical volume has enough free space for removing the disk. If not, you cannot remove the disk and it is necessary to add another disk first. For details, see Adding physical drives to logical volumes in the web console . As results, the RHEL web console removes the physical volume from the created logical volume without causing an outage. 6.7. Using the web console for managing Virtual Data Optimizer volumes This chapter describes the Virtual Data Optimizer (VDO) configuration using the the web console. After reading it, you will be able to: Create VDO volumes Format VDO volumes Extend VDO volumes 6.7.1. Prerequisites The the web console is installed and accessible. For details, see Installing the web console . 6.7.2. VDO volumes in the web console Red Hat Enterprise Linux 7 supports Virtual Data Optimizer (VDO). VDO is a block virtualization technology that combines: Compression For details, see Using Compression . Deduplication For details, see Disabling and Re-enabling deduplication . Thin provisioning For details, see Thinly-provisioned logical volumes (thin volumes) . Using these technologies, VDO: Saves storage space inline Compresses files Eliminates duplications Enables you to allocate more virtual space than how much the physical or logical storage provides Enables you to extend the virtual storage by growing VDO can be created on top of many types of storage. In the web console, you can configure VDO on top of: LVM Note It is not possible to configure VDO on top of thinly-provisioned volumes. Physical volume Software RAID For details about placement of VDO in the Storage Stack, see System Requirements . Additional resources For details about VDO, see Deduplication and compression with VDO . 6.7.3. Creating VDO volumes in the web console This section helps you to create a VDO volume in the RHEL web console. Prerequisites Physical drives, LVMs, or RAID from which you want to create VDO. Procedure Log in to the web console. For details, see Logging in to the web console . 
Click Storage . Click the + icon in the VDO Devices box. In the Name field, enter a name of a VDO volume without spaces. Select the drive that you want to use. In the Logical Size bar, set up the size of the VDO volume. You can extend it more than ten times, but consider for what purpose you are creating the VDO volume: For active VMs or container storage, use logical size that is ten times the physical size of the volume. For object storage, use logical size that is three times the physical size of the volume. For details, see Getting started with VDO . In the Index Memory bar, allocate memory for the VDO volume. For details about VDO system requirements, see System Requirements . Select the Compression option. This option can efficiently reduce various file formats. For details, see Using Compression . Select the Deduplication option. This option reduces the consumption of storage resources by eliminating multiple copies of duplicate blocks. For details, see Disabling and Re-enabling deduplication . [Optional] If you want to use the VDO volume with applications that need a 512 bytes block size, select Use 512 Byte emulation . This reduces the performance of the VDO volume, but should be very rarely needed. If in doubt, leave it off. Click Create . If the process of creating the VDO volume succeeds, you can see the new VDO volume in the Storage section and format it with a file system. 6.7.4. Formatting VDO volumes in the web console VDO volumes act as physical drives. To use them, you need to format them with a file system. Warning Formatting VDO will erase all data on the volume. The following steps describe the procedure to format VDO volumes. Prerequisites A VDO volume is created. For details, see Section 6.7.3, "Creating VDO volumes in the web console" . Procedure Log in to the web console. For details, see Logging in to the web console . Click Storage . Click the VDO volume. Click on the Unrecognized Data tab. Click Format . In the Erase drop down menu, select: Don't overwrite existing data The RHEL web console rewrites only the disk header. The advantage of this option is the speed of formatting. Overwrite existing data with zeros The RHEL web console rewrites the whole disk with zeros. This option is slower because the program has to go through the whole disk. Use this option if the disk includes any data and you need to rewrite them. In the Type drop down menu, select a filesystem: The XFS file system supports large logical volumes, switching physical drives online without outage, and growing. Leave this file system selected if you do not have a different strong preference. XFS does not support shrinking volumes. Therefore, you will not be able to reduce volume formatted with XFS. The ext4 file system supports logical volumes, switching physical drives online without outage, growing, and shrinking. You can also select a version with the LUKS (Linux Unified Key Setup) encryption, which allows you to encrypt the volume with a passphrase. In the Name field, enter the logical volume name. In the Mounting drop down menu, select Custom . The Default option does not ensure that the file system will be mounted on the boot. In the Mount Point field, add the mount path. Select Mount at boot . Click Format . Formatting can take several minutes depending on the used formatting options and the volume size. After a successful finish, you can see the details of the formatted VDO volume on the Filesystem tab. To use the VDO volume, click Mount . 
At this point, the system uses the mounted and formatted VDO volume. 6.7.5. Extending VDO volumes in the web console This section describes extending VDO volumes in the web console. Prerequisites A VDO volume is created. Procedure Log in to the web console. For details, see Logging in to the web console . Click Storage . Click your VDO volume in the VDO Devices box. In the VDO volume details, click the Grow button. In the Grow logical size of VDO dialog box, extend the logical size of the VDO volume. For example, if the original logical size of the VDO volume is 6 GB, the RHEL web console enables you to grow it to more than ten times that size, and it works correctly because of the compression and deduplication. Click Grow . If the process of growing VDO succeeds, you can see the new size in the VDO volume details.
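To cross-check physical usage and space savings from a terminal after growing the volume, the vdostats utility can be used; it reports on all VDO volumes, so no volume name needs to be assumed.
# Show physical size, used space, and savings for all VDO volumes
vdostats --human-readable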
Chapter 6. Resolved issues The following issues are resolved for this release: Issue Description JWS-2579 Naming convention issue in JBoss Web Server download page in customer portal JWS-2245 Remove CXF and Hibernate from JWS maven-repo zip
Chapter 4. Installing a cluster on RHV with user-provisioned infrastructure In OpenShift Container Platform version 4.12, you can install a customized OpenShift Container Platform cluster on Red Hat Virtualization (RHV) and other infrastructure that you provide. The OpenShift Container Platform documentation uses the term user-provisioned infrastructure to refer to this infrastructure type. The following diagram shows an example of a potential OpenShift Container Platform cluster running on a RHV cluster. The RHV hosts run virtual machines that contain both control plane and compute pods. One of the hosts also runs a Manager virtual machine and a bootstrap virtual machine that contains a temporary control plane pod. 4.1. Prerequisites The following items are required to install an OpenShift Container Platform cluster on a RHV environment. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You have a supported combination of versions in the Support Matrix for OpenShift Container Platform on Red Hat Virtualization (RHV) . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Requirements for the RHV environment To install and run an OpenShift Container Platform version 4.12 cluster, the RHV environment must meet the following requirements. Not meeting these requirements can cause the installation or update process to fail. Additionally, not meeting these requirements can cause the OpenShift Container Platform cluster to fail days or weeks after installation. The following requirements for CPU, memory, and storage resources are based on default values multiplied by the default number of virtual machines the installation program creates. These resources must be available in addition to what the RHV environment uses for non-OpenShift Container Platform operations. By default, the installation program creates seven virtual machines during the installation process. First, it creates a bootstrap virtual machine to provide temporary services and a control plane while it creates the rest of the OpenShift Container Platform cluster. When the installation program finishes creating the cluster, deleting the bootstrap machine frees up its resources. If you increase the number of virtual machines in the RHV environment, you must increase the resources accordingly.
Requirements The RHV version is 4.4. The RHV environment has one data center whose state is Up . The RHV data center contains an RHV cluster. The RHV cluster has the following resources exclusively for the OpenShift Container Platform cluster: Minimum 28 vCPUs: four for each of the seven virtual machines created during installation. 112 GiB RAM or more, including: 16 GiB or more for the bootstrap machine, which provides the temporary control plane. 16 GiB or more for each of the three control plane machines which provide the control plane. 16 GiB or more for each of the three compute machines, which run the application workloads. The RHV storage domain must meet these etcd backend performance requirements . In production environments, each virtual machine must have 120 GiB or more. Therefore, the storage domain must provide 840 GiB or more for the default OpenShift Container Platform cluster. In resource-constrained or non-production environments, each virtual machine must have 32 GiB or more, so the storage domain must have 230 GiB or more for the default OpenShift Container Platform cluster. To download images from the Red Hat Ecosystem Catalog during installation and update procedures, the RHV cluster must have access to an internet connection. The Telemetry service also needs an internet connection to simplify the subscription and entitlement process. The RHV cluster must have a virtual network with access to the REST API on the RHV Manager. Ensure that DHCP is enabled on this network, because the VMs that the installer creates obtain their IP address by using DHCP. A user account and group with the following least privileges for installing and managing an OpenShift Container Platform cluster on the target RHV cluster: DiskOperator DiskCreator UserTemplateBasedVm TemplateOwner TemplateCreator ClusterAdmin on the target cluster Warning Apply the principle of least privilege: Avoid using an administrator account with SuperUser privileges on RHV during the installation process. The installation program saves the credentials you provide to a temporary ovirt-config.yaml file that might be compromised. 4.4. Verifying the requirements for the RHV environment Verify that the RHV environment meets the requirements to install and run an OpenShift Container Platform cluster. Not meeting these requirements can cause failures. Important These requirements are based on the default resources the installation program uses to create control plane and compute machines. These resources include vCPUs, memory, and storage. If you change these resources or increase the number of OpenShift Container Platform machines, adjust these requirements accordingly. Procedure Check that the RHV version supports installation of OpenShift Container Platform version 4.12. In the RHV Administration Portal, click the ? help icon in the upper-right corner and select About . In the window that opens, make a note of the RHV Software Version . Confirm that the RHV version is 4.4. For more information about supported version combinations, see Support Matrix for OpenShift Container Platform on RHV . Inspect the data center, cluster, and storage. In the RHV Administration Portal, click Compute Data Centers . Confirm that the data center where you plan to install OpenShift Container Platform is accessible. Click the name of that data center. In the data center details, on the Storage tab, confirm the storage domain where you plan to install OpenShift Container Platform is Active . Record the Domain Name for use later on. 
Confirm Free Space has at least 230 GiB. Confirm that the storage domain meets these etcd backend performance requirements , which you can measure by using the fio performance benchmarking tool . In the data center details, click the Clusters tab. Find the RHV cluster where you plan to install OpenShift Container Platform. Record the cluster name for use later on. Inspect the RHV host resources. In the RHV Administration Portal, click Compute > Clusters . Click the cluster where you plan to install OpenShift Container Platform. In the cluster details, click the Hosts tab. Inspect the hosts and confirm they have a combined total of at least 28 Logical CPU Cores available exclusively for the OpenShift Container Platform cluster. Record the number of available Logical CPU Cores for use later on. Confirm that these CPU cores are distributed so that each of the seven virtual machines created during installation can have four cores. Confirm that, all together, the hosts have 112 GiB of Max free Memory for scheduling new virtual machines distributed to meet the requirements for each of the following OpenShift Container Platform machines: 16 GiB required for the bootstrap machine 16 GiB required for each of the three control plane machines 16 GiB for each of the three compute machines Record the amount of Max free Memory for scheduling new virtual machines for use later on. Verify that the virtual network for installing OpenShift Container Platform has access to the RHV Manager's REST API. From a virtual machine on this network, use curl to reach the RHV Manager's REST API: USD curl -k -u <username>@<profile>:<password> \ 1 https://<engine-fqdn>/ovirt-engine/api 2 1 For <username> , specify the user name of an RHV account with privileges to create and manage an OpenShift Container Platform cluster on RHV. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For <password> , specify the password for that user name. 2 For <engine-fqdn> , specify the fully qualified domain name of the RHV environment. For example: USD curl -k -u ocpadmin@internal:pw123 \ https://rhv-env.virtlab.example.com/ovirt-engine/api 4.5. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. 
See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. Firewall Configure your firewall so your cluster has access to required sites. See also: Red Hat Virtualization Manager firewall requirements Host firewall requirements Load balancers Configure one or preferably two layer-4 load balancers: Provide load balancing for ports 6443 and 22623 on the control plane and bootstrap machines. Port 6443 provides access to the Kubernetes API server and must be reachable both internally and externally. Port 22623 must be accessible to nodes within the cluster. Provide load balancing for port 443 and 80 for machines that run the Ingress router, which are usually compute nodes in the default configuration. Both ports must be accessible from within and outside the cluster. DNS Configure infrastructure-provided DNS to allow the correct resolution of the main components and services. If you use only one load balancer, these DNS records can point to the same IP address. Create DNS records for api.<cluster_name>.<base_domain> (internal and external resolution) and api-int.<cluster_name>.<base_domain> (internal resolution) that point to the load balancer for the control plane machines. Create a DNS record for *.apps.<cluster_name>.<base_domain> that points to the load balancer for the Ingress router. For example, ports 443 and 80 of the compute machines. 4.5.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 4.5.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 4.1. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 
10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 4.2. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 4.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. 4.6. Setting up the installation machine To run the binary openshift-install installation program and Ansible scripts, set up the RHV Manager or an Red Hat Enterprise Linux (RHEL) computer with network access to the RHV environment and the REST API on the Manager. Procedure Update or install Python3 and Ansible. For example: # dnf update python3 ansible Install the python3-ovirt-engine-sdk4 package to get the Python Software Development Kit. Install the ovirt.image-template Ansible role. On the RHV Manager and other Red Hat Enterprise Linux (RHEL) machines, this role is distributed as the ovirt-ansible-image-template package. For example, enter: # dnf install ovirt-ansible-image-template Install the ovirt.vm-infra Ansible role. On the RHV Manager and other RHEL machines, this role is distributed as the ovirt-ansible-vm-infra package. # dnf install ovirt-ansible-vm-infra Create an environment variable and assign an absolute or relative path to it. For example, enter: USD export ASSETS_DIR=./wrk Note The installation program uses this variable to create a directory where it saves important installation-related files. Later, the installation process reuses this variable to locate those asset files. Avoid deleting this assets directory; it is required for uninstalling the cluster. 4.7. Installing OpenShift Container Platform on RHV in insecure mode By default, the installer creates a CA certificate, prompts you for confirmation, and stores the certificate to use during installation. You do not need to create or install one manually. Although it is not recommended, you can override this functionality and install OpenShift Container Platform without verifying a certificate by installing OpenShift Container Platform on RHV in insecure mode. Warning Installing in insecure mode is not recommended, because it enables a potential attacker to perform a Man-in-the-Middle attack and capture sensitive credentials on the network. Procedure Create a file named ~/.ovirt/ovirt-config.yaml . 
Add the following content to ovirt-config.yaml : ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: "" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true 1 Specify the hostname or address of your oVirt engine. 2 Specify the fully qualified domain name of your oVirt engine. 3 Specify the admin password for your oVirt engine. Run the installer. 4.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures. do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 4.10. Downloading the Ansible playbooks Download the Ansible playbooks for installing OpenShift Container Platform version 4.12 on RHV. Procedure On your installation machine, run the following commands: USD mkdir playbooks USD cd playbooks USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/common-auth.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/create-templates-and-vms.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/inventory.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/retire-bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/retire-masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/retire-workers.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/workers.yml' steps After you download these Ansible playbooks, you must also create the environment variable for the assets directory and customize the inventory.yml file before you create an installation configuration file by running the installation program. 4.11. 
The inventory.yml file You use the inventory.yml file to define and create elements of the OpenShift Container Platform cluster you are installing. This includes elements such as the Red Hat Enterprise Linux CoreOS (RHCOS) image, virtual machine templates, bootstrap machine, control plane nodes, and worker nodes. You also use inventory.yml to destroy the cluster. The following inventory.yml example shows you the parameters and their default values. The quantities and numbers in these default values meet the requirements for running a production OpenShift Container Platform cluster in a RHV environment. Example inventory.yml file --- all: vars: ovirt_cluster: "Default" ocp: assets_dir: "{{ lookup('env', 'ASSETS_DIR') }}" ovirt_config_path: "{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml" # --- # {op-system} section # --- rhcos: image_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/latest/rhcos-openstack.x86_64.qcow2.gz" local_cmp_image_path: "/tmp/rhcos.qcow2.gz" local_image_path: "/tmp/rhcos.qcow2" # --- # Profiles section # --- control_plane: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: "rhcos_x64" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: "rhcos_x64" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: "{{ metadata.infraID }}-bootstrap" ocp_type: bootstrap profile: "{{ control_plane }}" type: server - name: "{{ metadata.infraID }}-master0" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-master1" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-master2" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-worker0" ocp_type: worker profile: "{{ compute }}" - name: "{{ metadata.infraID }}-worker1" ocp_type: worker profile: "{{ compute }}" - name: "{{ metadata.infraID }}-worker2" ocp_type: worker profile: "{{ compute }}" Important Enter values for parameters whose descriptions begin with "Enter." Otherwise, you can use the default value or replace it with a new value. General section ovirt_cluster : Enter the name of an existing RHV cluster in which to install the OpenShift Container Platform cluster. ocp.assets_dir : The path of a directory the openshift-install installation program creates to store the files that it generates. ocp.ovirt_config_path : The path of the ovirt-config.yaml file the installation program generates, for example, ./wrk/install-config.yaml . This file contains the credentials required to interact with the REST API of the Manager. Red Hat Enterprise Linux CoreOS (RHCOS) section image_url : Enter the URL of the RHCOS image you specified for download. local_cmp_image_path : The path of a local download directory for the compressed RHCOS image. local_image_path : The path of a local directory for the extracted RHCOS image. Profiles section This section consists of two profiles: control_plane : The profile of the bootstrap and control plane nodes. 
compute : The profile of workers nodes in the compute plane. These profiles have the following parameters. The default values of the parameters meet the minimum requirements for running a production cluster. You can increase or customize these values to meet your workload requirements. cluster : The value gets the cluster name from ovirt_cluster in the General Section. memory : The amount of memory, in GB, for the virtual machine. sockets : The number of sockets for the virtual machine. cores : The number of cores for the virtual machine. template : The name of the virtual machine template. If plan to install multiple clusters, and these clusters use templates that contain different specifications, prepend the template name with the ID of the cluster. operating_system : The type of guest operating system in the virtual machine. With oVirt/RHV version 4.4, this value must be rhcos_x64 so the value of Ignition script can be passed to the VM. type : Enter server as the type of the virtual machine. Important You must change the value of the type parameter from high_performance to server . disks : The disk specifications. The control_plane and compute nodes can have different storage domains. size : The minimum disk size. name : Enter the name of a disk connected to the target cluster in RHV. interface : Enter the interface type of the disk you specified. storage_domain : Enter the storage domain of the disk you specified. nics : Enter the name and network the virtual machines use. You can also specify the virtual network interface profile. By default, NICs obtain their MAC addresses from the oVirt/RHV MAC pool. Virtual machines section This final section, vms , defines the virtual machines you plan to create and deploy in the cluster. By default, it provides the minimum number of control plane and worker nodes for a production environment. vms contains three required elements: name : The name of the virtual machine. In this case, metadata.infraID prepends the virtual machine name with the infrastructure ID from the metadata.yml file. ocp_type : The role of the virtual machine in the OpenShift Container Platform cluster. Possible values are bootstrap , master , worker . profile : The name of the profile from which each virtual machine inherits specifications. Possible values in this example are control_plane or compute . You can override the value a virtual machine inherits from its profile. To do this, you add the name of the profile attribute to the virtual machine in inventory.yml and assign it an overriding value. To see an example of this, examine the name: "{{ metadata.infraID }}-bootstrap" virtual machine in the preceding inventory.yml example: It has a type attribute whose value, server , overrides the value of the type attribute this virtual machine would otherwise inherit from the control_plane profile. Metadata variables For virtual machines, metadata.infraID prepends the name of the virtual machine with the infrastructure ID from the metadata.json file you create when you build the Ignition files. The playbooks use the following code to read infraID from the specific file located in the ocp.assets_dir . --- - name: include metadata.json vars include_vars: file: "{{ ocp.assets_dir }}/metadata.json" name: metadata ... 4.12. Specifying the RHCOS image settings Update the Red Hat Enterprise Linux CoreOS (RHCOS) image settings of the inventory.yml file. 
Later, when you run this file one of the playbooks, it downloads a compressed Red Hat Enterprise Linux CoreOS (RHCOS) image from the image_url URL to the local_cmp_image_path directory. The playbook then uncompresses the image to the local_image_path directory and uses it to create oVirt/RHV templates. Procedure Locate the RHCOS image download page for the version of OpenShift Container Platform you are installing, such as Index of /pub/openshift-v4/dependencies/rhcos/latest/latest . From that download page, copy the URL of an OpenStack qcow2 image, such as https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/latest/rhcos-openstack.x86_64.qcow2.gz . Edit the inventory.yml playbook you downloaded earlier. In it, paste the URL as the value for image_url . For example: rhcos: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/latest/rhcos-openstack.x86_64.qcow2.gz" 4.13. Creating the install config file You create an installation configuration file by running the installation program, openshift-install , and responding to its prompts with information you specified or gathered earlier. When you finish responding to the prompts, the installation program creates an initial version of the install-config.yaml file in the assets directory you specified earlier, for example, ./wrk/install-config.yaml The installation program also creates a file, USDHOME/.ovirt/ovirt-config.yaml , that contains all the connection parameters that are required to reach the Manager and use its REST API. NOTE: The installation process does not use values you supply for some parameters, such as Internal API virtual IP and Ingress virtual IP , because you have already configured them in your infrastructure DNS. It also uses the values you supply for parameters in inventory.yml , like the ones for oVirt cluster , oVirt storage , and oVirt network . And uses a script to remove or replace these same values from install-config.yaml with the previously mentioned virtual IPs . Procedure Run the installation program: USD openshift-install create install-config --dir USDASSETS_DIR Respond to the installation program's prompts with information about your system. Example output ? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********> For Internal API virtual IP and Ingress virtual IP , supply the IP addresses you specified when you configured the DNS service. Together, the values you enter for the oVirt cluster and Base Domain prompts form the FQDN portion of URLs for the REST API and any applications you create, such as https://api.ocp4.example.org:6443/ and https://console-openshift-console.apps.ocp4.example.org . You can get the pull secret from the Red Hat OpenShift Cluster Manager . 4.14. Customizing install-config.yaml Here, you use three Python scripts to override some of the installation program's default behaviors: By default, the installation program uses the machine API to create nodes. To override this default behavior, you set the number of compute nodes to zero replicas. Later, you use Ansible playbooks to create the compute nodes. By default, the installation program sets the IP range of the machine network for nodes. 
To override this default behavior, you set the IP range to match your infrastructure. By default, the installation program sets the platform to ovirt . However, installing a cluster on user-provisioned infrastructure is more similar to installing a cluster on bare metal. Therefore, you delete the ovirt platform section from install-config.yaml and change the platform to none . Instead, you use inventory.yml to specify all of the required settings. Note These snippets work with Python 3 and Python 2. Procedure Set the number of compute nodes to zero replicas: USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) conf["compute"][0]["replicas"] = 0 open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Set the IP range of the machine network. For example, to set the range to 172.16.0.0/16 , enter: USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) conf["networking"]["machineNetwork"][0]["cidr"] = "172.16.0.0/16" open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Remove the ovirt section and change the platform to none : USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) platform = conf["platform"] del platform["ovirt"] platform["none"] = {} open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Warning Red Hat Virtualization does not currently support installation with user-provisioned infrastructure on the oVirt platform. Therefore, you must set the platform to none , allowing OpenShift Container Platform to identify each node as a bare-metal node and the cluster as a bare-metal cluster. This is the same as installing a cluster on any platform , and has the following limitations: There will be no cluster provider so you must manually add each machine and there will be no node scaling capabilities. The oVirt CSI driver will not be installed and there will be no CSI capabilities. 4.15. Generate manifest files Use the installation program to generate a set of manifest files in the assets directory. The command to generate the manifest files displays a warning message before it consumes the install-config.yaml file. If you plan to reuse the install-config.yaml file, create a backup copy of it before you back it up before you generate the manifest files. Procedure Optional: Create a backup copy of the install-config.yaml file: USD cp install-config.yaml install-config.yaml.backup Generate a set of manifests in your assets directory: USD openshift-install create manifests --dir USDASSETS_DIR This command displays the following messages. Example output INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings The command generates the following manifest files: Example output USD tree . 
└── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml steps Make control plane nodes non-schedulable. 4.16. Making control-plane nodes non-schedulable Because you are manually creating and deploying the control plane machines, you must configure a manifest file to make the control plane nodes non-schedulable. Procedure To make the control plane nodes non-schedulable, enter: USD python3 -c 'import os, yaml path = "%s/manifests/cluster-scheduler-02-config.yml" % os.environ["ASSETS_DIR"] data = yaml.safe_load(open(path)) data["spec"]["mastersSchedulable"] = False open(path, "w").write(yaml.dump(data, default_flow_style=False))' 4.17. Building the Ignition files To build the Ignition files from the manifest files you just generated and modified, you run the installation program. This action creates a Red Hat Enterprise Linux CoreOS (RHCOS) machine, initramfs , which fetches the Ignition files and performs the configurations needed to create a node. In addition to the Ignition files, the installation program generates the following: An auth directory that contains the admin credentials for connecting to the cluster with the oc and kubectl utilities. A metadata.json file that contains information such as the OpenShift Container Platform cluster name, cluster ID, and infrastructure ID for the current installation. The Ansible playbooks for this installation process use the value of infraID as a prefix for the virtual machines they create. This prevents naming conflicts when there are multiple installations in the same oVirt/RHV cluster. Note Certificates in Ignition configuration files expire after 24 hours. Complete the cluster installation and keep the cluster running in a non-degraded state for 24 hours so that the first certificate rotation can finish. Procedure To build the Ignition files, enter: USD openshift-install create ignition-configs --dir USDASSETS_DIR Example output USD tree . └── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign 4.18. Creating templates and virtual machines After confirming the variables in the inventory.yml , you run the first Ansible provisioning playbook, create-templates-and-vms.yml . This playbook uses the connection parameters for the RHV Manager from USDHOME/.ovirt/ovirt-config.yaml and reads metadata.json in the assets directory. 
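Before you run the playbook, you can optionally confirm that both of these inputs are in place. The following check is only a sketch; it assumes the default ovirt-config.yaml location and the ASSETS_DIR environment variable that you exported earlier in this chapter:

# Optional pre-flight check before running create-templates-and-vms.yml
test -f "$HOME/.ovirt/ovirt-config.yaml" && echo "Manager connection parameters found"
test -f "$ASSETS_DIR/metadata.json" && echo "metadata.json found in the assets directory"

If either file is missing, rerun the corresponding earlier step: the installation program writes ovirt-config.yaml when you create the install config file, and metadata.json when you build the Ignition files.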
If a local Red Hat Enterprise Linux CoreOS (RHCOS) image is not already present, the playbook downloads one from the URL you specified for image_url in inventory.yml . It extracts the image and uploads it to RHV to create templates. The playbook creates a template based on the control_plane and compute profiles in the inventory.yml file. If these profiles have different names, it creates two templates. When the playbook finishes, the virtual machines it creates are stopped. You can get information from them to help configure other infrastructure elements. For example, you can get the virtual machines' MAC addresses to configure DHCP to assign permanent IP addresses to the virtual machines. Procedure In inventory.yml , under the control_plane and compute variables, change both instances of type: high_performance to type: server . Optional: If you plan to perform multiple installations to the same cluster, create different templates for each OpenShift Container Platform installation. In the inventory.yml file, prepend the value of template with infraID . For example: control_plane: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: "{{ metadata.infraID }}-rhcos_tpl" operating_system: "rhcos_x64" ... Create the templates and virtual machines: USD ansible-playbook -i inventory.yml create-templates-and-vms.yml 4.19. Creating the bootstrap machine You create a bootstrap machine by running the bootstrap.yml playbook. This playbook starts the bootstrap virtual machine, and passes it the bootstrap.ign Ignition file from the assets directory. The bootstrap node configures itself so it can serve Ignition files to the control plane nodes. To monitor the bootstrap process, you use the console in the RHV Administration Portal or connect to the virtual machine by using SSH. Procedure Create the bootstrap machine: USD ansible-playbook -i inventory.yml bootstrap.yml Connect to the bootstrap machine using a console in the Administration Portal or SSH. Replace <bootstrap_ip> with the bootstrap node IP address. To use SSH, enter: USD ssh core@<boostrap.ip> Collect bootkube.service journald unit logs for the release image service from the bootstrap node: [core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. 4.20. Creating the control plane nodes You create the control plane nodes by running the masters.yml playbook. This playbook passes the master.ign Ignition file to each of the virtual machines. The Ignition file contains a directive for the control plane node to get the Ignition from a URL such as https://api-int.ocp4.example.org:22623/config/master . The port number in this URL is managed by the load balancer, and is accessible only inside the cluster. Procedure Create the control plane nodes: USD ansible-playbook -i inventory.yml masters.yml While the playbook creates your control plane, monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR Example output INFO API v1.25.0 up INFO Waiting up to 40m0s for bootstrapping to complete... When all the pods on the control plane nodes and etcd are up and running, the installation program displays the following output. 
Example output INFO It is now safe to remove the bootstrap resources 4.21. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 4.22. Removing the bootstrap machine After the wait-for command shows that the bootstrap process is complete, you must remove the bootstrap virtual machine to free up compute, memory, and storage resources. Also, remove settings for the bootstrap machine from the load balancer directives. Procedure To remove the bootstrap machine from the cluster, enter: USD ansible-playbook -i inventory.yml retire-bootstrap.yml Remove settings for the bootstrap machine from the load balancer directives. 4.23. Creating the worker nodes and completing the installation Creating worker nodes is similar to creating control plane nodes. However, worker nodes workers do not automatically join the cluster. To add them to the cluster, you review and approve the workers' pending CSRs (Certificate Signing Requests). After approving the first requests, you continue approving CSR until all of the worker nodes are approved. When you complete this process, the worker nodes become Ready and can have pods scheduled to run on them. Finally, monitor the command line to see when the installation process completes. Procedure Create the worker nodes: USD ansible-playbook -i inventory.yml workers.yml To list all of the CSRs, enter: USD oc get csr -A Eventually, this command displays one CSR per node. For example: Example output NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-wcbrf 7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending To filter the list and see only pending CSRs, enter: USD watch "oc get csr -A | grep pending -i" This command refreshes the output every two seconds and displays only pending CSRs. 
For example: Example output Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending Inspect each pending request. For example: Example output USD oc describe csr csr-m724n Example output Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none> If the CSR information is correct, approve the request: USD oc adm certificate approve csr-m724n Wait for the installation process to finish: USD openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug When the installation completes, the command line displays the URL of the OpenShift Container Platform web console and the administrator user name and password. 4.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service | [
"curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2",
"curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api",
"dnf update python3 ansible",
"dnf install ovirt-ansible-image-template",
"dnf install ovirt-ansible-vm-infra",
"export ASSETS_DIR=./wrk",
"ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir playbooks",
"cd playbooks",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/common-auth.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/create-templates-and-vms.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/inventory.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/retire-bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/retire-masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/retire-workers.yml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/ovirt/workers.yml'",
"--- all: vars: ovirt_cluster: \"Default\" ocp: assets_dir: \"{{ lookup('env', 'ASSETS_DIR') }}\" ovirt_config_path: \"{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml\" # --- # {op-system} section # --- rhcos: image_url: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/latest/rhcos-openstack.x86_64.qcow2.gz\" local_cmp_image_path: \"/tmp/rhcos.qcow2.gz\" local_image_path: \"/tmp/rhcos.qcow2\" # --- # Profiles section # --- control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: \"{{ metadata.infraID }}-bootstrap\" ocp_type: bootstrap profile: \"{{ control_plane }}\" type: server - name: \"{{ metadata.infraID }}-master0\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master1\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master2\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-worker0\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker1\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker2\" ocp_type: worker profile: \"{{ compute }}\"",
"--- - name: include metadata.json vars include_vars: file: \"{{ ocp.assets_dir }}/metadata.json\" name: metadata",
"rhcos: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/latest/rhcos-openstack.x86_64.qcow2.gz\"",
"openshift-install create install-config --dir USDASSETS_DIR",
"? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>",
"? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>",
"python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"compute\"][0][\"replicas\"] = 0 open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'",
"python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"networking\"][\"machineNetwork\"][0][\"cidr\"] = \"172.16.0.0/16\" open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'",
"python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) platform = conf[\"platform\"] del platform[\"ovirt\"] platform[\"none\"] = {} open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'",
"cp install-config.yaml install-config.yaml.backup",
"openshift-install create manifests --dir USDASSETS_DIR",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings",
"tree . └── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml",
"python3 -c 'import os, yaml path = \"%s/manifests/cluster-scheduler-02-config.yml\" % os.environ[\"ASSETS_DIR\"] data = yaml.safe_load(open(path)) data[\"spec\"][\"mastersSchedulable\"] = False open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"openshift-install create ignition-configs --dir USDASSETS_DIR",
"tree . └── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: \"{{ metadata.infraID }}-rhcos_tpl\" operating_system: \"rhcos_x64\"",
"ansible-playbook -i inventory.yml create-templates-and-vms.yml",
"ansible-playbook -i inventory.yml bootstrap.yml",
"ssh core@<boostrap.ip>",
"[core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service",
"ansible-playbook -i inventory.yml masters.yml",
"openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR",
"INFO API v1.25.0 up INFO Waiting up to 40m0s for bootstrapping to complete",
"INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"ansible-playbook -i inventory.yml retire-bootstrap.yml",
"ansible-playbook -i inventory.yml workers.yml",
"oc get csr -A",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-wcbrf 7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"watch \"oc get csr -A | grep pending -i\"",
"Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc describe csr csr-m724n",
"Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none>",
"oc adm certificate approve csr-m724n",
"openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_rhv/installing-rhv-user-infra |
Part I. Basic System Configuration | Part I. Basic System Configuration This part covers basic post-installation tasks and basic system administration tasks such as keyboard configuration, date and time configuration, managing users and groups, and gaining privileges. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/part-basic_system_configuration |
Chapter 2. Configuring ModSecurity on RHEL | Chapter 2. Configuring ModSecurity on RHEL When you install Red Hat JBoss Core Services on Red Hat Enterprise Linux (RHEL), you can configure the ModSecurity module to function as a web application firewall (WAF) for the Apache HTTP Server. Note JBCS 2.4.57 does not currently provide an archive file distribution of the Apache HTTP Server for RHEL 9. 2.1. ModSecurity dependencies on RHEL ModSecurity has several dependencies to function successfully. Some of these dependencies are already included as a part of Red Hat JBoss Core Services. The following table provides a list of ModSecurity dependencies: Dependency Part of JBCS on RHEL? Apache Portable Runtimes (APR) Yes APR-Util Yes mod_unique_id Yes libcurl Yes Perl-Compatible Regular Expressions (PCRE) Yes libxml2 No Note On RHEL, Red Hat JBoss Core Services includes all of these dependencies except the libxml2 library. 2.2. ModSecurity installation on RHEL The ModSecurity module is included as part of a Red Hat JBoss Core Services installation. You can follow the procedures in the Red Hat JBoss Core Services Apache HTTP Server Installation Guide to download and install the Apache HTTP Server for your operating system. Additional resources Red Hat JBoss Core Services Apache HTTP Server Installation Guide 2.3. Loading ModSecurity You can load the ModSecurity module by using the LoadModule command. Procedure To load the ModSecurity module, enter the following command: 2.4. Configuring the rules directory on RHEL ModSecurity functionality requires that you create rules that the system uses. Apache HTTP Server provides a preconfigured mod_security.conf.sample file in the HTTPD_HOME /modsecurity.d directory. To use ModSecurity rules, you must modify the mod_security.conf.sample file with settings that are appropriate for your environment. You can store the ModSecurity rules in the modsecurity.d directory or the modsecurity.d/activated_rules subdirectory. Procedure Go to the HTTPD_HOME /modsecurity.d directory. Rename the mod_security.conf.sample file to mod_security.conf : Open the mod_security.conf file and specify parameters for all the configuration directives that you want to use with the ModSecurity rules. 2.5. Key ModSecurity configuration options You can use key ModSecurity configuration options to improve the performance of regular expressions, investigate ModSecurity 2.6 phase one moving to phase two hook, and allow use of certain directives in .htaccess files. enable-pcre-jit Enables Just-In-Time (JIT) compiler support in the Perl-Compatible Regular Expressions (PCRE) library 8.20 or later to improve the performance of regular expressions. enable-request-early Enables testing of the ModSecurity 2.6 move from phase one to phase two hook enable-htaccess-config Enables use of directives in .htaccess files when AllowOverride Options is set | [
"LoadModule security2_module modules/mod_security2.so",
"mv mod_security.conf.sample ./mod_security.conf"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_modsecurity_guide/assembly_configuring-modsecurity-on-rhel_jbcs-mod_sec-guide |
Chapter 3. Standalone upgrade | Chapter 3. Standalone upgrade In general, Red Hat Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Red Hat Quay 3.0.5 to the latest version of 3.5 is not supported. Instead, users would have to upgrade as follows: 3.0.5 → 3.1.3 3.1.3 → 3.2.2 3.2.2 → 3.3.4 3.3.4 → 3.4.z 3.4.z → 3.5.z This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade. In some cases, Red Hat Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This exception to the normal, prior minor version-only, upgrade simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for Red Hat Quay 3.10: 3.7.z → 3.10.z 3.8.z → 3.10.z 3.9.z → 3.10.z For users wanting to upgrade the Red Hat Quay Operator, see Upgrading the Red Hat Quay Operator Overview . This document describes the steps needed to perform each individual upgrade. Determine your current version and then follow the steps in sequential order, starting with your current version and working up to your desired target version. Upgrade to 3.10.z from 3.9.z Upgrade to 3.10.z from 3.8.z Upgrade to 3.10.z from 3.7.z Upgrade to 3.9.z from 3.8.z Upgrade to 3.9.z from 3.7.z Upgrade to 3.8.z from 3.7.z Upgrade to 3.7.z from 3.6.z Upgrade to 3.7.z from 3.5.z Upgrade to 3.7.z from 3.4.z Upgrade to 3.7.z from 3.3.z Upgrade to 3.6.z from 3.5.z Upgrade to 3.6.z from 3.4.z Upgrade to 3.6.z from 3.3.z Upgrade to 3.5.z from 3.4.z Upgrade to 3.4.z from 3.3.4 Upgrade to 3.3.4 from 3.2.2 Upgrade to 3.2.2 from 3.1.3 Upgrade to 3.1.3 from 3.0.5 Upgrade to 3.0.5 from 2.9.5 See the Red Hat Quay Release Notes for information on features for individual releases. The general procedure for a manual upgrade consists of the following steps: Stop the Quay and Clair containers. Back up the database and image storage (optional but recommended). Start Clair using the new version of the image. Wait until Clair is ready to accept connections before starting the new version of Quay. 3.1. Accessing images Images for Quay 3.4.0 and later are available from registry.redhat.io and registry.access.redhat.com , with authentication set up as described in Red Hat Container Registry Authentication . Images for Quay 3.3.4 and earlier are available from quay.io , with authentication set up as described in Accessing Red Hat Quay without a CoreOS login . 3.2. Upgrade to 3.10.z from 3.9.z 3.2.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.10.9 Clair: registry.redhat.io/quay/clair-rhel8:v3.10.9 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.3. Upgrade to 3.10.z from 3.8.z 3.3.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.10.9 Clair: registry.redhat.io/quay/clair-rhel8:v3.10.9 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.4. Upgrade to 3.10.z from 3.7.z 3.4.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.10.9 Clair: registry.redhat.io/quay/clair-rhel8:v3.10.9 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.5. Upgrade to 3.9.z from 3.8.z If you are upgrading your standalone Red Hat Quay deployment from 3.8.z to 3.9, it is highly recommended that you upgrade PostgreSQL from version 10 to 13.
To upgrade PostgreSQL from 10 to 13, you must bring down your PostgreSQL 10 database and run a migration script to initiate the process. Use the following procedure to upgrade PostgreSQL from 10 to 13 on a standalone Red Hat Quay deployment. Procedure Enter the following command to scale down the Red Hat Quay container: $ sudo podman stop <quay_container_name> Optional. If you are using Clair, enter the following command to stop the Clair container: $ sudo podman stop <clair_container_id> Run the Podman process from SCLOrg's Data Migration procedure, which allows for data migration from a remote PostgreSQL server: $ sudo podman run -d --name <migration_postgresql_database> 1 -e POSTGRESQL_MIGRATION_REMOTE_HOST=172.17.0.2 \ 2 -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword \ -v </host/data/directory:/var/lib/pgsql/data:Z> 3 [ OPTIONAL_CONFIGURATION_VARIABLES ] rhel8/postgresql-13 1 The name of your PostgreSQL 13 migration database. 2 Your current Red Hat Quay PostgreSQL 13 database container IP address. Can be obtained by running the following command: sudo podman inspect -f "{{.NetworkSettings.IPAddress}}" postgresql-quay . 3 You must specify a different volume mount point than the one from your initial PostgreSQL 10 deployment, and modify the access control lists for said directory. For example: $ mkdir -p /host/data/directory $ setfacl -m u:26:-wx /host/data/directory This prevents data from being overwritten by the new container. Optional. If you are using Clair, repeat the previous step for the Clair PostgreSQL database container. Stop the PostgreSQL 10 container: $ sudo podman stop <postgresql_container_name> After completing the PostgreSQL migration, run the PostgreSQL 13 container, using the new data volume mount from Step 3, for example, </host/data/directory:/var/lib/pgsql/data> : $ sudo podman run -d --rm --name postgresql-quay \ -e POSTGRESQL_USER=<username> \ -e POSTGRESQL_PASSWORD=<password> \ -e POSTGRESQL_DATABASE=<quay_database_name> \ -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> \ -p 5432:5432 \ -v </host/data/directory:/var/lib/pgsql/data:Z> \ registry.redhat.io/rhel8/postgresql-13:1-109 Optional. If you are using Clair, repeat the previous step for the Clair PostgreSQL database container. Start the Red Hat Quay container: $ sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay \ -v /home/<quay_user>/quay-poc/config:/conf/stack:Z \ -v /home/<quay_user>/quay-poc/storage:/datastorage:Z \ {productrepo}/{quayimage}:{productminv} Optional. Restart the Clair container, for example: $ sudo podman run -d --name clairv4 \ -p 8081:8081 -p 8088:8088 \ -e CLAIR_CONF=/clair/config.yaml \ -e CLAIR_MODE=combo \ registry.redhat.io/quay/clair-rhel8:v3.9.0 For more information, see Data Migration . 3.5.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.9.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.9.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.6. Upgrade to 3.9.z from 3.7.z If you are upgrading your standalone Red Hat Quay deployment from 3.7.z to 3.9, it is highly recommended that you upgrade PostgreSQL from version 10 to 13. To upgrade PostgreSQL from 10 to 13, you must bring down your PostgreSQL 10 database and run a migration script to initiate the process: Note When upgrading from Red Hat Quay 3.7 to 3.9, you might receive the following error: pg_dumpall: error: query failed: ERROR: xlog flush request 1/B446CCD8 is not satisfied --- flushed only to 1/B0013858 .
As a workaround to this issue, you can delete the quayregistry-clair-postgres-upgrade job on your OpenShift Container Platform deployment, which should resolve the issue. 3.6.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.9.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.9.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.7. Upgrade to 3.8.z from 3.7.z 3.7.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.8.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.8.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.8. Upgrade to 3.7.z from 3.6.z 3.8.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.7.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.7.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.9. Upgrade to 3.7.z from 3.5.z 3.9.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.7.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.7.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.10. Upgrade to 3.7.z from 3.4.z 3.10.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.7.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.7.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.11. Upgrade to 3.7.z from 3.3.z Upgrading to Red Hat Quay 3.7 from 3.3. is unsupported. Users must first upgrade to 3.6 from 3.3, and then upgrade to 3.7. For more information, see Upgrade to 3.6.z from 3.3.z . 3.12. Upgrade to 3.6.z from 3.5.z 3.12.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.6.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.6.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.13. Upgrade to 3.6.z from 3.4.z Note Red Hat Quay 3.6 supports direct, single-step upgrade from 3.4.z. This exception to the normal, prior minor version-only, upgrade simplifies the upgrade procedure for customers on older releases. Upgrading to Red Hat Quay 3.6 from 3.4.z requires a database migration which does not support downgrading back to a prior version of Red Hat Quay. Please back up your database before performing this migration. Users will also need to configure a completely new Clair v4 instance to replace the old Clair v2 when upgrading from 3.4.z. For instructions on configuring Clair v4, see Setting up Clair on a non-OpenShift Red Hat Quay deployment . 3.13.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.6.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.6.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.14. Upgrade to 3.6.z from 3.3.z Note Red Hat Quay 3.6 supports direct, single-step upgrade from 3.3.z. This exception to the normal, prior minor version-only, upgrade simplifies the upgrade procedure for customers on older releases. Upgrading to Red Hat Quay 3.6.z from 3.3.z requires a database migration which does not support downgrading back to a prior version of Red Hat Quay. Please back up your database before performing this migration. Users will also need to configure a completely new Clair v4 instance to replace the old Clair v2 when upgrading from 3.3.z. For instructions on configuring Clair v4, see Setting up Clair on a non-OpenShift Red Hat Quay deployment . 3.14.1. 
Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.6.0 Clair: registry.redhat.io/quay/clair-rhel8:v3.6.0 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.14.2. Swift configuration when upgrading from 3.3.z to 3.6 When upgrading from Red Hat Quay 3.3.z to 3.6.z, some users might receive the following error: Switch auth v3 requires tenant_id (string) in os_options . As a workaround, you can manually update your DISTRIBUTED_STORAGE_CONFIG to add the os_options and tenant_id parameters: DISTRIBUTED_STORAGE_CONFIG: brscale: - SwiftStorage - auth_url: http://****/v3 auth_version: "3" os_options: tenant_id: **** project_name: ocp-base user_domain_name: Default storage_path: /datastorage/registry swift_container: ocp-svc-quay-ha swift_password: ***** swift_user: ***** 3.15. Upgrade to 3.5.7 from 3.4.z 3.15.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.5.7 Clair: registry.redhat.io/quay/clair-rhel8:v3.10.9 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.16. Upgrade to 3.4.6 from 3.3.z Upgrading to Quay 3.4 requires a database migration which does not support downgrading back to a prior version of Quay. Please back up your database before performing this migration. 3.16.1. Target images Quay: registry.redhat.io/quay/quay-rhel8:v3.4.6 Clair: registry.redhat.io/quay/clair-rhel8:v3.10.9 PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109 Redis: registry.redhat.io/rhel8/redis-6:1-110 3.17. Upgrade to 3.3.4 from 3.2.z 3.17.1. Target images Quay: quay.io/redhat/quay:v3.3.4 Clair: registry.redhat.io/quay/clair-rhel8:v3.10.9 PostgreSQL: rhscl/postgresql-96-rhel7 Redis: registry.access.redhat.com/rhscl/redis-32-rhel7 3.18. Upgrade to 3.2.2 from 3.1.z Once your cluster is running any Red Hat Quay 3.1.z version, to upgrade your cluster to 3.2.2 you must bring down your entire cluster and make a small change to the configuration before bringing it back up with the 3.2.2 version. Warning Once you set the value of DATABASE_SECRET_KEY in this procedure, do not ever change it. If you do so, then existing robot accounts, API tokens, etc. cannot be used anymore. You would have to create a new robot account and API tokens to use with Quay. Take all hosts in the Red Hat Quay cluster out of service. Generate some random data to use as a database secret key. For example: Add a new DATABASE_SECRET_KEY field to your config.yaml file. For example: Note For an OpenShift installation, the config.yaml file is stored as a secret. Bring up one Quay container to complete the migration to 3.2.2. Once the migration is done, make sure the same config.yaml is available on all nodes and bring up the new quay 3.2.2 service on those nodes. Start 3.0.z versions of quay-builder and Clair to replace any instances of those containers you want to return to your cluster. 3.18.1. Target images Quay: quay.io/redhat/quay:v3.2.2 Clair: registry.redhat.io/quay/clair-rhel8:v3.10.9 PostgreSQL: rhscl/postgresql-96-rhel7 Redis: registry.access.redhat.com/rhscl/redis-32-rhel7 3.19. Upgrade to 3.1.3 from 3.0.z 3.19.1. Target images Quay: quay.io/redhat/quay:v3.1.3 Clair: registry.redhat.io/quay/clair-rhel8:v3.10.9 PostgreSQL: rhscl/postgresql-96-rhel7 Redis: registry.access.redhat.com/rhscl/redis-32-rhel7 3.20.
Upgrade to 3.0.5 from 2.9.5 For the 2.9.5 to 3.0.5 upgrade, you can either do the whole upgrade with Red Hat Quay down (synchronous upgrade) or only bring down Red Hat Quay for a few minutes and have the bulk of the upgrade continue with Red Hat Quay running (background upgrade). A background upgrade could take longer to complete, depending on how many tags need to be processed. However, there is less total downtime. The downside of a background upgrade is that you will not have access to the latest features until the upgrade completes. The cluster runs from the Quay v3 container in v2 compatibility mode until the upgrade is complete. 3.20.1. Overview of upgrade Follow the procedure below if you are starting with a Red Hat Quay 2.y.z cluster. Before upgrading to the latest Red Hat Quay 3.x version, you must first migrate that cluster to 3.0.5, as described here . Once your cluster is running 3.0.5, you can then upgrade to the latest 3.x version by sequentially upgrading to each minor version in turn. For example: 3.0.5 → 3.1.3 3.1.3 → 3.2.2 3.2.2 → 3.3.4 3.3.4 → 3.4.z Before beginning your Red Hat Quay 2.y.z to 3.0 upgrade, please note the following: Synchronous upgrade : For a synchronous upgrade, expect less than one hour of total downtime for small installations. Consider a small installation to contain a few thousand container image tags or fewer. For that size installation, you could probably get by with just a couple hours of scheduled downtime. The entire Red Hat Quay service is down for the duration, so if you were to try a synchronous upgrade on a registry with millions of tags, you could potentially be down for several days. Background upgrade : For a background upgrade (also called a compatibility mode upgrade), after a short shutdown your Red Hat Quay cluster upgrade runs in the background. For large Red Hat Quay registries, this could take weeks to complete, but the cluster continues to operate in v2 mode for the duration of the upgrade. As a point of reference, one Red Hat Quay v3 upgrade took four days to process approximately 30 million tags across six machines. Full features on completion : Before you have access to features associated with Docker version 2, schema 2 changes (such as support for containers of different architectures), the entire migration must complete. Other v3 features are immediately available when you switch over. Upgrade complete : When the upgrade is complete, you need to set V3_UPGRADE_MODE: complete in the Red Hat Quay config.yaml file for the new features to be available. All new Red Hat Quay v3 installations automatically have that set. 3.20.2. Prerequisites To assure the best results, we recommend the following prerequisites: Back up your Red Hat Quay database before starting the upgrade (doing regular backups is a general best practice). A good time to do this is right after you have taken down the Red Hat Quay cluster to do the upgrade. Back up your storage (also a general best practice). Upgrade your current Red Hat Quay 2.y.z setup to the latest 2.9.z version (currently 2.9.5) before starting the v3 upgrade. To do that: While the Red Hat Quay cluster is still running, take one node and change the Quay container on that system to a Quay container that is running the latest 2.9.z version. Wait for all the database migrations to run, bringing the database up to the latest 2.9.z version. This should only take a few minutes to half an hour. Once that is done, replace the Quay container on all the existing nodes with the same latest 2.9.z version.
With the entire Red Hat Quay cluster on the new version, you can proceed to the v3 upgrade. 3.20.3. Choosing upgrade type Choose between a synchronous upgrade (complete the upgrade in downtime) and a background upgrade (complete the upgrade while Red Hat Quay is still running). Both of these major-release upgrades require that the Red Hat Quay cluster be down for at least a short period of time. Regardless of which upgrade type you choose, during the time that the Red Hat Quay cluster is down, if you are using builder and Clair images, you need to also upgrade to those new images: Builder : quay.io/redhat/quay-builder:v3.0.5 Clair : quay.io/redhat/clair-jwt:v3.0.5 Both of those images are available from the registry.redhat.io/quay repository. 3.20.4. Running a synchronous upgrade To run a synchronous upgrade, where your whole cluster is down for the entire upgrade, do the following: Take down your entire Red Hat Quay cluster, including any quay-builder and Clair containers. Add the following setting to the config.yaml file on all nodes: V3_UPGRADE_MODE: complete Pull and start up the v3 container on a single node and wait for however long it takes to do the upgrade (it will take a few minutes). Use the following container or later: Quay : quay.io/redhat/quay:v3.0.5 Note that the Quay container comes up on ports 8080 and 8443 for Red Hat Quay 3, instead of 80 and 443, as they did for Red Hat Quay 2. Therefore, we recommend remapping 8080 and 8443 into 80 and 443, respectively, as shown in this example: After the upgrade completes, bring the Red Hat Quay 3 container up on all other nodes. Start 3.0.z versions of quay-builder and Clair to replace any instances of those containers you want to return to your cluster. Verify that Red Hat Quay is working, including pushes and pulls of containers compatible with Docker version 2, schema 2. This can include windows container images and images of different computer architectures (arm, ppc, etc.). 3.20.5. Running a background upgrade To run a background upgrade, you need only bring down your cluster for a short period of time on two occasions. When you bring the cluster back up after the first downtime, the quay v3 container runs in v2 compatibility mode as it backfills the database. This background process can take hours or even days to complete. Background upgrades are recommended for large installations where downtime of more than a few hours would be a problem. For this type of upgrade, you put Red Hat Quay into a compatibility mode, where you have a Quay 3 container running, but it is running on the old data model while the upgrade completes. Here's what you do: Pull the Red Hat Quay 3 container to all the nodes. Use the following container or later: quay.io/redhat/quay:v3.0.5 Take down your entire Red Hat Quay cluster, including any quay-builder and Clair containers. Edit the config.yaml file on each node and set the upgrade mode to background as follows: V3_UPGRADE_MODE: background Bring the Red Hat Quay 3 container up on a single node and wait for the migrations to complete (should take a few minutes maximum). Here is an example of that command: Note that the Quay container comes up on ports 8080 and 8443 for Red Hat Quay 3, instead of 80 and 443, as they did for Red Hat Quay 2. Therefore, we recommend remapping 8080 and 8443 into 80 and 443, respectively, as shown in this example: Bring the Red Hat Quay 3 container up on all the other nodes. 
Monitor the /upgradeprogress API endpoint until it reports done enough to move to the next step (the status reaches 99%). For example, view https://myquay.example.com/upgradeprogress or use some other tool to query the API. Once the background process is far enough along, you have to schedule another maintenance window. During your scheduled maintenance, take the entire Red Hat Quay cluster down. Edit the config.yaml file on each node and set the upgrade mode to complete as follows: V3_UPGRADE_MODE: complete Bring Red Hat Quay back up on one node to have it do a final check. Once the final check is done, bring Red Hat Quay v3 back up on all the other nodes. Start 3.0.z versions of quay-builder and Clair to replace any instances of those containers you want to return to your cluster. Verify Quay is working, including pushes and pulls of containers compatible with Docker version 2, schema 2. This can include windows container images and images of different computer architectures (arm, ppc, etc.). 3.20.6. Target images Quay: quay.io/redhat/quay:v3.0.5 Clair: quay.io/redhat/clair-jwt:v3.0.5 Redis: registry.access.redhat.com/rhscl/redis-32-rhel7 PostgreSQL: rhscl/postgresql-96-rhel7 Builder: quay.io/redhat/quay-builder:v3.0.5 | [
"sudo podman stop <quay_container_name>",
"sudo podman stop <clair_container_id>",
"sudo podman run -d --name <migration_postgresql_database> 1 -e POSTGRESQL_MIGRATION_REMOTE_HOST=172.17.0.2 \\ 2 -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword -v </host/data/directory:/var/lib/pgsql/data:Z> 3 [ OPTIONAL_CONFIGURATION_VARIABLES ] rhel8/postgresql-13",
"mkdir -p /host/data/directory",
"setfacl -m u:26:-wx /host/data/directory",
"sudo podman stop <postgresql_container_name>",
"sudo podman run -d --rm --name postgresql-quay -e POSTGRESQL_USER=<username> -e POSTGRESQL_PASSWORD=<password> -e POSTGRESQL_DATABASE=<quay_database_name> -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> -p 5432:5432 -v </host/data/directory:/var/lib/pgsql/data:Z> registry.redhat.io/rhel8/postgresql-13:1-109",
"sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v /home/<quay_user>/quay-poc/config:/conf/stack:Z -v /home/<quay_user>/quay-poc/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}",
"sudo podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo registry.redhat.io/quay/clair-rhel8:v3.9.0",
"DISTRIBUTED_STORAGE_CONFIG: brscale: - SwiftStorage - auth_url: http://****/v3 auth_version: \"3\" os_options: tenant_id: **** project_name: ocp-base user_domain_name: Default storage_path: /datastorage/registry swift_container: ocp-svc-quay-ha swift_password: ***** swift_user: *****",
"openssl rand -hex 48 2d023adb9c477305348490aa0fd9c",
"DATABASE_SECRET_KEY: \"2d023adb9c477305348490aa0fd9c\"",
"docker run --restart=always -p 80:8080 -p 443:8443 --sysctl net.core.somaxconn=4096 --privileged=true -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d quay.io/redhat/quay:v3.0.5",
"docker run --restart=always -p 80:8080 -p 443:8443 --sysctl net.core.somaxconn=4096 --privileged=true -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d quay.io/redhat/quay:v3.0.5",
"V3_UPGRADE_MODE: complete"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/upgrade_red_hat_quay/standalone-upgrade |
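The chapter above repeatedly recommends backing up the database before upgrading and, for background upgrades, watching the /upgradeprogress endpoint. The following shell sketch shows one way to do both; the container name postgresql-quay and the hostname myquay.example.com are taken from the examples above, and the postgres superuser account is an assumption about the PostgreSQL image in use, so substitute your own values.
# Dump the Quay database from the running PostgreSQL container before starting the upgrade
$ sudo podman exec postgresql-quay pg_dumpall -U postgres > quay-db-backup-$(date +%F).sql
# During a background upgrade, poll the progress endpoint until the status is near 100%
$ curl -sk https://myquay.example.com/upgradeprogress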
5.364. xorg-x11-drv-mga | 5.364. xorg-x11-drv-mga 5.364.1. RHEA-2012:0940 - xorg-x11-drv-mga enhancement update Updated xorg-x11-drv-mga packages that add an enhancement are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-mga packages provide a video driver for Matrox G-series chipsets for the X.Org implementation of the X Window System. Enhancement BZ# 657580 RandR 1.2 support for G200-based graphics chipsets has been added. It allows dynamic reconfiguration of display settings to match the currently plugged-in monitor. This is particularly important on servers, as they often start with no monitor attached, with a monitor being attached later at runtime. All users of xorg-x11-drv-mga are advised to upgrade to these updated packages, which add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/xorg-x11-drv-mga
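A quick way to exercise the dynamic reconfiguration that RandR 1.2 enables is the xrandr utility from within the running X session; the output name VGA-1 below is only an example and depends on which G200 head your server reports.
$ xrandr --query                 # list the outputs and modes the driver currently detects
$ xrandr --output VGA-1 --auto   # switch a newly attached monitor to its preferred mode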
Chapter 43. Json Serialize Action | Chapter 43. Json Serialize Action Serialize payload to JSON 43.1. Configuration Options The json-serialize-action Kamelet does not specify any configuration option. 43.2. Dependencies At runtime, the json-serialize-action Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:core camel:jackson 43.3. Usage This section describes how you can use the json-serialize-action . 43.3.1. Knative Action You can use the json-serialize-action Kamelet as an intermediate step in a Knative binding. json-serialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: json-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-serialize-action sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 43.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 43.3.1.2. Procedure for using the cluster CLI Save the json-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f json-serialize-action-binding.yaml 43.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step json-serialize-action channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 43.3.2. Kafka Action You can use the json-serialize-action Kamelet as an intermediate step in a Kafka binding. json-serialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: json-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-serialize-action sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 43.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 43.3.2.2. Procedure for using the cluster CLI Save the json-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f json-serialize-action-binding.yaml 43.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step json-serialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 43.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/json-serialize-action.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: json-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-serialize-action sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f json-serialize-action-binding.yaml",
"kamel bind timer-source?message=Hello --step json-serialize-action channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: json-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-serialize-action sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f json-serialize-action-binding.yaml",
"kamel bind timer-source?message=Hello --step json-serialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/json-serialize-action |
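After applying either binding, it can be useful to confirm that the Camel K operator has turned it into a running integration. A minimal sketch, assuming the binding name used above and the current namespace; the exact resource kinds depend on the operator version installed.
$ oc get kameletbinding json-serialize-action-binding   # check the binding resource status
$ oc get integration                                     # the operator creates a matching Integration
$ kamel logs json-serialize-action-binding               # follow the log of the running integration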
Appendix B. Custom Network Properties | Appendix B. Custom Network Properties B.1. Explanation of bridge_opts Parameters Table B.1. bridge_opts parameters Parameter Description forward_delay Sets the time, in deciseconds, a bridge will spend in the listening and learning states. If no switching loop is discovered in this time, the bridge will enter forwarding state. This allows time to inspect the traffic and layout of the network before normal network operation. gc_timer Sets the garbage collection time, in deciseconds, after which the forwarding database is checked and cleared of timed-out entries. group_addr Set to zero when sending a general query. Set to the IP multicast address when sending a group-specific query, or group-and-source-specific query. group_fwd_mask Enables bridge to forward link local group addresses. Changing this value from the default will allow non-standard bridging behavior. hash_elasticity The maximum chain length permitted in the hash table. Does not take effect until the new multicast group is added. If this cannot be satisfied after rehashing, a hash collision occurs and snooping is disabled. hash_max The maximum amount of buckets in the hash table. This takes effect immediately and cannot be set to a value less than the current number of multicast group entries. Value must be a power of two. hello_time Sets the time interval, in deciseconds, between sending 'hello' messages, announcing bridge position in the network topology. Applies only if this bridge is the Spanning Tree root bridge. hello_timer Time, in deciseconds, since last 'hello' message was sent. max_age Sets the maximum time, in deciseconds, to receive a 'hello' message from another root bridge before that bridge is considered dead and takeover begins. multicast_last_member_count Sets the number of 'last member' queries sent to the multicast group after receiving a 'leave group' message from a host. multicast_last_member_interval Sets the time, in deciseconds, between 'last member' queries. multicast_membership_interval Sets the time, in deciseconds, that a bridge will wait to hear from a member of a multicast group before it stops sending multicast traffic to the host. multicast_querier Sets whether the bridge actively runs a multicast querier or not. When a bridge receives a 'multicast host membership' query from another network host, that host is tracked based on the time that the query was received plus the multicast query interval time. If the bridge later attempts to forward traffic for that multicast membership, or is communicating with a querying multicast router, this timer confirms the validity of the querier. If valid, the multicast traffic is delivered via the bridge's existing multicast membership table; if no longer valid, the traffic is sent via all bridge ports. Broadcast domains with, or expecting, multicast memberships should run at least one multicast querier for improved performance. multicast_querier_interval Sets the maximum time, in deciseconds, between last 'multicast host membership' query received from a host to ensure it is still valid. multicast_query_use_ifaddr Boolean. Defaults to '0', in which case the querier uses 0.0.0.0 as source address for IPv4 messages. Changing this sets the bridge IP as the source address. multicast_query_interval Sets the time, in deciseconds, between query messages sent by the bridge to ensure validity of multicast memberships.
At this time, or if the bridge is asked to send a multicast query for that membership, the bridge checks its own multicast querier state based on the time that a check was requested plus multicast_query_interval. If a multicast query for this membership has been sent within the last multicast_query_interval, it is not sent again. multicast_query_response_interval Length of time, in deciseconds, a host is allowed to respond to a query once it has been sent. Must be less than or equal to the value of the multicast_query_interval. multicast_router Allows you to enable or disable ports as having multicast routers attached. A port with one or more multicast routers will receive all multicast traffic. A value of 0 disables completely, a value of 1 enables the system to automatically detect the presence of routers based on queries, and a value of 2 enables ports to always receive all multicast traffic. multicast_snooping Toggles whether snooping is enabled or disabled. Snooping allows the bridge to listen to the network traffic between routers and hosts to maintain a map to filter multicast traffic to the appropriate links. This option allows the user to re-enable snooping if it was automatically disabled due to hash collisions, however snooping will not be re-enabled if the hash collision has not been resolved. multicast_startup_query_count Sets the number of queries sent out at startup to determine membership information. multicast_startup_query_interval Sets the time, in deciseconds, between queries sent out at startup to determine membership information. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/appe-Custom_Network_Properties
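The parameters in the table correspond to the Linux bridge attributes that the host exposes under sysfs, which is a convenient way to verify that a custom bridge_opts value was actually applied. A minimal sketch, using ovirtmgmt purely as an example bridge name; the timer values are in the deciseconds noted in the table.
$ cat /sys/class/net/ovirtmgmt/bridge/forward_delay          # current forward delay
$ cat /sys/class/net/ovirtmgmt/bridge/multicast_querier      # whether the bridge runs a querier
$ echo 400 > /sys/class/net/ovirtmgmt/bridge/forward_delay   # change a value directly for testing (as root)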
Chapter 2. Product features | Chapter 2. Product features Red Hat OpenShift AI provides several features for data scientists and IT operations administrators. 2.1. Features for data scientists Containers While tools such as JupyterLab already offer intuitive ways for data scientists to develop models on their machines, there are always inherent complexities involved with collaboration and sharing work. Moreover, using specialized hardware such as powerful GPUs can be very expensive when you have to buy and maintain your own. The Jupyter environment that is included with OpenShift AI lets you take your development environment anywhere you need it to be. Because all of the workloads are run as containers, collaboration is as easy as sharing an image with your team members, or even simply adding it to the list of default containers that they can use. As a result, GPUs and large amounts of memory are significantly more accessible, since you are no longer limited by what your laptop can support. Integration with third-party machine learning tools We have all run into situations where our favorite tools or services do not play well with one another. OpenShift AI is designed with flexibility in mind. You can use a wide range of open source and third-party tools with OpenShift AI. These tools support the complete machine learning lifecycle, from data engineering and feature extraction to model deployment and management. Collaboration on notebooks with Git Use Jupyter's Git interface to work collaboratively with others, and keep good track of the changes to your code. Securely built notebook images Choose from a default set of notebook images that are pre-configured with the tools and libraries that you need for model development. Software stacks, especially those involved in machine learning, tend to be complex systems. There are many modules and libraries in the Python ecosystem that can be used, so determining which versions of what libraries to use can be very challenging. OpenShift AI includes many packaged notebook images that have been built with insight from data scientists and recommendation engines. You can start new projects quickly on the right foot without worrying about downloading unproven and possibly insecure images from random upstream repositories. Custom workbench images In addition to workbench images provided and supported by Red Hat and independent software vendors (ISVs), you can configure custom workbench images that cater to your project's specific requirements. Data science pipelines OpenShift AI supports data science pipelines 2.0, for an efficient way of running your data science workloads. You can standardize and automate machine learning workflows that enable you to develop and deploy your data science models. Model serving As a data scientist, you can deploy your trained machine-learning models to serve intelligent applications in production. Deploying or serving a model makes the model's functions available as a service endpoint that can be used for testing or integration into applications. You have much control over how this serving is performed. Optimize your data science models with accelerators If you work with large data sets, you can optimize the performance of your data science models in OpenShift AI with NVIDIA graphics processing units (GPUs) or Intel Gaudi AI accelerators. Accelerators enable you to scale your work, reduce latency, and increase productivity. 2.2. 
Features for IT Operations administrators Manage users with an identity provider OpenShift AI supports the same authentication systems as your OpenShift cluster. By default, OpenShift AI is accessible to all users listed in your identity provider and those users do not need a separate set of credentials to access OpenShift AI. Optionally, you can limit the set of users who have access by creating an OpenShift group that specifies a subset of users. You can also create an OpenShift group that identifies the list of users who have administrator access to OpenShift AI. Manage resources with OpenShift Use your existing OpenShift knowledge to configure and manage resources for your OpenShift AI users. Control Red Hat usage data collection Choose whether to allow Red Hat to collect data about OpenShift AI usage in your cluster. Usage data collection is enabled by default when you install OpenShift AI on your OpenShift cluster. Apply autoscaling to your cluster to reduce usage costs Use the cluster autoscaler to adjust the size of your cluster to meet its current needs and optimize costs. Manage resource usage by stopping idle notebooks Reduce resource usage in your OpenShift AI deployment by automatically stopping notebook servers that have been idle for a period of time. Implement model-serving runtimes OpenShift AI provides support for model-serving runtimes. A model-serving runtime provides integration with a specified model server and the model frameworks that it supports. By default, OpenShift AI includes the OpenVINO Model Server runtime. However, if this runtime doesn't meet your needs (for example, if it doesn't support a particular model framework), you can add your own custom runtimes. Install in a disconnected environment OpenShift AI Self-Managed supports installation in a disconnected environment. Disconnected clusters are on a restricted network, typically behind a firewall and unable to reach the Internet. In this case, clusters cannot access the remote registries where Red Hat provided OperatorHub sources reside. In this case, you deploy the OpenShift AI Operator to a disconnected environment by using a private registry in which you have mirrored (copied) the relevant images. Manage accelerators Enable NVIDIA graphics processing units (GPUs) or Intel Gaudi AI accelerators in OpenShift AI and allow your data scientists to use compute-heavy workloads. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/introduction_to_red_hat_openshift_ai_cloud_service/product-features_intro |
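As a sketch of the user-management item above, the following oc commands create groups that can then be designated as the OpenShift AI user and administrator groups; the group names and user names here are assumptions, not required values.
$ oc adm groups new rhoai-users alice bob    # group limiting who may access OpenShift AI
$ oc adm groups new rhoai-admins carol       # group whose members get administrator access
$ oc adm groups add-users rhoai-users dave   # add another user to the access group later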
Creating and Managing Service Accounts | Creating and Managing Service Accounts Red Hat Customer Portal 1 Create and manage service accounts Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/creating_and_managing_service_accounts/index |
23.8. Time Zone Configuration | 23.8. Time Zone Configuration Set your time zone by selecting the city closest to your computer's physical location. Click on the map to zoom in to a particular geographical region of the world. Specify a time zone even if you plan to use NTP (Network Time Protocol) to maintain the accuracy of the system clock. From here, there are two ways for you to select your time zone: Using your mouse, click on the interactive map to select a specific city (represented by a yellow dot). A red X appears indicating your selection. You can also scroll through the list at the bottom of the screen to select your time zone. Using your mouse, click on a location to highlight your selection. Select System clock uses UTC . The system clock is a piece of hardware on your computer system. Red Hat Enterprise Linux uses the timezone setting to determine the offset between the local time and UTC on the system clock. This behavior is standard for systems that use UNIX, Linux, and similar operating systems. Click Next to proceed. Note To change your time zone configuration after you have completed the installation, use the Time and Date Properties Tool . Type the system-config-date command in a shell prompt to launch the Time and Date Properties Tool . If you are not root, it prompts you for the root password to continue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-timezone-s390
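If you prefer to adjust the time zone from a shell after installation instead of using the graphical Time and Date Properties Tool, the following is a minimal sketch for a Red Hat Enterprise Linux 6 system; America/New_York is used purely as an example zone.
# /etc/sysconfig/clock holds the zone name; edit it to read, for example:
#   ZONE="America/New_York"
$ cp /usr/share/zoneinfo/America/New_York /etc/localtime   # make the running system use that zone
$ date                                                     # confirm the new local time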
7.293. yum-rhn-plugin | 7.293. yum-rhn-plugin 7.293.1. RHBA-2013:0389 - yum-rhn-plugin bug fix update Updated yum-rhn-plugin packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The yum-rhn-plugin packages make it possible to receive content from Red Hat Network in yum. Bug Fixes BZ#789092 Previously, yum-rhn-plugin ignored the timeout value set for yum. In some scenarios with slow networking, this could cause yum to time out when communicating with Red Hat Network. Now, yum-rhn-plugin abides by the timeout set for all yum repositories. BZ#802636 Previously, the check-update utility could in certain cases incorrectly return a 0 error code if an error occurred. With this update, "1" is returned if an error occurs. BZ#824193 Prior to this update, applying automatic updates with the yum-rhn-plugin utility on a Red Hat Enterprise Linux 6 system could fail with an "empty transaction" error message. This was because the cached version of yum-rhn-plugin metadata was not up-to-date. With this update, yum-rhn-plugin downloads new metadata if available, ensuring that all packages are available for download. BZ# 830219 Previously, the messaging in yum-rhn-plugin was specific only to Red Hat Network Classic scenarios. This update clarifies what source yum-rhn-plugin is receiving updates from to reduce confusion. BZ# 831234 Prior to this update, yum-rhn-plugin did not correctly try the alternate server URLs provided if the first option failed. This update ensures that fail-over situations are handled correctly. All users of yum-rhn-plugin are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/yum-rhn-plugin
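Because the plug-in now honors the timeout configured for yum (BZ#789092), slow networks can be handled by raising that value in the [main] section of /etc/yum.conf; the 300-second figure below is an arbitrary example, not a recommended setting.
# /etc/yum.conf
#   [main]
#   timeout=300
$ yum check-update   # the RHN plug-in now waits up to the configured timeout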
Chapter 24. All configuration | Chapter 24. All configuration 24.1. Cache Value cache Defines the cache mechanism for high-availability. By default in production mode, an ispn cache is used to create a cluster between multiple server nodes. By default in development mode, a local cache disables clustering and is intended for development and testing purposes. CLI: --cache Env: KC_CACHE ispn (default), local cache-config-file Defines the file from which the cache configuration should be loaded. The configuration file is relative to the conf/ directory. CLI: --cache-config-file Env: KC_CACHE_CONFIG_FILE cache-embedded-authorization-max-count The maximum number of entries that can be stored in-memory by the authorization cache. CLI: --cache-embedded-authorization-max-count Env: KC_CACHE_EMBEDDED_AUTHORIZATION_MAX_COUNT cache-embedded-client-sessions-max-count The maximum number of entries that can be stored in-memory by the clientSessions cache. CLI: --cache-embedded-client-sessions-max-count Env: KC_CACHE_EMBEDDED_CLIENT_SESSIONS_MAX_COUNT Available only when embedded Infinispan clusters configured cache-embedded-keys-max-count The maximum number of entries that can be stored in-memory by the keys cache. CLI: --cache-embedded-keys-max-count Env: KC_CACHE_EMBEDDED_KEYS_MAX_COUNT cache-embedded-mtls-enabled Encrypts the network communication between Keycloak servers. CLI: --cache-embedded-mtls-enabled Env: KC_CACHE_EMBEDDED_MTLS_ENABLED true , false (default) cache-embedded-mtls-key-store-file The Keystore file path. The Keystore must contain the certificate to use by the TLS protocol. By default, it looks for cache-mtls-keystore.p12 under the conf/ directory. CLI: --cache-embedded-mtls-key-store-file Env: KC_CACHE_EMBEDDED_MTLS_KEY_STORE_FILE cache-embedded-mtls-key-store-password The password to access the Keystore. CLI: --cache-embedded-mtls-key-store-password Env: KC_CACHE_EMBEDDED_MTLS_KEY_STORE_PASSWORD cache-embedded-mtls-trust-store-file The Truststore file path. It should contain the trusted certificates or the Certificate Authority that signed the certificates. By default, it looks for cache-mtls-truststore.p12 under the conf/ directory. CLI: --cache-embedded-mtls-trust-store-file Env: KC_CACHE_EMBEDDED_MTLS_TRUST_STORE_FILE cache-embedded-mtls-trust-store-password The password to access the Truststore. CLI: --cache-embedded-mtls-trust-store-password Env: KC_CACHE_EMBEDDED_MTLS_TRUST_STORE_PASSWORD cache-embedded-offline-client-sessions-max-count The maximum number of entries that can be stored in-memory by the offlineClientSessions cache. CLI: --cache-embedded-offline-client-sessions-max-count Env: KC_CACHE_EMBEDDED_OFFLINE_CLIENT_SESSIONS_MAX_COUNT Available only when embedded Infinispan clusters configured cache-embedded-offline-sessions-max-count The maximum number of entries that can be stored in-memory by the offlineSessions cache. CLI: --cache-embedded-offline-sessions-max-count Env: KC_CACHE_EMBEDDED_OFFLINE_SESSIONS_MAX_COUNT Available only when embedded Infinispan clusters configured cache-embedded-realms-max-count The maximum number of entries that can be stored in-memory by the realms cache. CLI: --cache-embedded-realms-max-count Env: KC_CACHE_EMBEDDED_REALMS_MAX_COUNT cache-embedded-sessions-max-count The maximum number of entries that can be stored in-memory by the sessions cache.
CLI: --cache-embedded-sessions-max-count Env: KC_CACHE_EMBEDDED_SESSIONS_MAX_COUNT Available only when embedded Infinispan clusters configured cache-embedded-users-max-count The maximum number of entries that can be stored in-memory by the users cache. CLI: --cache-embedded-users-max-count Env: KC_CACHE_EMBEDDED_USERS_MAX_COUNT cache-metrics-histograms-enabled Enable histograms for metrics for the embedded caches. CLI: --cache-metrics-histograms-enabled Env: KC_CACHE_METRICS_HISTOGRAMS_ENABLED Available only when metrics are enabled true , false (default) cache-remote-host The hostname of the remote server for the remote store configuration. It replaces the host attribute of remote-server tag of the configuration specified via XML file (see cache-config-file option.). If the option is specified, cache-remote-username and cache-remote-password are required as well and the related configuration in XML file should not be present. CLI: --cache-remote-host Env: KC_CACHE_REMOTE_HOST cache-remote-password The password for the authentication to the remote server for the remote store. It replaces the password attribute of digest tag of the configuration specified via XML file (see cache-config-file option.). If the option is specified, cache-remote-username is required as well and the related configuration in XML file should not be present. CLI: --cache-remote-password Env: KC_CACHE_REMOTE_PASSWORD Available only when remote host is set cache-remote-port The port of the remote server for the remote store configuration. It replaces the port attribute of remote-server tag of the configuration specified via XML file (see cache-config-file option.). CLI: --cache-remote-port Env: KC_CACHE_REMOTE_PORT Available only when remote host is set 11222 (default) cache-remote-tls-enabled Enable TLS support to communicate with a secured remote Infinispan server. Recommended to be enabled in production. CLI: --cache-remote-tls-enabled Env: KC_CACHE_REMOTE_TLS_ENABLED Available only when remote host is set true (default), false cache-remote-username The username for the authentication to the remote server for the remote store. It replaces the username attribute of digest tag of the configuration specified via XML file (see cache-config-file option.). If the option is specified, cache-remote-password is required as well and the related configuration in XML file should not be present. CLI: --cache-remote-username Env: KC_CACHE_REMOTE_USERNAME Available only when remote host is set cache-stack Define the default stack to use for cluster communication and node discovery. This option only takes effect if cache is set to ispn . Default: udp. CLI: --cache-stack Env: KC_CACHE_STACK tcp , udp , kubernetes , ec2 , azure , google , or any 24.2. Config Value config-keystore Specifies a path to the KeyStore Configuration Source. CLI: --config-keystore Env: KC_CONFIG_KEYSTORE config-keystore-password Specifies a password to the KeyStore Configuration Source. CLI: --config-keystore-password Env: KC_CONFIG_KEYSTORE_PASSWORD config-keystore-type Specifies a type of the KeyStore Configuration Source. CLI: --config-keystore-type Env: KC_CONFIG_KEYSTORE_TYPE PKCS12 (default) 24.3. Database Value db 🛠 The database vendor. CLI: --db Env: KC_DB dev-file (default), dev-mem , mariadb , mssql , mysql , oracle , postgres db-driver 🛠 The fully qualified class name of the JDBC driver. If not set, a default driver is set accordingly to the chosen database. CLI: --db-driver Env: KC_DB_DRIVER db-password The password of the database user. 
CLI: --db-password Env: KC_DB_PASSWORD db-pool-initial-size The initial size of the connection pool. CLI: --db-pool-initial-size Env: KC_DB_POOL_INITIAL_SIZE db-pool-max-size The maximum size of the connection pool. CLI: --db-pool-max-size Env: KC_DB_POOL_MAX_SIZE 100 (default) db-pool-min-size The minimal size of the connection pool. CLI: --db-pool-min-size Env: KC_DB_POOL_MIN_SIZE db-schema The database schema to be used. CLI: --db-schema Env: KC_DB_SCHEMA db-url The full database JDBC URL. If not provided, a default URL is set based on the selected database vendor. For instance, if using postgres , the default JDBC URL would be jdbc:postgresql://localhost/keycloak . CLI: --db-url Env: KC_DB_URL db-url-database Sets the database name of the default JDBC URL of the chosen vendor. If the db-url option is set, this option is ignored. CLI: --db-url-database Env: KC_DB_URL_DATABASE db-url-host Sets the hostname of the default JDBC URL of the chosen vendor. If the db-url option is set, this option is ignored. CLI: --db-url-host Env: KC_DB_URL_HOST db-url-port Sets the port of the default JDBC URL of the chosen vendor. If the db-url option is set, this option is ignored. CLI: --db-url-port Env: KC_DB_URL_PORT db-url-properties Sets the properties of the default JDBC URL of the chosen vendor. Make sure to set the properties accordingly to the format expected by the database vendor, as well as appending the right character at the beginning of this property value. If the db-url option is set, this option is ignored. CLI: --db-url-properties Env: KC_DB_URL_PROPERTIES db-username The username of the database user. CLI: --db-username Env: KC_DB_USERNAME 24.4. Transaction Value transaction-xa-enabled 🛠 If set to true, XA datasources will be used. CLI: --transaction-xa-enabled Env: KC_TRANSACTION_XA_ENABLED true , false (default) 24.5. Feature Value features 🛠 Enables a set of one or more features. CLI: --features Env: KC_FEATURES account-api[:v1] , account[:v3] , admin-api[:v1] , admin-fine-grained-authz[:v1] , admin[:v2] , authorization[:v1] , cache-embedded-remote-store[:v1] , ciba[:v1] , client-policies[:v1] , client-secret-rotation[:v1] , client-types[:v1] , clusterless[:v1] , declarative-ui[:v1] , device-flow[:v1] , docker[:v1] , dpop[:v1] , dynamic-scopes[:v1] , fips[:v1] , hostname[:v2] , impersonation[:v1] , kerberos[:v1] , login[:v2,v1] , multi-site[:v1] , oid4vc-vci[:v1] , opentelemetry[:v1] , organization[:v1] , par[:v1] , passkeys[:v1] , persistent-user-sessions[:v1] , preview , recovery-codes[:v1] , scripts[:v1] , step-up-authentication[:v1] , token-exchange[:v1] , transient-users[:v1] , update-email[:v1] , web-authn[:v1] features-disabled 🛠 Disables a set of one or more features. CLI: --features-disabled Env: KC_FEATURES_DISABLED account , account-api , admin , admin-api , admin-fine-grained-authz , authorization , cache-embedded-remote-store , ciba , client-policies , client-secret-rotation , client-types , clusterless , declarative-ui , device-flow , docker , dpop , dynamic-scopes , fips , impersonation , kerberos , login , multi-site , oid4vc-vci , opentelemetry , organization , par , passkeys , persistent-user-sessions , preview , recovery-codes , scripts , step-up-authentication , token-exchange , transient-users , update-email , web-authn 24.6. Hostname v2 Value hostname Address at which is the server exposed. Can be a full URL, or just a hostname. When only hostname is provided, scheme, port and context path are resolved from the request. 
CLI: --hostname Env: KC_HOSTNAME Available only when hostname:v2 feature is enabled hostname-admin Address for accessing the administration console. Use this option if you are exposing the administration console using a reverse proxy on a different address than specified in the hostname option. CLI: --hostname-admin Env: KC_HOSTNAME_ADMIN Available only when hostname:v2 feature is enabled hostname-backchannel-dynamic Enables dynamic resolving of backchannel URLs, including hostname, scheme, port and context path. Set to true if your application accesses Keycloak via a private network. If set to true, hostname option needs to be specified as a full URL. CLI: --hostname-backchannel-dynamic Env: KC_HOSTNAME_BACKCHANNEL_DYNAMIC Available only when hostname:v2 feature is enabled true , false (default) hostname-debug Toggles the hostname debug page that is accessible at /realms/master/hostname-debug. CLI: --hostname-debug Env: KC_HOSTNAME_DEBUG Available only when hostname:v2 feature is enabled true , false (default) hostname-strict Disables dynamically resolving the hostname from request headers. Should always be set to true in production, unless your reverse proxy overwrites the Host header. If enabled, the hostname option needs to be specified. CLI: --hostname-strict Env: KC_HOSTNAME_STRICT Available only when hostname:v2 feature is enabled true (default), false 24.7. HTTP(S) Value http-enabled Enables the HTTP listener. CLI: --http-enabled Env: KC_HTTP_ENABLED true , false (default) http-host The used HTTP Host. CLI: --http-host Env: KC_HTTP_HOST 0.0.0.0 (default) http-max-queued-requests Maximum number of queued HTTP requests. Use this to shed load in an overload situation. Excess requests will return a "503 Server not Available" response. CLI: --http-max-queued-requests Env: KC_HTTP_MAX_QUEUED_REQUESTS http-metrics-histograms-enabled Enables a histogram with default buckets for the duration of HTTP server requests. CLI: --http-metrics-histograms-enabled Env: KC_HTTP_METRICS_HISTOGRAMS_ENABLED Available only when metrics are enabled true , false (default) http-metrics-slos Service level objectives for HTTP server requests. Use this instead of the default histogram, or use it in combination to add additional buckets. Specify a list of comma-separated values defined in milliseconds. Example with buckets from 5ms to 10s: 5,10,25,50,250,500,1000,2500,5000,10000 CLI: --http-metrics-slos Env: KC_HTTP_METRICS_SLOS Available only when metrics are enabled http-pool-max-threads The maximum number of threads. If this is not specified then it will be automatically sized to the greater of 4 * the number of available processors and 50. For example if there are 4 processors the max threads will be 50. If there are 48 processors it will be 192. CLI: --http-pool-max-threads Env: KC_HTTP_POOL_MAX_THREADS http-port The used HTTP port. CLI: --http-port Env: KC_HTTP_PORT 8080 (default) http-relative-path 🛠 Set the path relative to / for serving resources. The path must start with a / . CLI: --http-relative-path Env: KC_HTTP_RELATIVE_PATH / (default) https-certificate-file The file path to a server certificate or certificate chain in PEM format. CLI: --https-certificate-file Env: KC_HTTPS_CERTIFICATE_FILE https-certificate-key-file The file path to a private key in PEM format. CLI: --https-certificate-key-file Env: KC_HTTPS_CERTIFICATE_KEY_FILE https-certificates-reload-period Interval on which to reload key store, trust store, and certificate files referenced by https-* options. 
May be a java.time.Duration value, an integer number of seconds, or an integer followed by one of [ms, h, m, s, d]. Must be greater than 30 seconds. Use -1 to disable. CLI: --https-certificates-reload-period Env: KC_HTTPS_CERTIFICATES_RELOAD_PERIOD 1h (default) https-cipher-suites The cipher suites to use. If none is given, a reasonable default is selected. CLI: --https-cipher-suites Env: KC_HTTPS_CIPHER_SUITES https-client-auth 🛠 Configures the server to require/request client authentication. CLI: --https-client-auth Env: KC_HTTPS_CLIENT_AUTH none (default), request , required https-key-store-file The key store which holds the certificate information instead of specifying separate files. CLI: --https-key-store-file Env: KC_HTTPS_KEY_STORE_FILE https-key-store-password The password of the key store file. CLI: --https-key-store-password Env: KC_HTTPS_KEY_STORE_PASSWORD password (default) https-key-store-type The type of the key store file. If not given, the type is automatically detected based on the file extension. If fips-mode is set to strict and no value is set, it defaults to BCFKS . CLI: --https-key-store-type Env: KC_HTTPS_KEY_STORE_TYPE https-port The used HTTPS port. CLI: --https-port Env: KC_HTTPS_PORT 8443 (default) https-protocols The list of protocols to explicitly enable. CLI: --https-protocols Env: KC_HTTPS_PROTOCOLS [TLSv1.3,TLSv1.2] (default) https-trust-store-file The trust store which holds the certificate information of the certificates to trust. CLI: --https-trust-store-file Env: KC_HTTPS_TRUST_STORE_FILE https-trust-store-password The password of the trust store file. CLI: --https-trust-store-password Env: KC_HTTPS_TRUST_STORE_PASSWORD https-trust-store-type The type of the trust store file. If not given, the type is automatically detected based on the file extension. If fips-mode is set to strict and no value is set, it defaults to BCFKS . CLI: --https-trust-store-type Env: KC_HTTPS_TRUST_STORE_TYPE 24.8. Health Value health-enabled 🛠 If the server should expose health check endpoints. If enabled, health checks are available at the /health , /health/ready and /health/live endpoints. CLI: --health-enabled Env: KC_HEALTH_ENABLED true , false (default) 24.9. Management Value http-management-port Port of the management interface. Relevant only when something is exposed on the management interface - see the guide for details. CLI: --http-management-port Env: KC_HTTP_MANAGEMENT_PORT 9000 (default) http-management-relative-path 🛠 Set the path relative to / for serving resources from management interface. The path must start with a / . If not given, the value is inherited from HTTP options. Relevant only when something is exposed on the management interface - see the guide for details. CLI: --http-management-relative-path Env: KC_HTTP_MANAGEMENT_RELATIVE_PATH / (default) https-management-certificate-file The file path to a server certificate or certificate chain in PEM format for the management server. If not given, the value is inherited from HTTP options. Relevant only when something is exposed on the management interface - see the guide for details. CLI: --https-management-certificate-file Env: KC_HTTPS_MANAGEMENT_CERTIFICATE_FILE https-management-certificate-key-file The file path to a private key in PEM format for the management server. If not given, the value is inherited from HTTP options. Relevant only when something is exposed on the management interface - see the guide for details. 
CLI: --https-management-certificate-key-file Env: KC_HTTPS_MANAGEMENT_CERTIFICATE_KEY_FILE https-management-client-auth 🛠 Configures the management interface to require/request client authentication. If not given, the value is inherited from HTTP options. Relevant only when something is exposed on the management interface - see the guide for details. CLI: --https-management-client-auth Env: KC_HTTPS_MANAGEMENT_CLIENT_AUTH none (default), request , required https-management-key-store-file The key store which holds the certificate information instead of specifying separate files for the management server. If not given, the value is inherited from HTTP options. Relevant only when something is exposed on the management interface - see the guide for details. CLI: --https-management-key-store-file Env: KC_HTTPS_MANAGEMENT_KEY_STORE_FILE https-management-key-store-password The password of the key store file for the management server. If not given, the value is inherited from HTTP options. Relevant only when something is exposed on the management interface - see the guide for details. CLI: --https-management-key-store-password Env: KC_HTTPS_MANAGEMENT_KEY_STORE_PASSWORD password (default) legacy-observability-interface 🛠 If metrics/health endpoints should be exposed on the main HTTP server (not recommended). If set to true, the management interface is disabled. CLI: --legacy-observability-interface Env: KC_LEGACY_OBSERVABILITY_INTERFACE DEPRECATED. true , false (default) 24.10. Metrics Value metrics-enabled 🛠 If the server should expose metrics. If enabled, metrics are available at the /metrics endpoint. CLI: --metrics-enabled Env: KC_METRICS_ENABLED true , false (default) 24.11. Proxy Value proxy-headers The proxy headers that should be accepted by the server. Misconfiguration might leave the server exposed to security vulnerabilities. Takes precedence over the deprecated proxy option. CLI: --proxy-headers Env: KC_PROXY_HEADERS forwarded , xforwarded proxy-protocol-enabled Whether the server should use the HA PROXY protocol when serving requests from behind a proxy. When set to true, the remote address returned will be the one from the actual connecting client. CLI: --proxy-protocol-enabled Env: KC_PROXY_PROTOCOL_ENABLED true , false (default) proxy-trusted-addresses A comma separated list of trusted proxy addresses. If set, then proxy headers from other addresses will be ignored. By default all addresses are trusted. A trusted proxy address is specified as an IP address (IPv4 or IPv6) or Classless Inter-Domain Routing (CIDR) notation. Available only when proxy-headers is set. CLI: --proxy-trusted-addresses Env: KC_PROXY_TRUSTED_ADDRESSES 24.12. Vault Value vault 🛠 Enables a vault provider. CLI: --vault Env: KC_VAULT file , keystore vault-dir If set, secrets can be obtained by reading the content of files within the given directory. CLI: --vault-dir Env: KC_VAULT_DIR vault-file Path to the keystore file. CLI: --vault-file Env: KC_VAULT_FILE vault-pass Password for the vault keystore. CLI: --vault-pass Env: KC_VAULT_PASS vault-type Specifies the type of the keystore file. CLI: --vault-type Env: KC_VAULT_TYPE PKCS12 (default) 24.13. Logging Value log Enable one or more log handlers in a comma-separated list. CLI: --log Env: KC_LOG console , file , syslog log-console-color Enable or disable colors when logging to console. 
CLI: --log-console-color Env: KC_LOG_CONSOLE_COLOR Available only when Console log handler is activated true , false (default) log-console-format The format of unstructured console log entries. If the format has spaces in it, escape the value using "<format>". CLI: --log-console-format Env: KC_LOG_CONSOLE_FORMAT Available only when Console log handler is activated %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n (default) log-console-include-trace Include tracing information in the console log. If the log-console-format option is specified, this option has no effect. CLI: --log-console-include-trace Env: KC_LOG_CONSOLE_INCLUDE_TRACE Available only when Console log handler and Tracing is activated true (default), false log-console-level Set the log level for the console handler. It specifies the most verbose log level for logs shown in the output. It respects levels specified in the log-level option, which represents the maximal verbosity for the whole logging system. For more information, check the Logging guide. CLI: --log-console-level Env: KC_LOG_CONSOLE_LEVEL Available only when Console log handler is activated off , fatal , error , warn , info , debug , trace , all (default) log-console-output Set the log output to JSON or default (plain) unstructured logging. CLI: --log-console-output Env: KC_LOG_CONSOLE_OUTPUT Available only when Console log handler is activated default (default), json log-file Set the log file path and filename. CLI: --log-file Env: KC_LOG_FILE Available only when File log handler is activated data/log/keycloak.log (default) log-file-format Set a format specific to file log entries. CLI: --log-file-format Env: KC_LOG_FILE_FORMAT Available only when File log handler is activated %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n (default) log-file-include-trace Include tracing information in the file log. If the log-file-format option is specified, this option has no effect. CLI: --log-file-include-trace Env: KC_LOG_FILE_INCLUDE_TRACE Available only when File log handler and Tracing is activated true (default), false log-file-level Set the log level for the file handler. It specifies the most verbose log level for logs shown in the output. It respects levels specified in the log-level option, which represents the maximal verbosity for the whole logging system. For more information, check the Logging guide. CLI: --log-file-level Env: KC_LOG_FILE_LEVEL Available only when File log handler is activated off , fatal , error , warn , info , debug , trace , all (default) log-file-output Set the log output to JSON or default (plain) unstructured logging. CLI: --log-file-output Env: KC_LOG_FILE_OUTPUT Available only when File log handler is activated default (default), json log-level The log level of the root category or a comma-separated list of individual categories and their levels. For the root category, you don't need to specify a category. CLI: --log-level Env: KC_LOG_LEVEL [info] (default) log-syslog-app-name Set the app name used when formatting the message in RFC5424 format. CLI: --log-syslog-app-name Env: KC_LOG_SYSLOG_APP_NAME Available only when Syslog is activated keycloak (default) log-syslog-endpoint Set the IP address and port of the Syslog server. CLI: --log-syslog-endpoint Env: KC_LOG_SYSLOG_ENDPOINT Available only when Syslog is activated localhost:514 (default) log-syslog-format Set a format specific to Syslog entries. 
CLI: --log-syslog-format Env: KC_LOG_SYSLOG_FORMAT Available only when Syslog is activated %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n (default) log-syslog-include-trace Include tracing information in the Syslog. If the log-syslog-format option is specified, this option has no effect. CLI: --log-syslog-include-trace Env: KC_LOG_SYSLOG_INCLUDE_TRACE Available only when Syslog handler and Tracing is activated true (default), false log-syslog-level Set the log level for the Syslog handler. It specifies the most verbose log level for logs shown in the output. It respects levels specified in the log-level option, which represents the maximal verbosity for the whole logging system. For more information, check the Logging guide. CLI: --log-syslog-level Env: KC_LOG_SYSLOG_LEVEL Available only when Syslog is activated off , fatal , error , warn , info , debug , trace , all (default) log-syslog-max-length Set the maximum length, in bytes, of the message allowed to be sent. The length includes the header and the message. If not set, the default value is 2048 when log-syslog-type is rfc5424 (default) and 1024 when log-syslog-type is rfc3164. CLI: --log-syslog-max-length Env: KC_LOG_SYSLOG_MAX_LENGTH Available only when Syslog is activated log-syslog-output Set the Syslog output to JSON or default (plain) unstructured logging. CLI: --log-syslog-output Env: KC_LOG_SYSLOG_OUTPUT Available only when Syslog is activated default (default), json log-syslog-protocol Set the protocol used to connect to the Syslog server. CLI: --log-syslog-protocol Env: KC_LOG_SYSLOG_PROTOCOL Available only when Syslog is activated tcp (default), udp , ssl-tcp log-syslog-type Set the Syslog type used to format the sent message. CLI: --log-syslog-type Env: KC_LOG_SYSLOG_TYPE Available only when Syslog is activated rfc5424 (default), rfc3164 24.14. Tracing (Preview) Value tracing-compression Preview: OpenTelemetry compression method used to compress payloads. If unset, compression is disabled. CLI: --tracing-compression Env: KC_TRACING_COMPRESSION Available only when 'opentelemetry' feature and Tracing is enabled gzip , none (default) tracing-enabled 🛠 Preview: Enables the OpenTelemetry tracing. CLI: --tracing-enabled Env: KC_TRACING_ENABLED Available only when 'opentelemetry' feature is enabled true , false (default) tracing-endpoint Preview: OpenTelemetry endpoint to connect to. CLI: --tracing-endpoint Env: KC_TRACING_ENDPOINT Available only when 'opentelemetry' feature and Tracing is enabled http://localhost:4317 (default) tracing-jdbc-enabled 🛠 Preview: Enables the OpenTelemetry JDBC tracing. CLI: --tracing-jdbc-enabled Env: KC_TRACING_JDBC_ENABLED Available only when 'opentelemetry' feature and Tracing is enabled true (default), false tracing-protocol Preview: OpenTelemetry protocol used for the telemetry data. CLI: --tracing-protocol Env: KC_TRACING_PROTOCOL Available only when 'opentelemetry' feature and Tracing is enabled grpc (default), http/protobuf tracing-resource-attributes Preview: OpenTelemetry resource attributes present in the exported trace to characterize the telemetry producer. Values in format key1=val1,key2=val2 . For more information, check the Tracing guide. CLI: --tracing-resource-attributes Env: KC_TRACING_RESOURCE_ATTRIBUTES Available only when 'opentelemetry' feature and Tracing is enabled tracing-sampler-ratio Preview: OpenTelemetry sampler ratio. Probability that a span will be sampled. Expected double value in interval <0,1). 
CLI: --tracing-sampler-ratio Env: KC_TRACING_SAMPLER_RATIO Available only when 'opentelemetry' feature and Tracing is enabled 1.0 (default) tracing-sampler-type 🛠 Preview: OpenTelemetry sampler to use for tracing. CLI: --tracing-sampler-type Env: KC_TRACING_SAMPLER_TYPE Available only when 'opentelemetry' feature and Tracing is enabled always_on , always_off , traceidratio (default), parentbased_always_on , parentbased_always_off , parentbased_traceidratio tracing-service-name Preview: OpenTelemetry service name. Takes precedence over service.name defined in the tracing-resource-attributes property. CLI: --tracing-service-name Env: KC_TRACING_SERVICE_NAME Available only when 'opentelemetry' feature and Tracing is enabled keycloak (default) 24.15. Truststore Value tls-hostname-verifier The TLS hostname verification policy for out-going HTTPS and SMTP requests. CLI: --tls-hostname-verifier Env: KC_TLS_HOSTNAME_VERIFIER STRICT and WILDCARD have been deprecated, use DEFAULT instead. Deprecated values: STRICT , WILDCARD ANY , WILDCARD , STRICT , DEFAULT (default) truststore-paths List of pkcs12 (p12 or pfx file extensions), PEM files, or directories containing those files that will be used as a system truststore. CLI: --truststore-paths Env: KC_TRUSTSTORE_PATHS 24.16. Security Value fips-mode 🛠 Sets the FIPS mode. If non-strict is set, FIPS is enabled but on non-approved mode. For full FIPS compliance, set strict to run on approved mode. This option defaults to disabled when fips feature is disabled, which is by default. This option defaults to non-strict when fips feature is enabled. CLI: --fips-mode Env: KC_FIPS_MODE non-strict , strict 24.17. Export Value dir Set the path to a directory where files will be created with the exported data. CLI: --dir Env: KC_DIR file Set the path to a file that will be created with the exported data. To export more than 500 users, export to a directory with different files instead. CLI: --file Env: KC_FILE realm Set the name of the realm to export. If not set, all realms are going to be exported. CLI: --realm Env: KC_REALM users Set how users should be exported. CLI: --users Env: KC_USERS skip , realm_file , same_file , different_files (default) users-per-file Set the number of users per file. It is used only if users is set to different_files . Increasing this number leads to exponentially increasing export times. CLI: --users-per-file Env: KC_USERS_PER_FILE 50 (default) 24.18. Import Value dir Set the path to a directory where files will be read from. CLI: --dir Env: KC_DIR file Set the path to a file that will be read. CLI: --file Env: KC_FILE override Set if existing data should be overwritten. If set to false, data will be ignored. CLI: --override Env: KC_OVERRIDE true (default), false 24.19. Bootstrap Admin Value bootstrap-admin-client-id Client id for the temporary bootstrap admin service account. Used only when the master realm is created. Available only when bootstrap admin client secret is set. CLI: --bootstrap-admin-client-id Env: KC_BOOTSTRAP_ADMIN_CLIENT_ID temp-admin (default) bootstrap-admin-client-secret Client secret for the temporary bootstrap admin service account. Used only when the master realm is created. Use a non-CLI configuration option for this option if possible. CLI: --bootstrap-admin-client-secret Env: KC_BOOTSTRAP_ADMIN_CLIENT_SECRET bootstrap-admin-password Temporary bootstrap admin password. Used only when the master realm is created. Use a non-CLI configuration option for this option if possible. 
CLI: --bootstrap-admin-password Env: KC_BOOTSTRAP_ADMIN_PASSWORD bootstrap-admin-username Temporary bootstrap admin username. Used only when the master realm is created. Available only when bootstrap admin password is set. CLI: --bootstrap-admin-username Env: KC_BOOTSTRAP_ADMIN_USERNAME temp-admin (default) | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_configuration_guide/all-config- |
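For illustration, the options listed above combine on a single start command or through their KC_* environment variables. The sketch below is not taken from this reference: the launcher path, hostnames, certificate paths, and credentials are placeholder assumptions, and only options documented above are used. Secrets are exported as environment variables rather than passed on the command line, in line with the note on bootstrap-admin-password.

# Hypothetical example; every value is a placeholder.
export KC_DB_PASSWORD='<database_password>'
export KC_BOOTSTRAP_ADMIN_PASSWORD='<temporary_admin_password>'
bin/kc.sh start \
  --db-url jdbc:postgresql://db.example.com/keycloak \
  --db-username keycloak \
  --hostname https://sso.example.com \
  --https-certificate-file /etc/pki/tls/certs/sso.crt \
  --https-certificate-key-file /etc/pki/tls/private/sso.key \
  --proxy-headers xforwarded \
  --log console,file \
  --health-enabled true \
  --metrics-enabled true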
function::cmdline_arg | function::cmdline_arg Name function::cmdline_arg - Fetch a command line argument Synopsis Arguments n Argument to get (zero is the program itself) Description Returns the requested argument from the current process, or the empty string when there are not that many arguments or there is a problem retrieving the argument. Argument zero is traditionally the command itself. | [
"cmdline_arg:string(n:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-cmdline-arg |
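As a usage sketch for the function above, a one-line script can print arguments of a target process; the process.begin probe point and the -c target command are illustrative assumptions rather than part of this reference.

# Illustrative only: print the command name and its first argument
# for the target process launched with -c.
stap -e 'probe process.begin { printf("cmd=%s arg1=%s\n", cmdline_arg(0), cmdline_arg(1)) }' -c 'ls -l /tmp'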
Chapter 6. Cluster Operators reference | Chapter 6. Cluster Operators reference This reference guide indexes the cluster Operators shipped by Red Hat that serve as the architectural foundation for OpenShift Container Platform. Cluster Operators are installed by default, unless otherwise noted, and are managed by the Cluster Version Operator (CVO). For more details on the control plane architecture, see Operators in OpenShift Container Platform . Cluster administrators can view cluster Operators in the OpenShift Container Platform web console from the Administration Cluster Settings page. Note Cluster Operators are not managed by Operator Lifecycle Manager (OLM) and OperatorHub. OLM and OperatorHub are part of the Operator Framework used in OpenShift Container Platform for installing and running optional add-on Operators . Some of the following cluster Operators can be disabled prior to installation. For more information see cluster capabilities . 6.1. Cluster Baremetal Operator Note The Cluster Baremetal Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster Baremetal Operator (CBO) deploys all the components necessary to take a bare-metal server to a fully functioning worker node ready to run OpenShift Container Platform compute nodes. The CBO ensures that the metal3 deployment, which consists of the Bare Metal Operator (BMO) and Ironic containers, runs on one of the control plane nodes within the OpenShift Container Platform cluster. The CBO also listens for OpenShift Container Platform updates to resources that it watches and takes appropriate action. Project cluster-baremetal-operator Additional resources Bare-metal capability 6.2. Cloud Credential Operator Purpose The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. Project openshift-cloud-credential-operator CRDs credentialsrequests.cloudcredential.openshift.io Scope: Namespaced CR: CredentialsRequest Validation: Yes Configuration objects No configuration required. Additional resources About the Cloud Credential Operator CredentialsRequest custom resource 6.3. Cluster Authentication Operator Purpose The Cluster Authentication Operator installs and maintains the Authentication custom resource in a cluster and can be viewed with: USD oc get clusteroperator authentication -o yaml Project cluster-authentication-operator 6.4. Cluster Autoscaler Operator Purpose The Cluster Autoscaler Operator manages deployments of the OpenShift Cluster Autoscaler using the cluster-api provider. Project cluster-autoscaler-operator CRDs ClusterAutoscaler : This is a singleton resource, which controls the configuration autoscaler instance for the cluster. The Operator only responds to the ClusterAutoscaler resource named default in the managed namespace, the value of the WATCH_NAMESPACE environment variable. 
MachineAutoscaler : This resource targets a node group and manages the annotations to enable and configure autoscaling for that group, the min and max size. Currently only MachineSet objects can be targeted. 6.5. Cloud Controller Manager Operator Purpose Note The status of this Operator is General Availability for Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Cloud(R), global Microsoft Azure, Microsoft Azure Stack Hub, Nutanix, Red Hat OpenStack Platform (RHOSP), and VMware vSphere. The Operator is available as a Technology Preview for IBM Power(R) Virtual Server. The Cloud Controller Manager Operator manages and updates the cloud controller managers deployed on top of OpenShift Container Platform. The Operator is based on the Kubebuilder framework and controller-runtime libraries. It is installed via the Cluster Version Operator (CVO). It contains the following components: Operator Cloud configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-cloud-controller-manager-operator 6.6. Cluster CAPI Operator Note This Operator is available as a Technology Preview for Amazon Web Services (AWS), Google Cloud Platform (GCP), and Red Hat OpenStack Platform (RHOSP), VMware vSphere clusters. Purpose The Cluster CAPI Operator maintains the lifecycle of Cluster API resources. This Operator is responsible for all administrative tasks related to deploying the Cluster API project within an OpenShift Container Platform cluster. Project cluster-capi-operator CRDs awsmachines.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: awsmachine gcpmachines.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: gcpmachine openstackmachines.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: openstackmachine vspheremachines.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: vspheremachine awsmachinetemplates.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: awsmachinetemplate gcpmachinetemplates.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: gcpmachinetemplate openstackmachinetemplates.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: openstackmachinetemplate vspheremachinetemplates.infrastructure.cluster.x-k8s.io Scope: Namespaced CR: vspheremachinetemplate 6.7. Cluster Config Operator Purpose The Cluster Config Operator performs the following tasks related to config.openshift.io : Creates CRDs. Renders the initial custom resources. Handles migrations. Project cluster-config-operator 6.8. Cluster CSI Snapshot Controller Operator Note The Cluster CSI Snapshot Controller Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster CSI Snapshot Controller Operator installs and maintains the CSI Snapshot Controller. The CSI Snapshot Controller is responsible for watching the VolumeSnapshot CRD objects and manages the creation and deletion lifecycle of volume snapshots. Project cluster-csi-snapshot-controller-operator Additional resources CSI snapshot controller capability 6.9. Cluster Image Registry Operator Purpose The Cluster Image Registry Operator manages a singleton instance of the OpenShift image registry. It manages all configuration of the registry, including creating storage. On initial start up, the Operator creates a default image-registry resource instance based on the configuration detected in the cluster. 
This indicates what cloud storage type to use based on the cloud provider. If insufficient information is available to define a complete image-registry resource, then an incomplete resource is defined and the Operator updates the resource status with information about what is missing. The Cluster Image Registry Operator runs in the openshift-image-registry namespace and it also manages the registry instance in that location. All configuration and workload resources for the registry reside in that namespace. Project cluster-image-registry-operator 6.10. Cluster Machine Approver Operator Purpose The Cluster Machine Approver Operator automatically approves the CSRs requested for a new worker node after cluster installation. Note For the control plane node, the approve-csr service on the bootstrap node automatically approves all CSRs during the cluster bootstrapping phase. Project cluster-machine-approver-operator 6.11. Cluster Monitoring Operator Purpose The Cluster Monitoring Operator (CMO) manages and updates the Prometheus-based cluster monitoring stack deployed on top of OpenShift Container Platform. Project openshift-monitoring CRDs alertmanagers.monitoring.coreos.com Scope: Namespaced CR: alertmanager Validation: Yes prometheuses.monitoring.coreos.com Scope: Namespaced CR: prometheus Validation: Yes prometheusrules.monitoring.coreos.com Scope: Namespaced CR: prometheusrule Validation: Yes servicemonitors.monitoring.coreos.com Scope: Namespaced CR: servicemonitor Validation: Yes Configuration objects USD oc -n openshift-monitoring edit cm cluster-monitoring-config 6.12. Cluster Network Operator Purpose The Cluster Network Operator installs and upgrades the networking components on an OpenShift Container Platform cluster. 6.13. Cluster Samples Operator Note The Cluster Samples Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster Samples Operator manages the sample image streams and templates stored in the openshift namespace. On initial start up, the Operator creates the default samples configuration resource to initiate the creation of the image streams and templates. The configuration object is a cluster scoped object with the key cluster and type configs.samples . The image streams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform image streams pointing to images on registry.redhat.io . Similarly, the templates are those categorized as OpenShift Container Platform templates. The Cluster Samples Operator deployment is contained within the openshift-cluster-samples-operator namespace. On start up, the install pull secret is used by the image stream import logic in the OpenShift image registry and API server to authenticate with registry.redhat.io . An administrator can create any additional secrets in the openshift namespace if they change the registry used for the sample image streams. If created, those secrets contain the content of a config.json for docker needed to facilitate image import. The image for the Cluster Samples Operator contains image stream and template definitions for the associated OpenShift Container Platform release. After the Cluster Samples Operator creates a sample, it adds an annotation that denotes the OpenShift Container Platform version that it is compatible with. 
The Operator uses this annotation to ensure that each sample matches the compatible release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator are allowed as long as the version annotation is not modified or deleted. However, on an upgrade, as the version annotation will change, those modifications can get replaced as the sample will be updated with the newer version. The Jenkins images are part of the image payload from the installation and are tagged into the image streams directly. The samples resource includes a finalizer, which cleans up the following upon its deletion: Operator-managed image streams Operator-managed templates Operator-generated configuration resources Cluster status resources Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration. Project cluster-samples-operator Additional resources OpenShift samples capability 6.14. Cluster Storage Operator Note The Cluster Storage Operator is an optional cluster capability that can be disabled by cluster administrators during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storageclass exists for OpenShift Container Platform clusters. It also installs Container Storage Interface (CSI) drivers which enable your cluster to use various storage backends. Project cluster-storage-operator Configuration No configuration is required. Notes The storage class that the Operator creates can be made non-default by editing its annotation, but this storage class cannot be deleted as long as the Operator runs. Additional resources Storage capability 6.15. Cluster Version Operator Purpose Cluster Operators manage specific areas of cluster functionality. The Cluster Version Operator (CVO) manages the lifecycle of cluster Operators, many of which are installed in OpenShift Container Platform by default. The CVO also checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph by collecting the status of both the cluster version and its cluster Operators. This status includes the condition type, which informs you of the health and current state of the OpenShift Container Platform cluster. For more information regarding cluster version condition types, see "Understanding cluster version condition types". Project cluster-version-operator Additional resources Understanding cluster version condition types 6.16. Console Operator Note The Console Operator is an optional cluster capability that can be disabled by cluster administrators during installation. If you disable the Console Operator at installation, your cluster is still supported and upgradable. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster. The Console Operator is installed by default and automatically maintains a console. Project console-operator Additional resources Web console capability 6.17. Control Plane Machine Set Operator Note This Operator is available for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Nutanix, and VMware vSphere. 
Purpose The Control Plane Machine Set Operator automates the management of control plane machine resources within an OpenShift Container Platform cluster. Project cluster-control-plane-machine-set-operator CRDs controlplanemachineset.machine.openshift.io Scope: Namespaced CR: ControlPlaneMachineSet Validation: Yes Additional resources About control plane machine sets ControlPlaneMachineSet custom resource 6.18. DNS Operator Purpose The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods that enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. The Operator creates a working default deployment based on the cluster's configuration. The default cluster domain is cluster.local . Configuration of the CoreDNS Corefile or Kubernetes plugin is not yet supported. The DNS Operator manages CoreDNS as a Kubernetes daemon set exposed as a service with a static IP. CoreDNS runs on all nodes in the cluster. Project cluster-dns-operator 6.19. etcd cluster Operator Purpose The etcd cluster Operator automates etcd cluster scaling, enables etcd monitoring and metrics, and simplifies disaster recovery procedures. Project cluster-etcd-operator CRDs etcds.operator.openshift.io Scope: Cluster CR: etcd Validation: Yes Configuration objects USD oc edit etcd cluster 6.20. Ingress Operator Purpose The Ingress Operator configures and manages the OpenShift Container Platform router. Project openshift-ingress-operator CRDs clusteringresses.ingress.openshift.io Scope: Namespaced CR: clusteringresses Validation: No Configuration objects Cluster config Type Name: clusteringresses.ingress.openshift.io Instance Name: default View Command: USD oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml Notes The Ingress Operator sets up the router in the openshift-ingress project and creates the deployment for the router: USD oc get deployment -n openshift-ingress The Ingress Operator uses the clusterNetwork[].cidr from the network/cluster status to determine what mode (IPv4, IPv6, or dual stack) the managed Ingress Controller (router) should operate in. For example, if clusterNetwork contains only a v6 cidr , then the Ingress Controller operates in IPv6-only mode. In the following example, Ingress Controllers managed by the Ingress Operator will run in IPv4-only mode because only one cluster network exists and the network is an IPv4 cidr : USD oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}' Example output map[cidr:10.128.0.0/14 hostPrefix:23] 6.21. Insights Operator Note The Insights Operator is an optional cluster capability that cluster administrators can disable during installation. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through Insights Advisor on console.redhat.com . Project insights-operator Configuration No configuration is required. Notes Insights Operator complements OpenShift Container Platform Telemetry. Additional resources Insights capability See About remote health monitoring for details about Insights Operator and Telemetry. 6.22. 
Kubernetes API Server Operator Purpose The Kubernetes API Server Operator manages and updates the Kubernetes API server deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift Container Platform library-go framework and it is installed using the Cluster Version Operator (CVO). Project openshift-kube-apiserver-operator CRDs kubeapiservers.operator.openshift.io Scope: Cluster CR: kubeapiserver Validation: Yes Configuration objects USD oc edit kubeapiserver 6.23. Kubernetes Controller Manager Operator Purpose The Kubernetes Controller Manager Operator manages and updates the Kubernetes Controller Manager deployed on top of OpenShift Container Platform. The Operator is based on OpenShift Container Platform library-go framework and it is installed via the Cluster Version Operator (CVO). It contains the following components: Operator Bootstrap manifest renderer Installer based on static pods Configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-kube-controller-manager-operator 6.24. Kubernetes Scheduler Operator Purpose The Kubernetes Scheduler Operator manages and updates the Kubernetes Scheduler deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift Container Platform library-go framework and it is installed with the Cluster Version Operator (CVO). The Kubernetes Scheduler Operator contains the following components: Operator Bootstrap manifest renderer Installer based on static pods Configuration observer By default, the Operator exposes Prometheus metrics through the metrics service. Project cluster-kube-scheduler-operator Configuration The configuration for the Kubernetes Scheduler is the result of merging: a default configuration. an observed configuration from the spec schedulers.config.openshift.io . All of these are sparse configurations, invalidated JSON snippets which are merged to form a valid configuration at the end. 6.25. Kubernetes Storage Version Migrator Operator Purpose The Kubernetes Storage Version Migrator Operator detects changes of the default storage version, creates migration requests for resource types when the storage version changes, and processes migration requests. Project cluster-kube-storage-version-migrator-operator 6.26. Machine API Operator Purpose The Machine API Operator manages the lifecycle of specific purpose custom resource definitions (CRD), controllers, and RBAC objects that extend the Kubernetes API. This declares the desired state of machines in a cluster. Project machine-api-operator CRDs MachineSet Machine MachineHealthCheck 6.27. Machine Config Operator Purpose The Machine Config Operator manages and applies configuration and updates of the base operating system and container runtime, including everything between the kernel and kubelet. There are four components: machine-config-server : Provides Ignition configuration to new machines joining the cluster. machine-config-controller : Coordinates the upgrade of machines to the desired configurations defined by a MachineConfig object. Options are provided to control the upgrade for sets of machines individually. machine-config-daemon : Applies new machine configuration during update. Validates and verifies the state of the machine to the requested machine configuration. machine-config : Provides a complete source of machine configuration at installation, first start up, and updates for a machine. 
Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Project openshift-machine-config-operator 6.28. Marketplace Operator Note The Marketplace Operator is an optional cluster capability that can be disabled by cluster administrators if it is not needed. For more information about optional cluster capabilities, see "Cluster capabilities" in Installing . Purpose The Marketplace Operator simplifies the process for bringing off-cluster Operators to your cluster by using a set of default Operator Lifecycle Manager (OLM) catalogs on the cluster. When the Marketplace Operator is installed, it creates the openshift-marketplace namespace. OLM ensures catalog sources installed in the openshift-marketplace namespace are available for all namespaces on the cluster. Project operator-marketplace Additional resources Marketplace capability 6.29. Node Tuning Operator Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. Project cluster-node-tuning-operator Additional resources About low latency 6.30. 
OpenShift API Server Operator Purpose The OpenShift API Server Operator installs and maintains the openshift-apiserver on a cluster. Project openshift-apiserver-operator CRDs openshiftapiservers.operator.openshift.io Scope: Cluster CR: openshiftapiserver Validation: Yes 6.31. OpenShift Controller Manager Operator Purpose The OpenShift Controller Manager Operator installs and maintains the OpenShiftControllerManager custom resource in a cluster and can be viewed with: USD oc get clusteroperator openshift-controller-manager -o yaml The custom resource definition (CRD) openshiftcontrollermanagers.operator.openshift.io can be viewed in a cluster with: USD oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml Project cluster-openshift-controller-manager-operator 6.32. Operator Lifecycle Manager Operators Purpose Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Figure 6.1. Operator Lifecycle Manager workflow OLM runs by default in OpenShift Container Platform 4.17, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it. CRDs Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator. Each of these Operators is responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework: Table 6.1. CRDs managed by OLM and Catalog Operators Resource Short name Owner Description ClusterServiceVersion (CSV) csv OLM Application metadata: name, version, icon, required resources, installation, and so on. InstallPlan ip Catalog Calculated list of resources to be created to automatically install or upgrade a CSV. CatalogSource catsrc Catalog A repository of CSVs, CRDs, and packages that define an application. Subscription sub Catalog Used to keep CSVs up to date by tracking a channel in a package. OperatorGroup og OLM Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. Each of these Operators is also responsible for creating the following resources: Table 6.2. Resources created by OLM and Catalog Operators Resource Owner Deployments OLM ServiceAccounts (Cluster)Roles (Cluster)RoleBindings CustomResourceDefinitions (CRDs) Catalog ClusterServiceVersions OLM Operator The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster. The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. 
This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application. The OLM Operator uses the following workflow: Watch for cluster service versions (CSVs) in a namespace and check that requirements are met. If requirements are met, run the install strategy for the CSV. Note A CSV must be an active member of an Operator group for the install strategy to run. Catalog Operator The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions. To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user. The Catalog Operator uses the following workflow: Connect to each catalog source in the cluster. Watch for unresolved install plans created by a user, and if found: Find the CSV matching the name requested and add the CSV as a resolved resource. For each managed or required CRD, add the CRD as a resolved resource. For each required CRD, find the CSV that manages it. Watch for resolved install plans and create all of the discovered resources for it, if approved by a user or automatically. Watch for catalog sources and subscriptions and create install plans based on them. Catalog Registry The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels. A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version. Additional resources For more information, see the sections on understanding Operator Lifecycle Manager (OLM) . 6.33. OpenShift Service CA Operator Purpose The OpenShift Service CA Operator mints and manages serving certificates for Kubernetes services. Project openshift-service-ca-operator 6.34. vSphere Problem Detector Operator Purpose The vSphere Problem Detector Operator checks clusters that are deployed on vSphere for common installation and misconfiguration issues that are related to storage. Note The vSphere Problem Detector Operator is only started by the Cluster Storage Operator when the Cluster Storage Operator detects that the cluster is deployed on vSphere. Configuration No configuration is required. Notes The Operator supports OpenShift Container Platform installations on vSphere. The Operator uses the vsphere-cloud-credentials to communicate with vSphere. The Operator performs checks that are related to storage. Additional resources For more details, see Using the vSphere Problem Detector Operator . | [
"oc get clusteroperator authentication -o yaml",
"oc -n openshift-monitoring edit cm cluster-monitoring-config",
"oc edit etcd cluster",
"oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml",
"oc get deployment -n openshift-ingress",
"oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'",
"map[cidr:10.128.0.0/14 hostPrefix:23]",
"oc edit kubeapiserver",
"oc get clusteroperator openshift-controller-manager -o yaml",
"oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operators/cluster-operators-ref |
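The cluster Operators described in this chapter can be inspected on a running cluster with standard oc queries. The commands below are a general sketch: the Operator names come from the sections above, and the output varies by cluster and version.

# List every cluster Operator with its Available/Progressing/Degraded status.
oc get clusteroperators

# Inspect a single cluster Operator in detail, for example the Ingress Operator.
oc describe clusteroperator ingress

# Show the Cluster Version Operator's view of the cluster version and update status.
oc get clusterversion -o yaml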
Appendix B. Provisioning FIPS-compliant hosts | Appendix B. Provisioning FIPS-compliant hosts Satellite supports provisioning hosts that comply with the National Institute of Standards and Technology's Security Requirements for Cryptographic Modules standard, reference number FIPS 140-2, referred to here as FIPS. To enable the provisioning of hosts that are FIPS-compliant, complete the following tasks: Change the provisioning password hashing algorithm for the operating system Create a host group and set a host group parameter to enable FIPS For more information, see Creating a Host Group in Managing hosts . The provisioned hosts have the FIPS-compliant settings applied. To confirm that these settings are enabled, complete the steps in Section B.3, "Verifying FIPS mode is enabled" . B.1. Changing the provisioning password hashing algorithm To provision FIPS-compliant hosts, you must first set the password hashing algorithm that you use in provisioning to SHA256. This configuration setting must be applied for each operating system you want to deploy as FIPS-compliant. Procedure Identify the Operating System IDs: Update each operating system's password hash value. Note that you cannot use a comma-separated list of values. B.2. Setting the FIPS-enabled parameter To provision a FIPS-compliant host, you must create a host group and set the host group parameter fips_enabled to true . If this is not set to true , or is absent, the FIPS-specific changes do not apply to the system. You can set this parameter when you provision a host or for a host group. To set this parameter when provisioning a host, append --parameters fips_enabled=true to the Hammer command. For more information, see the output of the command hammer hostgroup set-parameter --help . B.3. Verifying FIPS mode is enabled To verify these FIPS compliance changes have been successful, you must provision a host and check its configuration. Procedure Log in to the host as root or with an admin-level account. Enter the following command: A value of 1 confirms that FIPS mode is enabled. | [
"hammer os list",
"hammer os update --password-hash SHA256 --title \" My_Operating_System \"",
"hammer hostgroup set-parameter --hostgroup \" My_Host_Group \" --name fips_enabled --value \"true\"",
"cat /proc/sys/crypto/fips_enabled"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/provisioning_hosts/Provisioning_FIPS_Compliant_Hosts_provisioning |
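Taken together, the procedure above can be run roughly as follows. The operating system title, host group name, and host address are placeholders, the hammer invocations mirror the commands shown in this appendix, and the final ssh step simply wraps the documented verification command.

# 1. Switch the provisioning password hashing algorithm to SHA256.
hammer os list
hammer os update --password-hash SHA256 --title "My_Operating_System"

# 2. Ensure hosts provisioned through the host group receive fips_enabled=true.
hammer hostgroup set-parameter --hostgroup "My_Host_Group" --name fips_enabled --value "true"

# 3. After provisioning, confirm FIPS mode on the new host (expected output: 1).
ssh root@my-fips-host.example.com 'cat /proc/sys/crypto/fips_enabled'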
Installing on VMware vSphere | Installing on VMware vSphere OpenShift Container Platform 4.18 Installing OpenShift Container Platform on vSphere Red Hat OpenShift Documentation Team | [
"platform: vsphere: hosts: - role: bootstrap 1 networkDevice: ipAddrs: - 192.168.204.10/24 2 gateway: 192.168.204.1 3 nameservers: 4 - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.11/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.12/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: control-plane networkDevice: ipAddrs: - 192.168.204.13/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1 - role: compute networkDevice: ipAddrs: - 192.168.204.14/24 gateway: 192.168.204.1 nameservers: - 192.168.204.1",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<data_center>/host/<cluster>\" datacenter: <data_center> datastore: \"/<data_center>/datastore/<datastore>\" 7 networks: - <VM_Network_name> resourcePool: \"/<data_center>/host/<cluster>/Resources/<resourcePool>\" 8 folder: \"/<data_center_name>/vm/<folder_name>/<subfolder_name>\" tagIDs: 9 - <tag_id> 10 zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <data_center> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 11 fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1>",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <data_center_1_name> - <data_center_2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <data_center_1> computeCluster: \"/<data_center_1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<data_center_1>/datastore/<datastore1>\" resourcePool: \"/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<data_center_1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <data_center_2> computeCluster: \"/<data_center_2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<data_center_2>/datastore/<datastore2>\" resourcePool: \"/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<data_center_2>/vm/<folder2>\" ---",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"platform: vsphere: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 5 serviceNetwork: - 172.30.0.0/16 platform: vsphere: 6 apiVIPs: - 10.0.0.1 failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<data_center>/host/<cluster>\" datacenter: <data_center> datastore: \"/<data_center>/datastore/<datastore>\" 8 networks: - <VM_Network_name> resourcePool: \"/<data_center>/host/<cluster>/Resources/<resourcePool>\" 9 folder: \"/<data_center_name>/vm/<folder_name>/<subfolder_name>\" tagIDs: 10 - <tag_id> 11 zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <data_center> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 12 fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112",
"platform: vsphere: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1>",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <data_center_1_name> - <data_center_2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <data_center_1> computeCluster: \"/<data_center_1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<data_center_1>/datastore/<datastore1>\" resourcePool: \"/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<data_center_1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <data_center_2> computeCluster: \"/<data_center_2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<data_center_2>/datastore/<datastore2>\" resourcePool: \"/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<data_center_2>/vm/<folder2>\" ---",
"platform: vsphere: vcenters: failureDomains: - name: <failure_domain_name> region: <default_region_name> zone: <default_zone_name> server: <fully_qualified_domain_name> topology: datacenter: <data_center> computeCluster: \"/<data_center>/host/<cluster>\" networks: 1 - <VM_network1_name> - <VM_network2_name> - - <VM_network10_name>",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"platform: vsphere: nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> failureDomains: - name: <failure_domain_name> region: <default_region_name>",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"ERROR Bootstrap failed to complete: timed out waiting for the condition ERROR Failed to wait for bootstrapping to complete. This error usually happens when there is a problem with control plane hosts that prevents the control plane operators from creating the control plane.",
"apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6>",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"platform: vsphere: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cd ~/clusterconfigs",
"cd manifests",
"touch cluster-network-avoid-workers-99-config.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"",
"sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<data_center>/host/<cluster>\" datacenter: <data_center> datastore: \"/<data_center>/datastore/<datastore>\" 7 networks: - <VM_Network_name> resourcePool: \"/<data_center>/host/<cluster>/Resources/<resourcePool>\" 8 folder: \"/<data_center_name>/vm/<folder_name>/<subfolder_name>\" tagIDs: 9 - <tag_id> 10 zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <data_center> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 11 clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova 12 fips: false pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 13 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 14 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 15 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1>",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <data_center_1_name> - <data_center_2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <data_center_1> computeCluster: \"/<data_center_1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<data_center_1>/datastore/<datastore1>\" resourcePool: \"/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<data_center_1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <data_center_2> computeCluster: \"/<data_center_2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<data_center_2>/datastore/<datastore2>\" resourcePool: \"/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<data_center_2>/vm/<folder2>\" ---",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"platform: vsphere: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"mkdir <installation_directory>",
"additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<data_center>/host/<cluster>\" datacenter: <data_center> 8 datastore: \"/<data_center>/datastore/<datastore>\" 9 networks: - <VM_Network_name> resourcePool: \"/<data_center>/host/<cluster>/Resources/<resourcePool>\" 10 folder: \"/<data_center_name>/vm/<folder_name>/<subfolder_name>\" 11 zone: <default_zone_name> vcenters: - datacenters: - <data_center> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1>",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <data_center_1_name> - <data_center_2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <data_center_1> computeCluster: \"/<data_center_1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<data_center_1>/datastore/<datastore1>\" resourcePool: \"/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<data_center_1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <data_center_2> computeCluster: \"/<data_center_2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<data_center_2>/datastore/<datastore2>\" resourcePool: \"/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<data_center_2>/vm/<folder2>\" ---",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"mkdir <installation_directory>",
"additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<data_center>/host/<cluster>\" datacenter: <data_center> 8 datastore: \"/<data_center>/datastore/<datastore>\" 9 networks: - <VM_Network_name> resourcePool: \"/<data_center>/host/<cluster>/Resources/<resourcePool>\" 10 folder: \"/<data_center_name>/vm/<folder_name>/<subfolder_name>\" 11 zone: <default_zone_name> vcenters: - datacenters: - <data_center> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1>",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <data_center_1_name> - <data_center_2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <data_center_1> computeCluster: \"/<data_center_1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<data_center_1>/datastore/<datastore1>\" resourcePool: \"/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<data_center_1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <data_center_2> computeCluster: \"/<data_center_2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<data_center_2>/datastore/<datastore2>\" resourcePool: \"/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<data_center_2>/vm/<folder2>\" ---",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"ERROR Bootstrap failed to complete: timed out waiting for the condition ERROR Failed to wait for bootstrapping to complete. This error usually happens when there is a problem with control plane hosts that prevents the control plane operators from creating the control plane.",
"apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6>",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"mkdir <installation_directory>",
"additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<data_center>/host/<cluster>\" datacenter: <data_center> 8 datastore: \"/<data_center>/datastore/<datastore>\" 9 networks: - <VM_Network_name> resourcePool: \"/<data_center>/host/<cluster>/Resources/<resourcePool>\" 10 folder: \"/<data_center_name>/vm/<folder_name>/<subfolder_name>\" 11 zone: <default_zone_name> vcenters: - datacenters: - <data_center> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] diskType: thin 14 fips: false 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1>",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <data_center_1_name> - <data_center_2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <data_center_1> computeCluster: \"/<data_center_1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<data_center_1>/datastore/<datastore1>\" resourcePool: \"/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<data_center_1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <data_center_2> computeCluster: \"/<data_center_2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<data_center_2>/datastore/<datastore2>\" resourcePool: \"/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<data_center_2>/vm/<folder2>\" ---",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.18.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"oc scale deployment/vsphere-problem-detector-operator --replicas=0 -n openshift-cluster-storage-operator",
"oc -n openshift-cluster-storage-operator get pod -l name=vsphere-problem-detector-operator -w",
"NAME READY STATUS RESTARTS AGE vsphere-problem-detector-operator-77486bd645-9ntpb 1/1 Running 0 11s",
"oc get event -n openshift-cluster-storage-operator --sort-by={.metadata.creationTimestamp}",
"16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader",
"oc logs deployment/vsphere-problem-detector-operator -n openshift-cluster-storage-operator",
"I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed",
"oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID",
"/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"networking: ovnKubernetesConfig: ipv4: internalJoinSubnet:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"platform: vsphere:",
"platform: vsphere: apiVIPs:",
"platform: vsphere: diskType:",
"platform: vsphere: failureDomains:",
"platform: vsphere: failureDomains: name:",
"platform: vsphere: failureDomains: region:",
"platform: vsphere: failureDomains: server:",
"platform: vsphere: failureDomains: zone:",
"platform: vsphere: failureDomains: topology: computeCluster:",
"platform: vsphere: failureDomains: topology: datacenter:",
"platform: vsphere: failureDomains: topology: datastore:",
"platform: vsphere: failureDomains: topology: folder:",
"platform: vsphere: failureDomains: topology: networks:",
"platform: vsphere: failureDomains: topology: resourcePool:",
"platform: vsphere: failureDomains: topology template:",
"platform: vsphere: ingressVIPs:",
"platform: vsphere: vcenters:",
"platform: vsphere: vcenters: datacenters:",
"platform: vsphere: vcenters: password:",
"platform: vsphere: vcenters: port:",
"platform: vsphere: vcenters: server:",
"platform: vsphere: vcenters: user:",
"platform: vsphere: apiVIP:",
"platform: vsphere: cluster:",
"platform: vsphere: datacenter:",
"platform: vsphere: defaultDatastore:",
"platform: vsphere: folder:",
"platform: vsphere: ingressVIP:",
"platform: vsphere: network:",
"platform: vsphere: password:",
"platform: vsphere: resourcePool:",
"platform: vsphere: username:",
"platform: vsphere: vCenter:",
"platform: vsphere: clusterOSImage:",
"platform: vsphere: osDisk: diskSizeGB:",
"platform: vsphere: cpus:",
"platform: vsphere: coresPerSocket:",
"platform: vsphere: memoryMB:",
"oc edit infrastructures.config.openshift.io cluster",
"spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: vSphere vsphere: vcenters: - datacenters: - <region_a_data_center> - <region_b_data_center> port: 443 server: <your_vcenter_server> failureDomains: - name: <failure_domain_1> region: <region_a> zone: <zone_a> server: <your_vcenter_server> topology: datacenter: <region_a_dc> computeCluster: \"</region_a_dc/host/zone_a_cluster>\" resourcePool: \"</region_a_dc/host/zone_a_cluster/Resources/resource_pool>\" datastore: \"</region_a_dc/datastore/datastore_a>\" networks: - port-group - name: <failure_domain_2> region: <region_a> zone: <zone_b> server: <your_vcenter_server> topology: computeCluster: </region_a_dc/host/zone_b_cluster> datacenter: <region_a_dc> datastore: </region_a_dc/datastore/datastore_a> networks: - port-group - name: <failure_domain_3> region: <region_b> zone: <zone_a> server: <your_vcenter_server> topology: computeCluster: </region_b_dc/host/zone_a_cluster> datacenter: <region_b_dc> datastore: </region_b_dc/datastore/datastore_b> networks: - port-group nodeNetworking: external: {} internal: {}",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/installing_on_vmware_vsphere/index |
Chapter 2. Restoring from a backup | Chapter 2. Restoring from a backup You can restore Red Hat Advanced Cluster Security for Kubernetes from an existing backup by using the roxctl command-line interface (CLI). Depending upon your requirements and the data you have backed up, you can restore from the following types of backups: Restore Central database from the Central database backup : Use this to recover from a database failure or data corruption event. It allows you to restore and recover the Central database to its earlier functional state. Restore Central from the Central deployment backup : Use this if you are migrating Central to another cluster or namespace. This option restores the configurations of your Central installation. 2.1. Restoring Central database by using the roxctl CLI You can use the roxctl CLI to restore Red Hat Advanced Cluster Security for Kubernetes by using the restore command. You require an API token or your administrator password to run this command. 2.1.1. Restoring by using an API token You can restore the entire database of RHACS by using an API token. Prerequisites You have a RHACS backup file. You have an API token with the administrator role. You have installed the roxctl CLI. Procedure Set the ROX_API_TOKEN and the ROX_ENDPOINT environment variables by running the following commands: USD export ROX_API_TOKEN=<api_token> USD export ROX_ENDPOINT=<address>:<port_number> Restore the Central database by running the following command: USD roxctl central db restore <backup_file> 1 1 For <backup_file> , specify the name of the backup file that you want to restore. 2.1.2. Restoring by using the administrator password You can restore the entire database of RHACS by using your administrator password. Prerequisites You have a RHACS backup file. You have the administrator password. You have installed the roxctl CLI. Procedure Set the ROX_ENDPOINT environment variable by running the following command: USD export ROX_ENDPOINT=<address>:<port_number> Restore the Central database by running the following command: USD roxctl -p <admin_password> \ 1 central db restore <backup_file> 2 1 For <admin_password> , specify the administrator password. 2 For <backup_file> , specify the name of the backup file that you want to restore. 2.1.3. Resuming the restore operation If your connection is interrupted during a restore operation or you need to go offline, you can resume the restore operation. If you do not have access to the machine running the resume operation, you can use the roxctl central db restore status command to check the status of an ongoing restore operation. If the connection is interrupted, the roxctl CLI automatically attempts to restore a task as soon as the connection is available again. The automatic connection retries depend on the duration specified by the timeout option. Use the --timeout option to specify the time in seconds, minutes or hours after which the roxctl CLI stops trying to resume a restore operation. If the option is not specified, the default timeout is 10 minutes. If a restore operation gets stuck or you want to cancel it, use the roxctl central db restore cancel command to cancel a running restore operation. If a restore operation is stuck, you have canceled it, or the time has expired, you can resume the restore by running the original command again. Important During interruptions, RHACS caches an ongoing restore operation for 24 hours. You can resume this operation by executing the original restore command again. 
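For example, a minimal command-line sketch of recovering an interrupted restore; the steps follow the description above, and the 3h timeout value is only an illustrative assumption:
$ roxctl central db restore status
$ roxctl central db restore cancel
$ roxctl central db restore <backup_file> --timeout 3h
Check the status first. Cancel the operation only if it is genuinely stuck, and then rerun the original restore command, optionally widening the client-side retry window with the --timeout option.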
The --timeout option only controls the client-side connection retries and has no effect on the server-side restore cache of 24 hours. You cannot resume restores across Central pod restarts. If a restore operation is interrupted, you must restart it within 24 hours and before restarting Central, otherwise RHACS cancels the restore operation. 2.2. Restoring Central deployment using the roxctl CLI You can restore your Central deployment to its original configuration by using the backups you made. You must first restore certificates by using the roxctl CLI, and then restore the Central deployment by running the Central installation scripts. 2.2.1. Restore certificates using the roxctl CLI Use the roxctl CLI to generate Kubernetes manifests to install the RHACS Central component to your cluster. Doing this allows you to ensure that authentication certificates for Secured clusters and the API tokens remain valid for the restored version. If you backed up another instance of RHACS Central, you can use the certificate files from that backup. Note With the roxctl CLI, you can not restore the entire Central deployment. Instead, first you use the roxctl CLI to generate new manifests using the certificates in your central data backup. Afterwards, you use those manifests to install Central. Prerequisites You must have the Red Hat Advanced Cluster Security for Kubernetes backup file. You must have installed the roxctl CLI. Procedure Run the interactive install command: USD roxctl central generate interactive For the following prompt, enter the path of the Red Hat Advanced Cluster Security for Kubernetes backup file: Enter path to the backup bundle from which to restore keys and certificates (optional): _<backup-file-path>_ For other following prompts, press Enter to accept the default value or enter custom values as required. On completion, the interactive install command creates a folder named central-bundle , which has the necessary YAML manifests and scripts to deploy Central. 2.2.2. Running the Central installation scripts After you run the interactive installer, you can run the setup.sh script to install Central. Procedure Run the setup.sh script to configure image registry access: USD ./central-bundle/central/scripts/setup.sh Create the necessary resources: USD oc create -R -f central-bundle/central Check the deployment progress: USD oc get pod -n stackrox -w After Central is running, find the RHACS portal IP address and open it in your browser. Depending on the exposure method you selected when answering the prompts, use one of the following methods to get the IP address. Exposure method Command Address Example Route oc -n stackrox get route central The address under the HOST/PORT column in the output https://central-stackrox.example.route Node Port oc get node -owide && oc -n stackrox get svc central-loadbalancer IP or hostname of any node, on the port shown for the service https://198.51.100.0:31489 Load Balancer oc -n stackrox get svc central-loadbalancer EXTERNAL-IP or hostname shown for the service, on port 443 https://192.0.2.0 None central-bundle/central/scripts/port-forward.sh 8443 https://localhost:8443 https://localhost:8443 Note If you have selected autogenerated password during the interactive install, you can run the following command to see it for logging into Central: USD cat central-bundle/password 2.3. Restore Central deployment using the RHACS Operator You can restore your Central deployment to its original configuration by using the RHACS Operator. 
To successfully restore, you need the backup of your Central custom resource, central-tls , and the administrator password. Prerequisites You must have the central-tls backup file. You must have the Central custom resource backup file. You must have the administrator password backup file. Procedure Use the central-tls backup file to create resources: USD oc apply -f central-tls.json Use the central-htpasswd backup file to create secrets: USD oc apply -f central-htpasswd.json Use the central-cr.yaml file to create the Central deployment: USD oc apply -f central-cr.yaml 2.4. Restore Central deployment using Helm You can restore your Central deployment to its original configuration by using Helm. To successfully restore, you need the backup of your Central custom resource, the central-tls secret, and the administrator password. Prerequisites You must have the Helm values backup file. You must have a Red Hat Advanced Cluster Security for Kubernetes backup file. You must have installed the roxctl CLI. Procedure Generate values-private.yaml from the RHACS database backup file: USD roxctl central generate k8s pvc --backup-bundle _<path-to-backup-file>_ --output-format "helm-values" Run the helm install command and specify your backup files: USD helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f central-values-backup.yaml -f central-bundle/values-private.yaml 2.5. Restoring central to another cluster or namespace You can use the backups of the RHACS Central database and the deployment to restore Central to another cluster or namespace. The following list provides a high-level overview of installation steps: Depending upon your installation method, you must first restore Central deployment by following the instructions in the following topics: Important Make sure to use the backed-up Central certificates so that secured clusters and API tokens issued by the old Central instance remain valid. If you are deploying to another namespace, you must change the namespace in backed-up resources or commands. Restoring Central deployment using the roxctl CLI Restore Central deployment using the RHACS Operator Restore Central deployment using Helm Restore Central database by following the instruction in the Restoring Central database by using the roxctl CLI topic. If you have an external DNS entry pointing to your old RHACS Central instance, you must reconfigure it to point to the new RHACS Central instance that you create. | [
"export ROX_API_TOKEN=<api_token>",
"export ROX_ENDPOINT=<address>:<port_number>",
"roxctl central db restore <backup_file> 1",
"export ROX_ENDPOINT=<address>:<port_number>",
"roxctl -p <admin_password> \\ 1 central db restore <backup_file> 2",
"roxctl central generate interactive",
"Enter path to the backup bundle from which to restore keys and certificates (optional): _<backup-file-path>_",
"./central-bundle/central/scripts/setup.sh",
"oc create -R -f central-bundle/central",
"oc get pod -n stackrox -w",
"cat central-bundle/password",
"oc apply -f central-tls.json",
"oc apply -f central-htpasswd.json",
"oc apply -f central-cr.yaml",
"roxctl central generate k8s pvc --backup-bundle _<path-to-backup-file>_ --output-format \"helm-values\"",
"helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f central-values-backup.yaml -f central-bundle/values-private.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/backup_and_restore/restore-acs |
20.3.4.4. Configuring ssh-agent with GNOME | 20.3.4.4. Configuring ssh-agent with GNOME The ssh-agent utility can be used to save your passphrase so that you do not have to enter it each time you initiate an ssh or scp connection. If you are using GNOME, the openssh-askpass-gnome package contains the application used to prompt you for your passphrase when you log in to GNOME and save it until you log out of GNOME. You will not have to enter your password or passphrase for any ssh or scp connection made during that GNOME session. If you are not using GNOME, refer to Section 20.3.4.5, "Configuring ssh-agent " . To save your passphrase during your GNOME session, follow the following steps: You will need to have the package openssh-askpass-gnome installed; you can use the command rpm -q openssh-askpass-gnome to determine if it is installed or not. If it is not installed, install it from your Red Hat Enterprise Linux CD-ROM set, from a Red Hat FTP mirror site, or using Red Hat Network. Select Main Menu Button (on the Panel) => Preferences => More Preferences => Sessions , and click on the Startup Programs tab. Click Add and enter /usr/bin/ssh-add in the Startup Command text area. Set it a priority to a number higher than any existing commands to ensure that it is executed last. A good priority number for ssh-add is 70 or higher. The higher the priority number, the lower the priority. If you have other programs listed, this one should have the lowest priority. Click Close to exit the program. Log out and then log back into GNOME; in other words, restart X. After GNOME is started, a dialog box will appear prompting you for your passphrase(s). Enter the passphrase requested. If you have both DSA and RSA key pairs configured, you will be prompted for both. From this point on, you should not be prompted for a password by ssh , scp , or sftp . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/generating_key_pairs-configuring_ssh_agent_with_gnome |
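A brief sketch of the command-line checks implied above; it assumes the default key locations and only confirms state rather than changing it:
rpm -q openssh-askpass-gnome
ssh-add -l
The first command verifies that the required package is installed, and the second lists the keys currently held by the agent; if your DSA or RSA key appears after you log back into GNOME, the passphrase was cached successfully.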
Chapter 10. Advanced migration options | Chapter 10. Advanced migration options You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance. 10.1. Terminology Table 10.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 10.2. Migrating applications by using the command line You can migrate applications with the MTC API by using the command line interface (CLI) in order to automate the migration. 10.2.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. 
The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 10.2.2. Creating a registry route for direct image migration For direct image migration, you must create a route to the exposed OpenShift image registry on all remote clusters. Prerequisites The OpenShift image registry must be exposed to external traffic on all remote clusters. The OpenShift Container Platform 4 registry is exposed by default. Procedure To create a route to an OpenShift Container Platform 4 registry, run the following command: USD oc create route passthrough --service=image-registry -n openshift-image-registry 10.2.3. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.13, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 10.2.3.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 10.2.3.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 10.2.3.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 10.2.3.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. 
If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 10.2.3.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 10.2.3.2.1. NetworkPolicy configuration 10.2.3.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 10.2.3.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 10.2.3.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 10.2.3.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. 
For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 10.2.3.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 10.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 10.2.3.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration 10.2.4. Migrating an application by using the MTC API You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API. Procedure Create a MigCluster CR manifest for the host cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF Create a Secret object manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF 1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. 
You can obtain the token by running the following command: USD oc sa get-token migration-controller -n openshift-migration | base64 -w 0 Create a MigCluster CR manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF 1 Specify the Cluster CR of the remote cluster. 2 Optional: For direct image migration, specify the exposed registry route. 3 SSL verification is enabled if false . CA certificates are not required or checked if true . 4 Specify the Secret object of the remote cluster. 5 Specify the URL of the remote cluster. Verify that all clusters are in a Ready state: USD oc describe MigCluster <cluster> Create a Secret object manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF 1 Specify the key ID in base64 format. 2 Specify the secret key in base64 format. AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key: USD echo -n "<key>" | base64 -w 0 1 1 Specify the key ID or the secret key. Both keys must be base64-encoded. Create a MigStorage CR manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF 1 Specify the bucket name. 2 Specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 3 Specify the storage provider. 4 Optional: If you are copying data by using snapshots, specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 5 Optional: If you are copying data by using snapshots, specify the storage provider. Verify that the MigStorage CR is in a Ready state: USD oc describe migstorage <migstorage> Create a MigPlan CR manifest: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF 1 Direct image migration is enabled if false . 2 Direct volume migration is enabled if false . 3 Specify the name of the MigStorage CR instance. 4 Specify one or more source namespaces. By default, the destination namespace has the same name. 
5 Specify a destination namespace if it is different from the source namespace. 6 Specify the name of the source cluster MigCluster instance. Verify that the MigPlan instance is in a Ready state: USD oc describe migplan <migplan> -n openshift-migration Create a MigMigration CR manifest to start the migration defined in the MigPlan instance: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF 1 Specify the MigPlan CR name. 2 The pods on the source cluster are stopped before migration if true . 3 A stage migration, which copies most of the data without stopping the application, is performed if true . 4 A completed migration is rolled back if true . Verify the migration by watching the MigMigration CR progress: USD oc watch migmigration <migmigration> -n openshift-migration The output resembles the following: Example output Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration ... Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47 10.2.5. State migration You can perform repeatable, state-only migrations by using Migration Toolkit for Containers (MTC) to migrate persistent volume claims (PVCs) that constitute an application's state. 
You migrate specified PVCs by excluding other PVCs from the migration plan. You can map the PVCs to ensure that the source and the target PVCs are synchronized. Persistent volume (PV) data is copied to the target cluster. The PV references are not moved, and the application pods continue to run on the source cluster. State migration is specifically designed to be used in conjunction with external CD mechanisms, such as OpenShift Gitops. You can migrate application manifests using GitOps while migrating the state using MTC. If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC. You can perform a state migration between clusters or within the same cluster. Important State migration migrates only the components that constitute an application's state. If you want to migrate an entire namespace, use stage or cutover migration. Prerequisites The state of the application on the source cluster is persisted in PersistentVolumes provisioned through PersistentVolumeClaims . The manifests of the application are available in a central repository that is accessible from both the source and the target clusters. Procedure Migrate persistent volume data from the source to the target cluster. You can perform this step as many times as needed. The source application continues running. Quiesce the source application. You can do this by setting the replicas of workload resources to 0 , either directly on the source cluster or by updating the manifests in GitHub and re-syncing the Argo CD application. Clone application manifests to the target cluster. You can use Argo CD to clone the application manifests to the target cluster. Migrate the remaining volume data from the source to the target cluster. Migrate any new data created by the application during the state migration process by performing a final data migration. If the cloned application is in a quiesced state, unquiesce it. Switch the DNS record to the target cluster to re-direct user traffic to the migrated application. Note MTC 1.6 cannot quiesce applications automatically when performing state migration. It can only migrate PV data. Therefore, you must use your CD mechanisms for quiescing or unquiescing applications. MTC 1.7 introduces explicit Stage and Cutover flows. You can use staging to perform initial data transfers as many times as needed. Then you can perform a cutover, in which the source applications are quiesced automatically. Additional resources See Excluding PVCs from migration to select PVCs for state migration. See Mapping PVCs to migrate source PV data to provisioned PVCs on the destination cluster. See Migrating Kubernetes objects to migrate the Kubernetes objects that constitute an application's state. 10.3. Migration hooks You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration. A migration hook runs on a source or a target cluster at one of the following migration steps: PreBackup : Before resources are backed up on the source cluster. PostBackup : After resources are backed up on the source cluster. PreRestore : Before resources are restored on the target cluster. PostRestore : After resources are restored on the target cluster. 
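For orientation, the following is a minimal, hedged sketch of how a hook is typically attached to one of these steps through the hooks block of a MigPlan CR. The field layout shown (executionNamespace, phase, reference, serviceAccount) and the names used are assumptions based on common MTC usage, not taken from this document, and should be checked against the MigPlan schema of your MTC version:
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  hooks:                            # assumed field layout; verify against your MTC version
  - executionNamespace: openshift-migration
    phase: PreBackup                # one of PreBackup, PostBackup, PreRestore, PostRestore
    reference:
      name: <hook>                  # a hook resource that wraps the playbook or custom image
      namespace: openshift-migration
    serviceAccount: migration-controller
The serviceAccount value above is also an assumption; in practice you would point it at a service account with sufficient permissions in the execution namespace. The hook referenced here is defined separately, as described next.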
You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container. Ansible playbook The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan custom resource. The job continues to run until it reaches the default limit of 6 retries or a successful completion. This continues even if the initial pod is evicted or killed. The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.8 . This image is based on the Ansible Runner image and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. Custom hook container You can use a custom hook container instead of the default Ansible image. 10.3.1. Writing an Ansible playbook for a migration hook You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks parameters in the MigPlan custom resource (CR) manifest. The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster. 10.3.1.1. Ansible modules You can use the Ansible shell module to run oc commands. Example shell module - hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces You can use kubernetes.core modules, such as k8s_info , to interact with Kubernetes resources. Example k8s_facts module - hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: "{{ lookup( 'env', 'HOSTNAME') }}" register: pods - name: Print pod name debug: msg: "{{ pods.resources[0].metadata.name }}" You can use the fail module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs and the success or failure status of a hook is based on the exit status of the job container. Example fail module - hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: "fail" fail: msg: "Cause a failure" when: do_fail 10.3.1.2. Environment variables The MigPlan CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup plugin. Example environment variables - hosts: localhost gather_facts: false tasks: - set_fact: namespaces: "{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}" - debug: msg: "{{ item }}" with_items: "{{ namespaces }}" - debug: msg: "{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}" 10.4. Migration plan options You can exclude, edit, and map components in the MigPlan custom resource (CR). 10.4.1. Excluding resources You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan to reduce the resource load for migration or to migrate images or PVs with a different tool. By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. 
These resources are parts of the service catalog API group and the OLM API group, neither of which is supported for migration at this time. Procedure Edit the MigrationController custom resource manifest: USD oc edit migrationcontroller <migration_controller> -n openshift-migration Update the spec section by adding parameters to exclude specific resources. For those resources that do not have their own exclusion parameters, add the additional_excluded_resources parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2 ... 1 Add disable_image_migration: true to exclude image streams from the migration. imagestreams is added to the excluded_resources list in main.yml when the MigrationController pod restarts. 2 Add disable_pv_migration: true to exclude PVs from the migration plan. persistentvolumes and persistentvolumeclaims are added to the excluded_resources list in main.yml when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan. 3 You can add OpenShift Container Platform resources that you want to exclude to the additional_excluded_resources list. Wait two minutes for the MigrationController pod to restart so that the changes are applied. Verify that the resource is excluded: USD oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1 The output contains the excluded resources: Example output name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims 10.4.2. Mapping namespaces If you map namespaces in the MigPlan custom resource (CR), you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges of the namespaces are copied during migration. Two source namespaces mapped to the same destination namespace spec: namespaces: - namespace_2 - namespace_1:namespace_2 If you want the source namespace to be mapped to a namespace of the same name, you do not need to create a mapping. By default, a source namespace and a target namespace have the same name. Incorrect namespace mapping spec: namespaces: - namespace_1:namespace_1 Correct namespace reference spec: namespaces: - namespace_1 10.4.3. Excluding persistent volume claims You select persistent volume claims (PVCs) for state migration by excluding the PVCs that you do not want to migrate. You exclude PVCs by setting the spec.persistentVolumes.pvc.selection.action parameter of the MigPlan custom resource (CR) after the persistent volumes (PVs) have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set it to skip : apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: ... selection: action: skip 10.4.4. 
Mapping persistent volume claims You can migrate persistent volume (PV) data from the source cluster to persistent volume claims (PVCs) that are already provisioned in the destination cluster in the MigPlan CR by mapping the PVCs. This mapping ensures that the destination PVCs of migrated applications are synchronized with the source PVCs. You map PVCs by updating the spec.persistentVolumes.pvc.name parameter in the MigPlan custom resource (CR) after the PVs have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1 1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration. 10.4.5. Editing persistent volume attributes After you create a MigPlan custom resource (CR), the MigrationController CR discovers the persistent volumes (PVs). The spec.persistentVolumes block and the status.destStorageClasses block are added to the MigPlan CR. You can edit the values in the spec.persistentVolumes.selection block. If you change values outside the spec.persistentVolumes.selection block, the values are overwritten when the MigPlan CR is reconciled by the MigrationController CR. Note The default value for the spec.persistentVolumes.selection.storageClass parameter is determined by the following logic: If the source cluster PV is Gluster or NFS, the default is either cephfs , for accessMode: ReadWriteMany , or cephrbd , for accessMode: ReadWriteOnce . If the PV is neither Gluster nor NFS or if cephfs or cephrbd are not available, the default is a storage class for the same provisioner. If a storage class for the same provisioner is not available, the default is the default storage class of the destination cluster. You can change the storageClass value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If the storageClass value is empty, the PV will have no storage class after migration. This option is appropriate if, for example, you want to move the PV to an NFS volume on the destination cluster. Prerequisites MigPlan CR is in a Ready state. Procedure Edit the spec.persistentVolumes.selection values in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs 1 Allowed values are move , copy , and skip . If only one action is supported, the default value is the supported action. If multiple actions are supported, the default value is copy . 2 Allowed values are snapshot and filesystem . Default value is filesystem . 3 The verify parameter is displayed if you select the verification option for file system copy in the MTC web console. You can set it to false . 4 You can change the default value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. 
If no value is specified, the PV will have no storage class after migration. 5 Allowed values are ReadWriteOnce and ReadWriteMany . If this value is not specified, the default is the access mode of the source cluster PVC. You can only edit the access mode in the MigPlan CR. You cannot edit it by using the MTC web console. 10.4.6. Converting storage classes in the MTC web console You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. To do so, you must create and run a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on the cluster on which MTC is running. You must add the cluster to the MTC web console. Procedure In the left-side navigation pane of the OpenShift Container Platform web console, click Projects . In the list of projects, click your project. The Project details page opens. Click the DeploymentConfig name. Note the name of its running pod. Open the YAML tab of the project. Find the PVs and note the names of their corresponding persistent volume claims (PVCs). In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must contain 3 to 63 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). From the Migration type menu, select Storage class conversion . From the Source cluster list, select the desired cluster for storage class conversion. Click . The Namespaces page opens. Select the required project. Click . The Persistent volumes page opens. The page displays the PVs in the project, all selected by default. For each PV, select the desired target storage class. Click . The wizard validates the new migration plan and shows that it is ready. Click Close . The new plan appears on the Migration plans page. To start the conversion, click the options menu of the new plan. Under Migrations , two options are displayed, Stage and Cutover . Note Cutover migration updates PVC references in the applications. Stage migration does not update PVC references in the applications. Select the desired option. Depending on which option you selected, the Stage migration or Cutover migration notification appears. Click Migrate . Depending on which option you selected, the Stage started or Cutover started message appears. To see the status of the current migration, click the number in the Migrations column. The Migrations page opens. To see more details on the current migration and monitor its progress, select the migration from the Type column. The Migration details page opens. When the migration progresses to the DirectVolume step and the status of the step becomes Running Rsync Pods to migrate Persistent Volume data , you can click View details and see the detailed status of the copies. In the breadcrumb bar, click Stage or Cutover and wait for all steps to complete. Open the PersistentVolumeClaims tab of the OpenShift Container Platform web console. You can see new PVCs with the names of the initial PVCs but ending in new , which are using the target storage class. In the left-side navigation pane, click Pods . See that the pod of your project is running again. Additional resources For details about the move and copy actions, see MTC workflow . For details about the skip action, see Excluding PVCs from migration . For details about the file system and snapshot copy methods, see About data copy methods . 10.4.7. 
Performing a state migration of Kubernetes objects by using the MTC API After you migrate all the PV data, you can use the Migration Toolkit for Containers (MTC) API to perform a one-time state migration of Kubernetes objects that constitute an application. You do this by configuring MigPlan custom resource (CR) fields to provide a list of Kubernetes resources with an additional label selector to further filter those resources, and then performing a migration by creating a MigMigration CR. The MigPlan resource is closed after the migration. Note Selecting Kubernetes resources is an API-only feature. You must update the MigPlan CR and create a MigMigration CR for it by using the CLI. The MTC web console does not support migrating Kubernetes objects. Note After migration, the closed parameter of the MigPlan CR is set to true . You cannot create another MigMigration CR for this MigPlan CR. You add Kubernetes objects to the MigPlan CR by using one of the following options: Adding the Kubernetes objects to the includedResources section. When the includedResources field is specified in the MigPlan CR, the plan takes a list of group-kind as input. Only resources present in the list are included in the migration. Adding the optional labelSelector parameter to filter the includedResources in the MigPlan . When this field is specified, only resources matching the label selector are included in the migration. For example, you can filter a list of Secret and ConfigMap resources by using the label app: frontend as a filter. Procedure Update the MigPlan CR to include Kubernetes resources and, optionally, to filter the included resources by adding the labelSelector parameter: To update the MigPlan CR to include Kubernetes resources: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" 1 Specify the Kubernetes object, for example, Secret or ConfigMap . Optional: To filter the included resources by adding the labelSelector parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" ... labelSelector: matchLabels: <label> 2 1 Specify the Kubernetes object, for example, Secret or ConfigMap . 2 Specify the label of the resources to migrate, for example, app: frontend . Create a MigMigration CR to migrate the selected Kubernetes resources. Verify that the correct MigPlan is referenced in migPlanRef : apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false 10.5. Migration controller options You can edit migration plan limits, enable persistent volume resizing, or enable cached Kubernetes clients in the MigrationController custom resource (CR) for large migrations and improved performance. 10.5.1. Increasing limits for large migrations You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC). Important You must test these changes before you perform a migration in a production environment. Procedure Edit the MigrationController custom resource (CR) manifest: USD oc edit migrationcontroller -n openshift-migration Update the following parameters: ... 
mig_controller_limits_cpu: "1" 1 mig_controller_limits_memory: "10Gi" 2 ... mig_controller_requests_cpu: "100m" 3 mig_controller_requests_memory: "350Mi" 4 ... mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7 ... 1 Specifies the number of CPUs available to the MigrationController CR. 2 Specifies the amount of memory available to the MigrationController CR. 3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3). 4 Specifies the amount of memory available for MigrationController CR requests. 5 Specifies the number of persistent volumes that can be migrated. 6 Specifies the number of pods that can be migrated. 7 Specifies the number of namespaces that can be migrated. Create a migration plan that uses the updated parameters to verify the changes. If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan. 10.5.2. Enabling persistent volume resizing for direct volume migration You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster. When the disk usage of a PV reaches a configured level, the MigrationController custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster. A pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3% . This means that PV resizing occurs when the disk usage of a PV is more than 97% . You can increase this threshold so that PV resizing occurs at a lower disk usage level. PVC capacity is calculated according to the following criteria: If the requested storage capacity ( spec.resources.requests.storage ) of the PVC is not equal to its actual provisioned capacity ( status.capacity.storage ), the greater value is used. If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used. Prerequisites The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands. Procedure Log in to the host cluster. Enable PV resizing by patching the MigrationController CR: USD oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ 1 --type='merge' -n openshift-migration 1 Set the value to false to disable PV resizing. Optional: Update the pv_resizing_threshold parameter to increase the threshold: USD oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ 1 --type='merge' -n openshift-migration 1 The default value is 3 . When the threshold is exceeded, the following status message is displayed in the MigPlan CR status: status: conditions: ... - category: Warn durable: true lastTransitionTime: "2021-06-17T08:57:01Z" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: "False" type: PvCapacityAdjustmentRequired Note For AWS gp2 storage, this message does not appear unless the pv_resizing_threshold is 42% or greater because of the way gp2 calculates volume usage and size. ( BZ#1973148 ) 10.5.3. 
Enabling cached Kubernetes clients You can enable cached Kubernetes clients in the MigrationController custom resource (CR) for improved performance during migration. The greatest performance benefit is displayed when migrating between clusters in different regions or with significant network latency. Note Delegated tasks, for example, Rsync backup for direct volume migration or Velero backup and restore, however, do not show improved performance with cached clients. Cached clients require extra memory because the MigrationController CR caches all API resources that are required for interacting with MigCluster CRs. Requests that are normally sent to the API server are directed to the cache instead. The cache watches the API server for updates. You can increase the memory limits and requests of the MigrationController CR if OOMKilled errors occur after you enable cached clients. Procedure Enable cached clients by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]' Optional: Increase the MigrationController CR memory limits by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]' Optional: Increase the MigrationController CR memory requests by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]' | [
"oc create route passthrough --service=image-registry -n openshift-image-registry",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF",
"oc sa get-token migration-controller -n openshift-migration | base64 -w 0",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF",
"oc describe MigCluster <cluster>",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF",
"echo -n \"<key>\" | base64 -w 0 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF",
"oc describe migstorage <migstorage>",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF",
"oc describe migplan <migplan> -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF",
"oc watch migmigration <migmigration> -n openshift-migration",
"Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47",
"- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces",
"- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"",
"- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail",
"- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"",
"oc edit migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2",
"oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1",
"name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims",
"spec: namespaces: - namespace_2 - namespace_1:namespace_2",
"spec: namespaces: - namespace_1:namespace_1",
"spec: namespaces: - namespace_1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false",
"oc edit migrationcontroller -n openshift-migration",
"mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migration_toolkit_for_containers/advanced-migration-options-mtc |
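As a quick, hedged follow-up to the MigrationController tuning covered in section 10.5 above, you can read the patched values back from the CR to confirm they were applied. This is a minimal sketch that assumes the default CR name migration-controller used throughout that section:
# read back the cached-client and memory settings patched in section 10.5 (sketch)
oc -n openshift-migration get migrationcontroller migration-controller \
  -o jsonpath='{.spec.mig_controller_enable_cache}{"\n"}{.spec.mig_controller_limits_memory}{"\n"}'
If the output does not show the expected values, re-run the corresponding oc patch command from the procedure above.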
40.4. Saving Data | 40.4. Saving Data Sometimes it is useful to save samples at a specific time. For example, when profiling an executable, it may be useful to gather different samples based on different input data sets. If the number of events to be monitored exceeds the number of counters available for the processor, multiple runs of OProfile can be used to collect data, saving the sample data to different files each time. To save the current set of sample files, execute the following command, replacing <name> with a unique descriptive name for the current session. The directory /var/lib/oprofile/samples/<name>/ is created and the current sample files are copied to it. | [
"opcontrol --save= <name>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/OProfile-Saving_Data |
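To make the multiple-run workflow described in the row above concrete, here is a hedged shell sketch. The ./my_app workload and the session names are placeholders, and the opcontrol options other than --save (--start, --dump, --reset) are standard OProfile controls rather than commands taken from this document:
# collect and save a separate OProfile session for each input data set (illustrative sketch)
opcontrol --start              # begin collecting samples
./my_app data-set-1            # placeholder workload
opcontrol --dump               # flush the current samples to disk
opcontrol --save=run1          # copies the samples to /var/lib/oprofile/samples/run1/
opcontrol --reset              # clear the current samples before the next run
./my_app data-set-2
opcontrol --dump
opcontrol --save=run2
Each saved session can then be analyzed independently with the usual OProfile reporting tools.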
Chapter 2. CSIDriver [storage.k8s.io/v1] | Chapter 2. CSIDriver [storage.k8s.io/v1] Description CSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster. Kubernetes attach detach controller uses this object to determine whether attach is required. Kubelet uses this object to determine whether pod information needs to be passed on mount. CSIDriver objects are non-namespaced. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata. metadata.Name indicates the name of the CSI driver that this object refers to; it MUST be the same name returned by the CSI GetPluginName() call for that driver. The driver name must be 63 characters or less, beginning and ending with an alphanumeric character ([a-z0-9A-Z]) with dashes (-), dots (.), and alphanumerics between. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CSIDriverSpec is the specification of a CSIDriver. 2.1.1. .spec Description CSIDriverSpec is the specification of a CSIDriver. Type object Property Type Description attachRequired boolean attachRequired indicates this CSI volume driver requires an attach operation (because it implements the CSI ControllerPublishVolume() method), and that the Kubernetes attach detach controller should call the attach volume interface which checks the volumeattachment status and waits until the volume is attached before proceeding to mounting. The CSI external-attacher coordinates with CSI volume driver and updates the volumeattachment status when the attach operation is complete. If the CSIDriverRegistry feature gate is enabled and the value is specified to false, the attach operation will be skipped. Otherwise the attach operation will be called. This field is immutable. fsGroupPolicy string fsGroupPolicy defines if the underlying volume supports changing ownership and permission of the volume before being mounted. Refer to the specific FSGroupPolicy values for additional details. This field is immutable. Defaults to ReadWriteOnceWithFSType, which will examine each volume to determine if Kubernetes should modify ownership and permissions of the volume. With the default policy the defined fsGroup will only be applied if a fstype is defined and the volume's access mode contains ReadWriteOnce. podInfoOnMount boolean podInfoOnMount indicates this CSI volume driver requires additional pod information (like podName, podUID, etc.) during mount operations, if set to true. If set to false, pod information will not be passed on mount. Default is false. The CSI driver specifies podInfoOnMount as part of driver deployment. If true, Kubelet will pass pod information as VolumeContext in the CSI NodePublishVolume() calls. The CSI driver is responsible for parsing and validating the information passed in as VolumeContext. 
The following VolumeContext will be passed if podInfoOnMount is set to true. This list might grow, but the prefix will be used. "csi.storage.k8s.io/pod.name": pod.Name "csi.storage.k8s.io/pod.namespace": pod.Namespace "csi.storage.k8s.io/pod.uid": string(pod.UID) "csi.storage.k8s.io/ephemeral": "true" if the volume is an ephemeral inline volume defined by a CSIVolumeSource, otherwise "false" "csi.storage.k8s.io/ephemeral" is a new feature in Kubernetes 1.16. It is only required for drivers which support both the "Persistent" and "Ephemeral" VolumeLifecycleMode. Other drivers can leave pod info disabled and/or ignore this field. As Kubernetes 1.15 doesn't support this field, drivers can only support one mode when deployed on such a cluster and the deployment determines which mode that is, for example via a command line parameter of the driver. This field is immutable. requiresRepublish boolean requiresRepublish indicates the CSI driver wants NodePublishVolume to be called periodically to reflect any possible change in the mounted volume. This field defaults to false. Note: After a successful initial NodePublishVolume call, subsequent calls to NodePublishVolume should only update the contents of the volume. New mount points will not be seen by a running container. seLinuxMount boolean seLinuxMount specifies if the CSI driver supports the "-o context" mount option. When "true", the CSI driver must ensure that all volumes provided by this CSI driver can be mounted separately with different -o context options. This is typical for storage backends that provide volumes as filesystems on block devices or as independent shared volumes. Kubernetes will call NodeStage / NodePublish with the "-o context=xyz" mount option when mounting a ReadWriteOncePod volume used in a Pod that has an explicitly set SELinux context. In the future, it may be expanded to other volume AccessModes. In any case, Kubernetes will ensure that the volume is mounted only with a single SELinux context. When "false", Kubernetes won't pass any special SELinux mount options to the driver. This is typical for volumes that represent subdirectories of a bigger shared filesystem. Default is "false". storageCapacity boolean storageCapacity indicates that the CSI volume driver wants pod scheduling to consider the storage capacity that the driver deployment will report by creating CSIStorageCapacity objects with capacity information, if set to true. The check can be enabled immediately when deploying a driver. In that case, provisioning new volumes with late binding will pause until the driver deployment has published some suitable CSIStorageCapacity object. Alternatively, the driver can be deployed with the field unset or false and it can be flipped later when storage capacity information has been published. This field was immutable in Kubernetes <= 1.22 and now is mutable. tokenRequests array tokenRequests indicates that the CSI driver needs the service account tokens of the pods it is mounting volumes for in order to do the necessary authentication. Kubelet will pass the tokens in VolumeContext in the CSI NodePublishVolume calls. The CSI driver should parse and validate the following VolumeContext: "csi.storage.k8s.io/serviceAccount.tokens": { "<audience>": { "token": <token>, "expirationTimestamp": <expiration timestamp in RFC3339>, }, ... } Note: Audience in each TokenRequest should be different and at most one audience may be the empty string. To receive a new token after expiry, RequiresRepublish can be used to trigger NodePublishVolume periodically.
tokenRequests[] object TokenRequest contains parameters of a service account token. volumeLifecycleModes array (string) volumeLifecycleModes defines what kind of volumes this CSI volume driver supports. The default if the list is empty is "Persistent", which is the usage defined by the CSI specification and implemented in Kubernetes via the usual PV/PVC mechanism. The other mode is "Ephemeral". In this mode, volumes are defined inline inside the pod spec with CSIVolumeSource and their lifecycle is tied to the lifecycle of that pod. A driver has to be aware of this because it is only going to get a NodePublishVolume call for such a volume. For more information about implementing this mode, see https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html A driver can support one or more of these modes and more modes may be added in the future. This field is beta. This field is immutable. 2.1.2. .spec.tokenRequests Description tokenRequests indicates the CSI driver needs pods' service account tokens it is mounting volume for to do necessary authentication. Kubelet will pass the tokens in VolumeContext in the CSI NodePublishVolume calls. The CSI driver should parse and validate the following VolumeContext: "csi.storage.k8s.io/serviceAccount.tokens": { "<audience>": { "token": <token>, "expirationTimestamp": <expiration timestamp in RFC3339>, }, ... } Note: Audience in each TokenRequest should be different and at most one token is empty string. To receive a new token after expiry, RequiresRepublish can be used to trigger NodePublishVolume periodically. Type array 2.1.3. .spec.tokenRequests[] Description TokenRequest contains parameters of a service account token. Type object Required audience Property Type Description audience string audience is the intended audience of the token in "TokenRequestSpec". It will default to the audiences of kube apiserver. expirationSeconds integer expirationSeconds is the duration of validity of the token in "TokenRequestSpec". It has the same default value of "ExpirationSeconds" in "TokenRequestSpec". 2.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csidrivers DELETE : delete collection of CSIDriver GET : list or watch objects of kind CSIDriver POST : create a CSIDriver /apis/storage.k8s.io/v1/watch/csidrivers GET : watch individual changes to a list of CSIDriver. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/csidrivers/{name} DELETE : delete a CSIDriver GET : read the specified CSIDriver PATCH : partially update the specified CSIDriver PUT : replace the specified CSIDriver /apis/storage.k8s.io/v1/watch/csidrivers/{name} GET : watch changes to an object of kind CSIDriver. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/storage.k8s.io/v1/csidrivers Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CSIDriver Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSIDriver Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Response body 200 - OK CSIDriverList schema 401 - Unauthorized Empty HTTP method POST Description create a CSIDriver Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body CSIDriver schema Table 2.9. HTTP responses HTTP code Response body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 202 - Accepted CSIDriver schema 401 - Unauthorized Empty 2.2.2. /apis/storage.k8s.io/v1/watch/csidrivers Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion.
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CSIDriver. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/storage.k8s.io/v1/csidrivers/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the CSIDriver Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CSIDriver Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both.
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Response body 200 - OK CSIDriver schema 202 - Accepted CSIDriver schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSIDriver Table 2.17. HTTP responses HTTP code Response body 200 - OK CSIDriver schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSIDriver Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Response body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSIDriver Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint .
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body CSIDriver schema Table 2.23. HTTP responses HTTP code Response body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 401 - Unauthorized Empty 2.2.4. /apis/storage.k8s.io/v1/watch/csidrivers/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the CSIDriver Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results.
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CSIDriver.
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage_apis/csidriver-storage-k8s-io-v1
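The query parameters described above can be exercised directly with the oc client. The following commands are an illustrative sketch rather than part of the reference; the <name> and <token> values are placeholders, and the chunk size of 2 is arbitrary.

# Paginated list: oc passes limit/continue automatically when --chunk-size is set
oc get csidrivers --chunk-size=2

# The same pagination against the raw endpoint; the second call reuses metadata.continue from the first response
oc get --raw '/apis/storage.k8s.io/v1/csidrivers?limit=2'
oc get --raw '/apis/storage.k8s.io/v1/csidrivers?limit=2&continue=<token>'

# Server-side dry run of a delete: admission and validation run, but nothing is persisted
oc delete csidriver <name> --dry-run=server

# Watch a single CSIDriver by name instead of using the deprecated /watch/ paths
oc get csidrivers --field-selector metadata.name=<name> --watch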
3.5.2. Reading Values From Arrays | 3.5.2. Reading Values From Arrays You can also read values from an array the same way you would read the value of a variable. To do so, include the array_name [ index_expression ] statement as an element in a mathematical expression. For example: Example 3.13. Using Array Values in Simple Computations This example assumes that the array foo was built using the construct in Example 3.12, "Associating Timestamps to Process Names" (from Section 3.5.1, "Assigning an Associated Value"). That construct sets a timestamp that serves as a reference point, to be used in computing delta. The construct in Example 3.13, "Using Array Values in Simple Computations" computes a value for the variable delta by subtracting the associated value of the key tid() from the current gettimeofday_s(). The construct does this by reading the value associated with the key tid() from the array. This particular construct is useful for determining the time between two events, such as the start and completion of a read operation. Note If the index_expression cannot find the unique key, it returns a value of 0 (for numerical operations, such as Example 3.13, "Using Array Values in Simple Computations") or a null/empty string value (for string operations) by default. | [
"delta = gettimeofday_s() - foo[tid()]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/arrayops-readvalues |
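A minimal SystemTap sketch of this pattern, assuming the syscall.read and syscall.read.return tapset probes are available on the system: the entry probe stores a per-thread timestamp in the array, and the return probe reads that value back to compute delta.

global foo

probe syscall.read {
  foo[tid()] = gettimeofday_s()
}

probe syscall.read.return {
  # Reading foo[tid()] retrieves the timestamp associated with this thread
  if (tid() in foo)
    printf("tid %d: read took %d seconds\n", tid(), gettimeofday_s() - foo[tid()])
  delete foo[tid()]  # drop the key so the array does not grow without bound
}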
Specialized hardware and driver enablement | Specialized hardware and driver enablement OpenShift Container Platform 4.15 Learn about hardware enablement on OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc adm release info quay.io/openshift-release-dev/ocp-release:4.15.z-x86_64 --image-for=driver-toolkit",
"oc adm release info quay.io/openshift-release-dev/ocp-release:4.15.z-aarch64 --image-for=driver-toolkit",
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b53883ca2bac5925857148c4a1abc300ced96c222498e3bc134fe7ce3a1dd404",
"podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA>",
"oc new-project simple-kmod-demo",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: simple-kmod-driver-container name: simple-kmod-driver-container namespace: simple-kmod-demo spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: simple-kmod-driver-build name: simple-kmod-driver-build namespace: simple-kmod-demo spec: nodeSelector: node-role.kubernetes.io/worker: \"\" runPolicy: \"Serial\" triggers: - type: \"ConfigChange\" - type: \"ImageChange\" source: dockerfile: | ARG DTK FROM USD{DTK} as builder ARG KVER WORKDIR /build/ RUN git clone https://github.com/openshift-psap/simple-kmod.git WORKDIR /build/simple-kmod RUN make all install KVER=USD{KVER} FROM registry.redhat.io/ubi8/ubi-minimal ARG KVER # Required for installing `modprobe` RUN microdnf install kmod COPY --from=builder /lib/modules/USD{KVER}/simple-kmod.ko /lib/modules/USD{KVER}/ COPY --from=builder /lib/modules/USD{KVER}/simple-procfs-kmod.ko /lib/modules/USD{KVER}/ RUN depmod USD{KVER} strategy: dockerStrategy: buildArgs: - name: KMODVER value: DEMO # USD oc adm release info quay.io/openshift-release-dev/ocp-release:<cluster version>-x86_64 --image-for=driver-toolkit - name: DTK value: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:34864ccd2f4b6e385705a730864c04a40908e57acede44457a783d739e377cae - name: KVER value: 4.18.0-372.26.1.el8_6.x86_64 output: to: kind: ImageStreamTag name: simple-kmod-driver-container:demo",
"OCP_VERSION=USD(oc get clusterversion/version -ojsonpath={.status.desired.version})",
"DRIVER_TOOLKIT_IMAGE=USD(oc adm release info USDOCP_VERSION --image-for=driver-toolkit)",
"sed \"s#DRIVER_TOOLKIT_IMAGE#USD{DRIVER_TOOLKIT_IMAGE}#\" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml",
"oc create -f 0000-buildconfig.yaml",
"apiVersion: v1 kind: ServiceAccount metadata: name: simple-kmod-driver-container --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: simple-kmod-driver-container rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: simple-kmod-driver-container roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: simple-kmod-driver-container subjects: - kind: ServiceAccount name: simple-kmod-driver-container userNames: - system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container --- apiVersion: apps/v1 kind: DaemonSet metadata: name: simple-kmod-driver-container spec: selector: matchLabels: app: simple-kmod-driver-container template: metadata: labels: app: simple-kmod-driver-container spec: serviceAccount: simple-kmod-driver-container serviceAccountName: simple-kmod-driver-container containers: - image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo name: simple-kmod-driver-container imagePullPolicy: Always command: [sleep, infinity] lifecycle: postStart: exec: command: [\"modprobe\", \"-v\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] preStop: exec: command: [\"modprobe\", \"-r\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc create -f 1000-drivercontainer.yaml",
"oc get pod -n simple-kmod-demo",
"NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-1-build 0/1 Completed 0 6m simple-kmod-driver-container-b22fd 1/1 Running 0 40s simple-kmod-driver-container-jz9vn 1/1 Running 0 40s simple-kmod-driver-container-p45cc 1/1 Running 0 40s",
"oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: \"true\"",
"oc create -f nfd-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd",
"oc create -f nfd-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \"stable\" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f nfd-sub.yaml",
"oc project openshift-nfd",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: \"\" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.15 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc apply -f <filename>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s",
"skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version>",
"skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12",
"{ \"Digest\": \"sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\", }",
"skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest>",
"skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc apply -f <filename>",
"oc get nodefeaturediscovery nfd-instance -o yaml",
"oc get pods -n <nfd_namespace>",
"core: sleepInterval: 60s 1",
"core: sources: - system - custom",
"core: labelWhiteList: '^cpu-cpuid'",
"core: noPublish: true 1",
"sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT]",
"sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]",
"sources: kernel: kconfigFile: \"/path/to/kconfig\"",
"sources: kernel: configOpts: [NO_HZ, X86, DMI]",
"sources: pci: deviceClassWhitelist: [\"0200\", \"03\"]",
"sources: pci: deviceLabelFields: [class, vendor, device]",
"sources: usb: deviceClassWhitelist: [\"ef\", \"ff\"]",
"sources: pci: deviceLabelFields: [class, vendor]",
"source: custom: - name: \"my.custom.feature\" matchOn: - loadedKMod: [\"e1000e\"] - pciId: class: [\"0200\"] vendor: [\"8086\"]",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: \"example rule\" labels: \"example-custom-feature\": \"true\" # Label is created if all of the rules below match matchFeatures: # Match if \"veth\" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: [\"8086\"]}",
"oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml",
"apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: [\"SingleNUMANodeContainerLevel\"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3",
"podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help",
"nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key",
"nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml",
"nfd-topology-updater -no-publish",
"nfd-topology-updater -oneshot -no-publish",
"nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock",
"nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443",
"nfd-topology-updater -server-name-override=localhost",
"nfd-topology-updater -sleep-interval=1h",
"nfd-topology-updater -watch-namespace=rte",
"apiVersion: v1 kind: Namespace metadata: name: openshift-kmm",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0",
"oc create -f kmm-sub.yaml",
"oc get -n openshift-kmm deployments.apps kmm-operator-controller",
"NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller 1/1 1 1 97s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-kmm",
"allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: false allowPrivilegedContainer: false allowedCapabilities: - NET_BIND_SERVICE apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: [] kind: SecurityContextConstraints metadata: name: restricted-v2 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs seccompProfiles: - runtime/default supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret",
"oc apply -f kmm-security-constraint.yaml",
"oc adm policy add-scc-to-user kmm-security-constraint -z kmm-operator-controller -n openshift-kmm",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0",
"oc create -f kmm-sub.yaml",
"oc get -n openshift-kmm deployments.apps kmm-operator-controller",
"NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller 1/1 1 1 97s",
"oc edit configmap -n \"USDnamespace\" kmm-operator-manager-config",
"healthProbeBindAddress: :8081 job: gcDelay: 1h leaderElection: enabled: true resourceID: kmm.sigs.x-k8s.io webhook: disableHTTP2: true # CVE-2023-44487 port: 9443 metrics: enableAuthnAuthz: true disableHTTP2: true # CVE-2023-44487 bindAddress: 0.0.0.0:8443 secureServing: true worker: runAsUser: 0 seLinuxType: spc_t setFirmwareClassPath: /var/lib/firmware",
"oc delete pod -n \"<namespace>\" -l app.kubernetes.io/component=kmm",
"oc delete -k https://github.com/rh-ecosystem-edge/kernel-module-management/config/default",
"spec: moduleLoader: container: modprobe: moduleName: mod_a dirName: /opt firmwarePath: /firmware parameters: - param=1 modulesLoadingOrder: - mod_a - mod_b",
"oc adm policy add-scc-to-user privileged -z \"USD{serviceAccountName}\" [ -n \"USD{namespace}\" ]",
"spec: moduleLoader: container: modprobe: moduleName: mod_a inTreeModulesToRemove: [mod_a, mod_b]",
"spec: moduleLoader: container: kernelMappings: - literal: 6.0.15-300.fc37.x86_64 containerImage: \"some.registry/org/my-kmod:USD{KERNEL_FULL_VERSION}\" inTreeModulesToRemove: [<module_name>, <module_name>]",
"apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: <my_kmod> spec: moduleLoader: container: modprobe: moduleName: <my_kmod> 1 dirName: /opt 2 firmwarePath: /firmware 3 parameters: 4 - param=1 kernelMappings: 5 - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 - regexp: '^.+\\fc37\\.x86_64USD' 6 containerImage: \"some.other.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" - regexp: '^.+USD' 7 containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 8 - name: ARG_NAME value: <some_value> secrets: - name: <some_kubernetes_secret> 9 baseImageRegistryTLS: 10 insecure: false insecureSkipTLSVerify: false 11 dockerfileConfigMap: 12 name: <my_kmod_dockerfile> sign: certSecret: name: <cert_secret> 13 keySecret: name: <key_secret> 14 filesToSign: - /opt/lib/modules/USD{KERNEL_FULL_VERSION}/<my_kmod>.ko registryTLS: 15 insecure: false 16 insecureSkipTLSVerify: false serviceAccountName: <sa_module_loader> 17 devicePlugin: 18 container: image: some.registry/org/device-plugin:latest 19 env: - name: MY_DEVICE_PLUGIN_ENV_VAR value: SOME_VALUE volumeMounts: 20 - mountPath: /some/mountPath name: <device_plugin_volume> volumes: 21 - name: <device_plugin_volume> configMap: name: <some_configmap> serviceAccountName: <sa_device_plugin> 22 imageRepoSecret: 23 name: <secret_name> selector: node-role.kubernetes.io/worker: \"\"",
"ARG DTK_AUTO FROM USD{DTK_AUTO} as builder # Build steps # FROM ubi9/ubi ARG KERNEL_FULL_VERSION RUN dnf update && dnf install -y kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ Create the symbolic link RUN ln -s /lib/modules/USD{KERNEL_FULL_VERSION} /opt/lib/modules/USD{KERNEL_FULL_VERSION}/host RUN depmod -b /opt USD{KERNEL_FULL_VERSION}",
"depmod -b /opt USD{KERNEL_FULL_VERSION}+`.",
"apiVersion: v1 kind: ConfigMap metadata: name: kmm-ci-dockerfile data: dockerfile: | ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_FULL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_FULL_VERSION}/build make all FROM registry.redhat.io/ubi9/ubi-minimal ARG KERNEL_FULL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ RUN depmod -b /opt USD{KERNEL_FULL_VERSION}",
"- regexp: '^.+USD' containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 1 - name: ARG_NAME value: <some_value> secrets: 2 - name: <some_kubernetes_secret> 3 baseImageRegistryTLS: insecure: false 4 insecureSkipTLSVerify: false 5 dockerfileConfigMap: 6 name: <my_kmod_dockerfile> registryTLS: insecure: false 7 insecureSkipTLSVerify: false 8",
"ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_FULL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_FULL_VERSION}/build make all FROM ubi9/ubi-minimal ARG KERNEL_FULL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ RUN depmod -b /opt USD{KERNEL_FULL_VERSION}",
"openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv",
"oc create secret generic my-signing-key --from-file=key=<my_signing_key.priv>",
"oc create secret generic my-signing-key-pub --from-file=cert=<my_signing_key_pub.der>",
"cat sb_cert.priv | base64 -w 0 > my_signing_key2.base64",
"cat sb_cert.cer | base64 -w 0 > my_signing_key_pub.base64",
"apiVersion: v1 kind: Secret metadata: name: my-signing-key-pub namespace: default 1 type: Opaque data: cert: <base64_encoded_secureboot_public_key> --- apiVersion: v1 kind: Secret metadata: name: my-signing-key namespace: default 2 type: Opaque data: key: <base64_encoded_secureboot_private_key>",
"oc apply -f <yaml_filename>",
"oc get secret -o yaml <certificate secret name> | awk '/cert/{print USD2; exit}' | base64 -d | openssl x509 -inform der -text",
"oc get secret -o yaml <private key secret name> | awk '/key/{print USD2; exit}' | base64 -d",
"--- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module spec: moduleLoader: serviceAccountName: default container: modprobe: 1 moduleName: '<module_name>' kernelMappings: # the kmods will be deployed on all nodes in the cluster with a kernel that matches the regexp - regexp: '^.*\\.x86_64USD' # the container to produce containing the signed kmods containerImage: <image_name> 2 sign: # the image containing the unsigned kmods (we need this because we are not building the kmods within the cluster) unsignedImage: <image_name> 3 keySecret: # a secret holding the private secureboot key with the key 'key' name: <private_key_secret_name> certSecret: # a secret holding the public secureboot key with the key 'cert' name: <certificate_secret_name> filesToSign: # full path within the unsignedImage container to the kmod(s) to sign - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: # the name of a secret containing credentials to pull unsignedImage and push containerImage to the registry name: repo-pull-secret selector: kubernetes.io/arch: amd64",
"--- apiVersion: v1 kind: ConfigMap metadata: name: example-module-dockerfile namespace: <namespace> 1 data: Dockerfile: | ARG DTK_AUTO ARG KERNEL_VERSION FROM USD{DTK_AUTO} as builder WORKDIR /build/ RUN git clone -b main --single-branch https://github.com/rh-ecosystem-edge/kernel-module-management.git WORKDIR kernel-module-management/ci/kmm-kmod/ RUN make FROM registry.access.redhat.com/ubi9/ubi:latest ARG KERNEL_VERSION RUN yum -y install kmod && yum clean all RUN mkdir -p /opt/lib/modules/USD{KERNEL_VERSION} COPY --from=builder /build/kernel-module-management/ci/kmm-kmod/*.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN /usr/sbin/depmod -b /opt --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module namespace: <namespace> 2 spec: moduleLoader: serviceAccountName: default 3 container: modprobe: moduleName: simple_kmod kernelMappings: - regexp: '^.*\\.x86_64USD' containerImage: <final_driver_container_name> build: dockerfileConfigMap: name: example-module-dockerfile sign: keySecret: name: <private_key_secret_name> certSecret: name: <certificate_secret_name> filesToSign: - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: 4 name: repo-pull-secret selector: # top-level selector kubernetes.io/arch: amd64",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub spec: channel: stable installPlanApproval: Automatic name: kernel-module-management-hub source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: hub.kmm.sigs.x-k8s.io/v1beta1 kind: ManagedClusterModule metadata: name: <my-mcm> # No namespace, because this resource is cluster-scoped. spec: moduleSpec: 1 selector: 2 node-wants-my-mcm: 'true' spokeNamespace: <some-namespace> 3 selector: 4 wants-my-mcm: 'true'",
"--- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: install-kmm spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-kmm spec: severity: high object-templates: - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-kmm - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kmm namespace: openshift-kmm spec: upgradeStrategy: Default - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: stable config: env: - name: KMM_MANAGED 1 value: \"1\" installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: kmm-module-manager rules: - apiGroups: [kmm.sigs.x-k8s.io] resources: [modules] verbs: [create, delete, get, list, patch, update, watch] - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: klusterlet-kmm subjects: - kind: ServiceAccount name: klusterlet-work-sa namespace: open-cluster-management-agent roleRef: kind: ClusterRole name: kmm-module-manager apiGroup: rbac.authorization.k8s.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: all-managed-clusters spec: clusterSelector: 2 matchExpressions: [] --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: install-kmm placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: all-managed-clusters subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-kmm",
"oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>-",
"oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>=<desired_version>",
"ProduceMachineConfig(machineConfigName, machineConfigPoolRef, kernelModuleImage, kernelModuleName string) (string, error)",
"kind: MachineConfigPool metadata: name: sfc spec: machineConfigSelector: 1 matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, sfc]} nodeSelector: 2 matchLabels: node-role.kubernetes.io/sfc: \"\" paused: false maxUnavailable: 1",
"metadata: labels: machineconfiguration.opensfhit.io/role: master",
"metadata: labels: machineconfiguration.opensfhit.io/role: worker",
"modprobe: ERROR: could not insert '<your_kmod_name>': Required key not available",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 99-worker-kernel-args-firmware-path spec: kernelArguments: - 'firmware_class.path=/var/lib/firmware'",
"FROM registry.redhat.io/ubi9/ubi-minimal as builder Build the kmod RUN [\"mkdir\", \"/firmware\"] RUN [\"curl\", \"-o\", \"/firmware/firmware.bin\", \"https://artifacts.example.com/firmware.bin\"] FROM registry.redhat.io/ubi9/ubi-minimal Copy the kmod, install modprobe, run depmod COPY --from=builder /firmware /firmware",
"apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: my-kmod spec: moduleLoader: container: modprobe: moduleName: my-kmod # Required firmwarePath: /firmware 1",
"oc logs -fn openshift-kmm deployments/kmm-operator-controller",
"oc logs -fn openshift-kmm deployments/kmm-operator-webhook-server",
"oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller",
"oc logs -fn openshift-kmm deployments/kmm-operator-hub-webhook-server",
"oc describe modules.kmm.sigs.x-k8s.io kmm-ci-a [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal BuildCreated 2m29s kmm Build created for kernel 6.6.2-201.fc39.x86_64 Normal BuildSucceeded 63s kmm Build job succeeded for kernel 6.6.2-201.fc39.x86_64 Normal SignCreated 64s (x2 over 64s) kmm Sign created for kernel 6.6.2-201.fc39.x86_64 Normal SignSucceeded 57s kmm Sign job succeeded for kernel 6.6.2-201.fc39.x86_64",
"oc describe node my-node [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- [...] Normal ModuleLoaded 4m17s kmm Module default/kmm-ci-a loaded into the kernel Normal ModuleUnloaded 2s kmm Module default/kmm-ci-a unloaded from the kernel",
"export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm kmm-operator-controller -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGE_MUST_GATHER\")].value}') oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather",
"oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather",
"oc logs -fn openshift-kmm deployments/kmm-operator-controller",
"I0228 09:36:37.352405 1 request.go:682] Waited for 1.001998746s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/machine.openshift.io/v1beta1?timeout=32s I0228 09:36:40.767060 1 listener.go:44] kmm/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0228 09:36:40.769483 1 main.go:234] kmm/setup \"msg\"=\"starting manager\" I0228 09:36:40.769907 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0228 09:36:40.770025 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0228 09:36:40.770128 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784396 1 leaderelection.go:258] successfully acquired lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784876 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.784925 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.DaemonSet\" I0228 09:36:40.784968 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.785001 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.785025 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.785039 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" I0228 09:36:40.785458 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\" \"source\"=\"kind source: *v1.Pod\" I0228 09:36:40.786947 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787406 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.787474 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.787488 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.787603 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"NodeKernel\" 
\"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.787634 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"NodeKernel\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" I0228 09:36:40.787680 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" I0228 09:36:40.785607 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0228 09:36:40.787822 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidationOCP\" I0228 09:36:40.787853 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0228 09:36:40.787879 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787905 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" I0228 09:36:40.786489 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\"",
"export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm-hub kmm-operator-hub-controller -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGE_MUST_GATHER\")].value}') oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather -u",
"oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather -u",
"oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller",
"I0417 11:34:08.807472 1 request.go:682] Waited for 1.023403273s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/tuned.openshift.io/v1?timeout=32s I0417 11:34:12.373413 1 listener.go:44] kmm-hub/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0417 11:34:12.376253 1 main.go:150] kmm-hub/setup \"msg\"=\"Adding controller\" \"name\"=\"ManagedClusterModule\" I0417 11:34:12.376621 1 main.go:186] kmm-hub/setup \"msg\"=\"starting manager\" I0417 11:34:12.377690 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.378078 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0417 11:34:12.378222 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0417 11:34:12.395703 1 leaderelection.go:258] successfully acquired lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.396334 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1beta1.ManagedClusterModule\" I0417 11:34:12.396403 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManifestWork\" I0417 11:34:12.396430 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Build\" I0417 11:34:12.396469 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Job\" I0417 11:34:12.396522 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManagedCluster\" I0417 11:34:12.396543 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" I0417 11:34:12.397175 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0417 11:34:12.397221 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0417 11:34:12.498335 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498570 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498629 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498687 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" 
\"managedcluster\"=\"sno1-0\" I0417 11:34:12.498750 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.498801 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.501947 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"worker count\"=1 I0417 11:34:12.501948 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"worker count\"=1 I0417 11:34:12.502285 1 imagestream_reconciler.go:50] kmm-hub \"msg\"=\"registered imagestream info mapping\" \"ImageStream\"={\"name\":\"driver-toolkit\",\"namespace\":\"openshift\"} \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"dtkImage\"=\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df42b4785a7a662b30da53bdb0d206120cf4d24b45674227b16051ba4b7c3934\" \"name\"=\"driver-toolkit\" \"namespace\"=\"openshift\" \"osImageVersion\"=\"412.86.202302211547-0\" \"reconcileID\"=\"e709ff0a-5664-4007-8270-49b5dff8bae9\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/specialized_hardware_and_driver_enablement/index |
Chapter 2. Installation | Chapter 2. Installation This chapter guides you through the steps to install AMQ Core Protocol JMS in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To build programs with AMQ Core Protocol JMS, you must install Apache Maven. To use AMQ Core Protocol JMS, you must install Java. 2.2. Using the Red Hat Maven repository Configure your Maven environment to download the client library from the Red Hat Maven repository. Procedure Add the Red Hat repository to your Maven settings or POM file. For example configuration files, see Section B.1, "Using the online repository". <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> Add the library dependency to your POM file. <dependency> <groupId>org.apache.activemq</groupId> <artifactId>artemis-jms-client</artifactId> <version>2.16.0.redhat-00022</version> </dependency> The client is now available in your Maven project. (A combined pom.xml sketch assembled from these two snippets appears after the command list below.) 2.3. Installing a local Maven repository As an alternative to the online repository, AMQ Core Protocol JMS can be installed to your local filesystem as a file-based Maven repository. Procedure Use your subscription to download the AMQ Broker 7.8.2 Maven repository .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. $ unzip amq-broker-7.8.2-maven-repository.zip On Windows, right-click the .zip file and select Extract All. Configure Maven to use the repository in the maven-repository directory inside the extracted install directory. For more information, see Section B.2, "Using a local repository". 2.4. Installing the examples Procedure Use your subscription to download the AMQ Broker 7.8.2 .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. $ unzip amq-broker-7.8.2.zip On Windows, right-click the .zip file and select Extract All. When you extract the contents of the .zip file, a directory named amq-broker-7.8.2 is created. This is the top-level directory of the installation and is referred to as <install-dir> throughout this document. | [
"<repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository>",
"<dependency> <groupId>org.apache.activemq</groupId> <artifactId>artemis-jms-client</artifactId> <version>2.16.0.redhat-00022</version> </dependency>",
"unzip amq-broker-7.8.2-maven-repository.zip",
"unzip amq-broker-7.8.2.zip"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_core_protocol_jms_client/installation |
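The Maven setup in section 2.2 above splits the configuration across two snippets. As a minimal sketch of how they could be combined in practice (not part of the source documentation), the following pom.xml wires both together; the project coordinates com.example:jms-example:1.0.0 are placeholder assumptions, while the repository URL and the artemis-jms-client coordinates are taken verbatim from the content above.

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <!-- Placeholder project coordinates; replace with your own. -->
  <groupId>com.example</groupId>
  <artifactId>jms-example</artifactId>
  <version>1.0.0</version>

  <!-- Red Hat GA repository, as given in section 2.2 of the content above. -->
  <repositories>
    <repository>
      <id>red-hat-ga</id>
      <url>https://maven.repository.redhat.com/ga</url>
    </repository>
  </repositories>

  <!-- AMQ Core Protocol JMS client dependency, as given in section 2.2. -->
  <dependencies>
    <dependency>
      <groupId>org.apache.activemq</groupId>
      <artifactId>artemis-jms-client</artifactId>
      <version>2.16.0.redhat-00022</version>
    </dependency>
  </dependencies>
</project>

Assuming such a POM, a standard build (for example, mvn compile) would resolve artemis-jms-client from the Red Hat GA repository, which is the outcome section 2.2 describes.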