title | content | commands | url
---|---|---|---|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_ibm_z_infrastructure/making-open-source-more-inclusive
|
8.5. Configuration Options for Using Short Names to Resolve and Authenticate Users and Groups
|
8.5. Configuration Options for Using Short Names to Resolve and Authenticate Users and Groups This section describes configuration options enabling you to use short user or group names instead of the user_name@domain or domain\user_name fully qualified names format to resolve and authenticate users and groups in an Active Directory (AD) environment. You can configure this: in Identity Management (IdM) that trusts AD on Red Hat Enterprise Linux joined to an AD using SSSD 8.5.1. How Domain Resolution Works You can use the domain resolution order option to specify the order in which a list of domains is searched to return a match for a given user name. You can set the option: on the server. See: Section 8.5.2.1, "Setting the Domain Resolution Order Globally" Section 8.5.2.2, "Setting the Domain Resolution Order for an ID view" on the client. See Section 8.5.3, "Configuring the Domain Resolution Order on an IdM Client" In environments with an Active Directory trust, applying one or both of the server-based options is recommended. From the perspective of a particular client, the domain resolution order option can be set in more than one of the three locations above. The order in which a client consults the three locations is: the local sssd.conf configuration the id view configuration the global IdM configuration Only the domain resolution order setting found first will be used. In environments in which Red Hat Enterprise Linux is directly integrated into an AD, you can only set the domain resolution order on the client. Note You must use qualified names if: A user name exists in multiple domains The SSSD configuration includes the default_domain_suffix option and you want to make a request towards a domain not specified with that option 8.5.2. Configuring the Domain Resolution Order on an Identity Management Server Select the server-based configuration if a large number of clients in a domain or subdomain should use an identical domain resolution order. 8.5.2.1. Setting the Domain Resolution Order Globally Select this option for setting the domain resolution order to all the clients in the trust. In order to do this, use the ipa config-mod command. For example, in an IdM domain that trusts an AD forest with multiple child domains: With the domain resolution order set in this way, users from both the IdM domain and from the trusted AD forest can log in using short names only. 8.5.2.2. Setting the Domain Resolution Order for an ID view Select this option to apply the setting to the clients in a specific domain. For example, on your subdomain server, server.idm.example.com , you observe many more logins from the subdomain2.ad.example.com subdomain than from subdomain1.ad.example.com . The global resolution order states, however, that the subdomain1.ad.example.com subdomain user database is tried out before subdomain2.ad.example.com when resolving user names. To set a different order for certain servers, set up a domain resolution order for a specific view: Create an ID view with the domain resolution order option set: Apply the view on the clients. For example: For further information on ID views, see Chapter 8, Using ID Views in Active Directory Environments . 8.5.3. Configuring the Domain Resolution Order on an IdM Client Set the domain resolution order on the client if you want to set it on a low number of clients or if the clients are directly connected to AD. 
Set the domain_resolution_order option in the [sssd] section of the /etc/sssd/sssd.conf file, for example: For further information on configuring the domain_resolution_order option, see the sssd.conf(5) man page.
|
[
"ipa config-mod --domain-resolution-order=' idm.example.com:ad.example.com:subdomain1.ad.example.com:subdomain2.ad.example.com ' Maximum username length: 32 Home directory base: /home Domain Resolution Order: idm.example.com:ad.example.com:subdomain1.ad.example.com:subdomain2.ad.example.com",
"ipa idview-add example_view --desc \" ID view for custom shortname resolution on server.idm.example.com \" --domain-resolution-order subdomain2.ad.example.com:subdomain1.ad.example.com --------------------------------- Added ID View \"example_view\" --------------------------------- ID View Name: example_view Description: ID view for custom shortname resolution on server.idm.example.com Domain Resolution Order: subdomain2.ad.example.com:subdomain1.ad.example.com",
"ipa idview-apply example_view --hosts server.idm.example.com ----------------------------------- Applied ID View \"example_view\" ----------------------------------- hosts: server.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"domain_resolution_order = subdomain1.ad.example.com , subdomain2.ad.example.com"
] |
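The client-side configuration described in Section 8.5.3 places this option in the [sssd] section of /etc/sssd/sssd.conf. The following is only a minimal sketch that reuses the example subdomains above and assumes a systemd-managed SSSD service that must be restarted for the change to take effect:

# /etc/sssd/sssd.conf (excerpt)
[sssd]
domain_resolution_order = subdomain1.ad.example.com, subdomain2.ad.example.com

# Apply the change (assumes systemd)
$ systemctl restart sssd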
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/short-names
|
A.2. The dmsetup Command
|
A.2. The dmsetup Command The dmsetup command is a command line wrapper for communication with the Device Mapper. For general system information about LVM devices, you may find the info , ls , status , and deps options of the dmsetup command to be useful, as described in the following subsections. For information about additional options and capabilities of the dmsetup command, see the dmsetup (8) man page. A.2.1. The dmsetup info Command The dmsetup info device command provides summary information about Device Mapper devices. If you do not specify a device name, the output is information about all of the currently configured Device Mapper devices. If you specify a device, then this command yields information for that device only. The dmsetup info command provides information in the following categories: Name The name of the device. An LVM device is expressed as the volume group name and the logical volume name separated by a hyphen. A hyphen in the original name is translated to two hyphens. During standard LVM operations, you should not use the name of an LVM device in this format to specify an LVM device directly, but instead you should use the vg / lv alternative. State Possible device states are SUSPENDED , ACTIVE , and READ-ONLY . The dmsetup suspend command sets a device state to SUSPENDED . When a device is suspended, all I/O operations to that device stop. The dmsetup resume command restores a device state to ACTIVE . Read Ahead The number of data blocks that the system reads ahead for any open file on which read operations are ongoing. By default, the kernel chooses a suitable value automatically. You can change this value with the --readahead option of the dmsetup command. Tables present Possible states for this category are LIVE and INACTIVE . An INACTIVE state indicates that a table has been loaded which will be swapped in when a dmsetup resume command restores a device state to ACTIVE , at which point the table's state becomes LIVE . For information, see the dmsetup man page. Open count The open reference count indicates how many times the device is opened. A mount command opens a device. Event number The current number of events received. Issuing a dmsetup wait n command allows the user to wait for the n'th event, blocking the call until it is received. Major, minor Major and minor device number Number of targets The number of fragments that make up a device. For example, a linear device spanning 3 disks would have 3 targets. A linear device composed of the beginning and end of a disk, but not the middle would have 2 targets. UUID UUID of the device. The following example shows partial output for the dmsetup info command.
|
[
"dmsetup info Name: testgfsvg-testgfslv1 State: ACTIVE Read Ahead: 256 Tables present: LIVE Open count: 0 Event number: 0 Major, minor: 253, 2 Number of targets: 2 UUID: LVM-K528WUGQgPadNXYcFrrf9LnPlUMswgkCkpgPIgYzSvigM7SfeWCypddNSWtNzc2N Name: VolGroup00-LogVol00 State: ACTIVE Read Ahead: 256 Tables present: LIVE Open count: 1 Event number: 0 Major, minor: 253, 0 Number of targets: 1 UUID: LVM-tOcS1kqFV9drb0X1Vr8sxeYP0tqcrpdegyqj5lZxe45JMGlmvtqLmbLpBcenh2L3"
] |
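The subcommands described above can also target a single device. The following sketch reuses the testgfsvg-testgfslv1 device name from the example output; the suspend and resume pair merely illustrates how the State field moves between SUSPENDED and ACTIVE, and is best tried on a test device because suspending stops all I/O to it:

# Summary information for one device only
$ dmsetup info testgfsvg-testgfslv1

# Suspend the device (State: SUSPENDED), then restore it (State: ACTIVE)
$ dmsetup suspend testgfsvg-testgfslv1
$ dmsetup resume testgfsvg-testgfslv1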
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/dmsetup
|
function::jiffies
|
function::jiffies Name function::jiffies - Kernel jiffies count Synopsis Arguments None Description This function returns the value of the kernel jiffies variable. This value is incremented periodically by timer interrupts, and may wrap around a 32-bit or 64-bit boundary. See HZ .
|
[
"jiffies:long()"
] |
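As a minimal illustration of calling this tapset function, the following one-liner prints the jiffies count once per second; it assumes stap is installed and run with sufficient privileges, and the timer probe and format string are chosen purely for demonstration:

$ stap -e 'probe timer.s(1) { printf("jiffies: %d\n", jiffies()) }'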
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-jiffies
|
Chapter 4. Administrator tasks
|
Chapter 4. Administrator tasks 4.1. Adding Operators to a cluster Cluster administrators can install Operators to an OpenShift Container Platform cluster by subscribing Operators to namespaces with OperatorHub. 4.1.1. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. As a user with the proper permissions, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose a specific namespace in which to install the Operator. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Understanding OperatorHub 4.1.2. Installing from OperatorHub using the web console You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page: Select one of the following: All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available. A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Select an Update Channel (if more than one is available). Select Automatic or Manual approval strategy, as described earlier. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. 
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 4.1.3. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Install the oc command to your local system. Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces , then the openshift-operators namespace already has an appropriate Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one. Note The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode. 
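Before creating the objects manually, you can check whether the target namespace already has a suitable Operator group and what install modes the package advertises. This is only a sketch: <namespace> and <operator_name> are placeholders, and the jsonpath expression is illustrative:

$ oc get operatorgroups -n <namespace>

# Inspect the install modes reported by the package manifest
$ oc get packagemanifests <operator_name> -n openshift-marketplace \
    -o jsonpath='{.status.channels[*].currentCSVDesc.installModes}'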
Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace> Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml : Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate Environment Variables in the container. 8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of Tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Create the Subscription object: USD oc apply -f sub.yaml At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Additional resources About Operator groups 4.1.4. Installing a specific version of an Operator You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions OpenShift CLI ( oc ) installed Procedure Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. 
For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.4.0: Subscription with a specific starting Operator version apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2 1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. 2 Set a specific version of an Operator CSV. Create the Subscription object: USD oc apply -f sub.yaml Manually approve the pending install plan to complete the Operator installation. Additional resources Manually approving a pending Operator update 4.1.5. Pod placement of Operator workloads By default, Operator Lifecycle Manager (OLM) places pods on arbitrary worker nodes when installing an Operator or deploying Operand workloads. As an administrator, you can use projects with a combination of node selectors, taints, and tolerations to control the placement of Operators and Operands to specific nodes. Controlling pod placement of Operator and Operand workloads has the following prerequisites: Determine a node or set of nodes to target for the pods per your requirements. If available, note an existing label, such as node-role.kubernetes.io/app , that identifies the node or nodes. Otherwise, add a label, such as myoperator , by using a machine set or editing the node directly. You will use this label in a later step as the node selector on your project. If you want to ensure that only pods with a certain label are allowed to run on the nodes, while steering unrelated workloads to other nodes, add a taint to the node or nodes by using a machine set or editing the node directly. Use an effect that ensures that new pods that do not match the taint cannot be scheduled on the nodes. For example, a myoperator:NoSchedule taint ensures that new pods that do not match the taint are not scheduled onto that node, but existing pods on the node are allowed to remain. Create a project that is configured with a default node selector and, if you added a taint, a matching toleration. At this point, the project you created can be used to steer pods towards the specified nodes in the following scenarios: For Operator pods Administrators can create a Subscription object in the project. As a result, the Operator pods are placed on the specified nodes. For Operand pods Using an installed Operator, users can create an application in the project, which places the custom resource (CR) owned by the Operator in the project. As a result, the Operand pods are placed on the specified nodes, unless the Operator is deploying cluster-wide objects or resources in other namespaces, in which case this customized pod placement does not apply. Additional resources Adding taints and tolerations manually to nodes or with machine sets Creating project-wide node selectors Creating a project with a node selector and toleration 4.2. Updating installed Operators As a cluster administrator, you can update Operators that have been previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster. 4.2.1. 
Preparing for an Operator update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel. The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note You cannot change installed Operators to a channel that is older than the current channel. Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators: Red Hat OpenShift Container Platform Operator Update Information Checker You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included. 4.2.2. Changing the update channel for an Operator You can change the update channel for an Operator by using the OpenShift Container Platform web console. Tip If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. Click the name of the update channel under Channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 4.2.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any update requiring approval are displayed to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 4.3. 
Deleting Operators from a cluster The following describes how to delete Operators that were previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster. 4.3.1. Deleting Operators from a cluster using the web console Cluster administrators can delete installed Operators from a selected namespace by using the web console. Prerequisites Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it. On the right side of the Operator Details page, select Uninstall Operator from the Actions list. An Uninstall Operator? dialog box is displayed. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates. Note This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs. 4.3.2. Deleting Operators from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. oc command installed on workstation. Procedure Check the current version of the subscribed Operator (for example, jaeger ) in the currentCSV field: USD oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV Example output currentCSV: jaeger-operator.v1.8.2 Delete the subscription (for example, jaeger ): USD oc delete subscription jaeger -n openshift-operators Example output subscription.operators.coreos.com "jaeger" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the step: USD oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators Example output clusterserviceversion.operators.coreos.com "jaeger-operator.v1.8.2" deleted 4.3.3. Refreshing failing subscriptions In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace namespace that are failing with the following errors: Example output ImagePullBackOff for Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e" Example output rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade. You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator. Prerequisites You have a failing subscription that is unable to pull an inaccessible bundle image. You have confirmed that the correct bundle image is accessible. 
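One way to confirm that last prerequisite is to pull the bundle image directly. This is only a sketch that assumes podman is available on a host with the same network access and registry credentials as the cluster; <bundle_image_pull_spec> is a placeholder:

$ podman pull <bundle_image_pull_spec>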
Procedure Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed: USD oc get sub,csv -n <namespace> Example output NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded Delete the subscription: USD oc delete subscription <subscription_name> -n <namespace> Delete the cluster service version: USD oc delete csv <csv_name> -n <namespace> Get the names of any failing jobs and related config maps in the openshift-marketplace namespace: USD oc get job,configmap -n openshift-marketplace Example output NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s Delete the job: USD oc delete job <job_name> -n openshift-marketplace This ensures pods that try to pull the inaccessible image are not recreated. Delete the config map: USD oc delete configmap <configmap_name> -n openshift-marketplace Reinstall the Operator using OperatorHub in the web console. Verification Check that the Operator has been reinstalled successfully: USD oc get sub,csv,installplan -n <namespace> 4.4. Configuring proxy support in Operator Lifecycle Manager If a global proxy is configured on the OpenShift Container Platform cluster, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. However, you can also configure installed Operators to override the global proxy or inject a custom CA certificate. Additional resources Configuring the cluster-wide proxy Configuring a custom PKI (custom CA certificate) Developing Operators that support proxy settings for Go , Ansible , and Helm 4.4.1. Overriding proxy settings of an Operator If a cluster-wide egress proxy is configured, Operators running with Operator Lifecycle Manager (OLM) inherit the cluster-wide proxy settings on their deployments. Cluster administrators can also override these proxy settings by configuring the subscription of an Operator. Important Operators must handle setting environment variables for proxy settings in the pods for any managed Operands. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Select the Operator and click Install . On the Install Operator page, modify the Subscription object to include one or more of the following environment variables in the spec section: HTTP_PROXY HTTPS_PROXY NO_PROXY For example: Subscription object with proxy setting overrides apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide Note These environment variables can also be unset using an empty value to remove any previously set cluster-wide or custom proxy settings. 
OLM handles these environment variables as a unit; if at least one of them is set, all three are considered overridden and the cluster-wide defaults are not used for the deployments of the subscribed Operator. Click Install to make the Operator available to the selected namespaces. After the CSV for the Operator appears in the relevant namespace, you can verify that custom proxy environment variables are set in the deployment. For example, using the CLI: USD oc get deployment -n openshift-operators \ etcd-operator -o yaml \ | grep -i "PROXY" -A 2 Example output - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c ... 4.4.2. Injecting a custom CA certificate When a cluster administrator adds a custom CA certificate to a cluster using a config map, the Cluster Network Operator merges the user-provided certificates and system CA certificates into a single bundle. You can inject this merged bundle into your Operator running on Operator Lifecycle Manager (OLM), which is useful if you have a man-in-the-middle HTTPS proxy. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Custom CA certificate added to the cluster using a config map. Desired Operator installed and running on OLM. Procedure Create an empty config map in the namespace where the subscription for your Operator exists and include the following label: apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: "true" 2 1 Name of the config map. 2 Requests the Cluster Network Operator to inject the merged bundle. After creating this config map, it is immediately populated with the certificate contents of the merged bundle. Update your the Subscription object to include a spec.config section that mounts the trusted-ca config map as a volume to each container within a pod that requires a custom CA: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true 1 Add a config section if it does not exist. 2 Specify labels to match pods that are owned by the Operator. 3 Create a trusted-ca volume. 4 ca-bundle.crt is required as the config map key. 5 tls-ca-bundle.pem is required as the config map path. 6 Create a trusted-ca volume mount. Note Deployments of an Operator can fail to validate the authority and display a x509 certificate signed by unknown authority error. This error can occur even after injecting a custom CA when using the subscription of an Operator. In this case, you can set the mountPath as /etc/ssl/certs for trusted-ca by using the subscription of an Operator. 4.5. Viewing Operator status Understanding the state of the system in Operator Lifecycle Manager (OLM) is important for making decisions about and debugging problems with installed Operators. OLM provides insight into subscriptions and related catalog sources regarding their state and actions performed. This helps users better understand the healthiness of their Operators. 4.5.1. Operator subscription condition types Subscriptions can report the following condition types: Table 4.1. 
Subscription condition types Condition Description CatalogSourcesUnhealthy Some or all of the catalog sources to be used in resolution are unhealthy. InstallPlanMissing An install plan for a subscription is missing. InstallPlanPending An install plan for a subscription is pending installation. InstallPlanFailed An install plan for a subscription has failed. Note Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. Additional resources Refreshing failing subscriptions 4.5.2. Viewing Operator subscription status by using the CLI You can view Operator subscription status by using the CLI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List Operator subscriptions: USD oc get subs -n <operator_namespace> Use the oc describe command to inspect a Subscription resource: USD oc describe sub <subscription_name> -n <operator_namespace> In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy: Example output Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy Note Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. 4.5.3. Viewing Operator catalog source status by using the CLI You can view the status of an Operator catalog source by using the CLI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources: USD oc get catalogsources -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m Use the oc describe command to get more details and status about a catalog source: USD oc describe catalogsource example-catalog -n openshift-marketplace Example output Name: example-catalog Namespace: openshift-marketplace ... Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace In the preceding example output, the last observed state is TRANSIENT_FAILURE . This state indicates that there is a problem establishing a connection for the catalog source. 
List the pods in the namespace where your catalog source was created: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff . This status indicates that there is an issue pulling the catalog source's index image. Use the oc describe command to inspect a pod for more detailed information: USD oc describe pod example-catalog-bwt8z -n openshift-marketplace Example output Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull In the preceding example output, the error messages indicate that the catalog source's index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials. Additional resources Operator Lifecycle Manager concepts and resources Catalog source gRPC documentation: States of Connectivity Accessing images for Operators from private registries 4.6. Managing Operator conditions As a cluster administrator, you can manage Operator conditions by using Operator Lifecycle Manager (OLM). 4.6.1. Overriding Operator conditions As a cluster administrator, you might want to ignore a supported Operator condition reported by an Operator. When present, Operator conditions in the Spec.Overrides array override the conditions in the Spec.Conditions array, allowing cluster administrators to deal with situations where an Operator is incorrectly reporting a state to Operator Lifecycle Manager (OLM). Note By default, the Spec.Overrides array is not present in an OperatorCondition object until it is added by a cluster administrator. The Spec.Conditions array is also not present until it is either added by a user or as a result of custom Operator logic. For example, consider a known version of an Operator that always communicates that it is not upgradeable. In this instance, you might want to upgrade the Operator despite the Operator communicating that it is not upgradeable. This could be accomplished by overriding the Operator condition by adding the condition type and status to the Spec.Overrides array in the OperatorCondition object. 
Prerequisites An Operator with an OperatorCondition object, installed using OLM. Procedure Edit the OperatorCondition object for the Operator: USD oc edit operatorcondition <name> Add a Spec.Overrides array to the object: Example Operator condition override apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: "True" reason: "upgradeIsSafe" message: "This is a known issue with the Operator where it always reports that it cannot be upgraded." conditions: - type: Upgradeable status: "False" reason: "migration" message: "The operator is performing a migration." lastTransitionTime: "2020-08-24T23:15:55Z" 1 Allows the cluster administrator to change the upgrade readiness to True . 4.6.2. Updating your Operator to use Operator conditions Operator Lifecycle Manager (OLM) automatically creates an OperatorCondition resource for each ClusterServiceVersion resource that it reconciles. All service accounts in the CSV are granted the RBAC to interact with the OperatorCondition owned by the Operator. An Operator author can develop their Operator to use the operator-lib library such that, after the Operator has been deployed by OLM, it can set its own conditions. For more resources about setting Operator conditions as an Operator author, see the Enabling Operator conditions page. 4.6.2.1. Setting defaults In an effort to remain backwards compatible, OLM treats the absence of an OperatorCondition resource as opting out of the condition. Therefore, an Operator that opts in to using Operator conditions should set default conditions before the ready probe for the pod is set to true . This provides the Operator with a grace period to update the condition to the correct state. 4.6.3. Additional resources Operator conditions 4.7. Allowing non-cluster administrators to install Operators Cluster administrators can use Operator groups to allow regular users to install Operators. Additional resources Operator groups 4.7.1. Understanding Operator installation policy Operators can require wide privileges to run, and the required privileges can change between versions. Operator Lifecycle Manager (OLM) runs with cluster-admin privileges. By default, Operator authors can specify any set of permissions in the cluster service version (CSV), and OLM consequently grants it to the Operator. To ensure that an Operator cannot achieve cluster-scoped privileges and that users cannot escalate privileges using OLM, Cluster administrators can manually audit Operators before they are added to the cluster. Cluster administrators are also provided tools for determining and constraining which actions are allowed during an Operator installation or upgrade using service accounts. Cluster administrators can associate an Operator group with a service account that has a set of privileges granted to it. The service account sets policy on Operators to ensure they only run within predetermined boundaries by using role-based access control (RBAC) rules. As a result, the Operator is unable to do anything that is not explicitly permitted by those rules. By employing Operator groups, users with enough privileges can install Operators with a limited scope. As a result, more of the Operator Framework tools can safely be made available to more users, providing a richer experience for building applications with Operators. 
Note Role-based access control (RBAC) for Subscription objects is automatically granted to every user with the edit or admin role in a namespace. However, RBAC does not exist on OperatorGroup objects; this absence is what prevents regular users from installing Operators. Pre-installing Operator groups is effectively what gives installation privileges. Keep the following points in mind when associating an Operator group with a service account: The APIService and CustomResourceDefinition resources are always created by OLM using the cluster-admin role. A service account associated with an Operator group should never be granted privileges to write these resources. Any Operator tied to this Operator group is now confined to the permissions granted to the specified service account. If the Operator asks for permissions that are outside the scope of the service account, the install fails with appropriate errors so the cluster administrator can troubleshoot and resolve the issue. 4.7.1.1. Installation scenarios When determining whether an Operator can be installed or upgraded on a cluster, Operator Lifecycle Manager (OLM) considers the following scenarios: A cluster administrator creates a new Operator group and specifies a service account. All Operator(s) associated with this Operator group are installed and run against the privileges granted to the service account. A cluster administrator creates a new Operator group and does not specify any service account. OpenShift Container Platform maintains backward compatibility, so the default behavior remains and Operator installs and upgrades are permitted. For existing Operator groups that do not specify a service account, the default behavior remains and Operator installs and upgrades are permitted. A cluster administrator updates an existing Operator group and specifies a service account. OLM allows the existing Operator to continue to run with their current privileges. When such an existing Operator is going through an upgrade, it is reinstalled and run against the privileges granted to the service account like any new Operator. A service account specified by an Operator group changes by adding or removing permissions, or the existing service account is swapped with a new one. When existing Operators go through an upgrade, it is reinstalled and run against the privileges granted to the updated service account like any new Operator. A cluster administrator removes the service account from an Operator group. The default behavior remains and Operator installs and upgrades are permitted. 4.7.1.2. Installation workflow When an Operator group is tied to a service account and an Operator is installed or upgraded, Operator Lifecycle Manager (OLM) uses the following workflow: The given Subscription object is picked up by OLM. OLM fetches the Operator group tied to this subscription. OLM determines that the Operator group has a service account specified. OLM creates a client scoped to the service account and uses the scoped client to install the Operator. This ensures that any permission requested by the Operator is always confined to that of the service account in the Operator group. OLM creates a new service account with the set of permissions specified in the CSV and assigns it to the Operator. The Operator runs as the assigned service account. 4.7.2. Scoping Operator installations To provide scoping rules to Operator installations and upgrades on Operator Lifecycle Manager (OLM), associate a service account with an Operator group. 
Using this example, a cluster administrator can confine a set of Operators to a designated namespace. Procedure Create a new namespace: USD cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF Allocate permissions that you want the Operator(s) to be confined to. This involves creating a new service account, relevant role(s), and role binding(s). USD cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF The following example grants the service account permissions to do anything in the designated namespace for simplicity. In a production environment, you should create a more fine-grained set of permissions: USD cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF Create an OperatorGroup object in the designated namespace. This Operator group targets the designated namespace to ensure that its tenancy is confined to it. In addition, Operator groups allow a user to specify a service account. Specify the service account created in the step: USD cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped targetNamespaces: - scoped EOF Any Operator installed in the designated namespace is tied to this Operator group and therefore to the service account specified. Create a Subscription object in the designated namespace to install an Operator: USD cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd namespace: scoped spec: channel: singlenamespace-alpha name: etcd source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF 1 Specify a catalog source that already exists in the designated namespace or one that is in the global catalog namespace. 2 Specify a namespace where the catalog source was created. Any Operator tied to this Operator group is confined to the permissions granted to the specified service account. If the Operator requests permissions that are outside the scope of the service account, the installation fails with relevant errors. 4.7.2.1. Fine-grained permissions Operator Lifecycle Manager (OLM) uses the service account specified in an Operator group to create or update the following resources related to the Operator being installed: ClusterServiceVersion Subscription Secret ServiceAccount Service ClusterRole and ClusterRoleBinding Role and RoleBinding To confine Operators to a designated namespace, cluster administrators can start by granting the following permissions to the service account: Note The following role is a generic example and additional rules might be required based on the specific Operator. 
kind: Role rules: - apiGroups: ["operators.coreos.com"] resources: ["subscriptions", "clusterserviceversions"] verbs: ["get", "create", "update", "patch"] - apiGroups: [""] resources: ["services", "serviceaccounts"] verbs: ["get", "create", "update", "patch"] - apiGroups: ["rbac.authorization.k8s.io"] resources: ["roles", "rolebindings"] verbs: ["get", "create", "update", "patch"] - apiGroups: ["apps"] 1 resources: ["deployments"] verbs: ["list", "watch", "get", "create", "update", "patch", "delete"] - apiGroups: [""] 2 resources: ["pods"] verbs: ["list", "watch", "get", "create", "update", "patch", "delete"] 1 2 Add permissions to create other resources, such as deployments and pods shown here. In addition, if any Operator specifies a pull secret, the following permissions must also be added: kind: ClusterRole 1 rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get"] --- kind: Role rules: - apiGroups: [""] resources: ["secrets"] verbs: ["create", "update", "patch"] 1 Required to get the secret from the OLM namespace. 4.7.3. Operator catalog access control When an Operator catalog is created in the global catalog namespace openshift-marketplace , the catalog's Operators are made available cluster-wide to all namespaces. A catalog created in other namespaces only makes its Operators available in that same namespace of the catalog. On clusters where non-cluster administrator users have been delegated Operator installation privileges, cluster administrators might want to further control or restrict the set of Operators those users are allowed to install. This can be achieved with the following actions: Disable all of the default global catalogs. Enable custom, curated catalogs in the same namespace where the relevant Operator groups have been pre-installed. Additional resources Disabling the default OperatorHub sources Adding a catalog source to a cluster 4.7.4. Troubleshooting permission failures If an Operator installation fails due to lack of permissions, identify the errors using the following procedure. Procedure Review the Subscription object. Its status has an object reference installPlanRef that points to the InstallPlan object that attempted to create the necessary [Cluster]Role[Binding] object(s) for the Operator: apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: "117359" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23 Check the status of the InstallPlan object for any errors: apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: "2019-07-26T21:13:10Z" lastUpdateTime: "2019-07-26T21:13:10Z" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:scoped:scoped" cannot create resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope' reason: InstallComponentFailed status: "False" type: Installed phase: Failed The error message tells you: The type of resource it failed to create, including the API group of the resource. In this case, it was clusterroles in the rbac.authorization.k8s.io group. The name of the resource. The type of error: is forbidden tells you that the user does not have enough permission to do the operation. The name of the user who attempted to create or update the resource. 
In this case, it refers to the service account specified in the Operator group. The scope of the operation: cluster scope or not. The user can add the missing permission to the service account and then iterate. Note Operator Lifecycle Manager (OLM) does not currently provide the complete list of errors on the first try. 4.8. Managing custom catalogs Cluster administrators and Operator catalog maintainers can create and manage custom catalogs packaged using the bundle format on Operator Lifecycle Manager (OLM) in OpenShift Container Platform. Important Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API. If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades. Additional resources Red Hat-provided Operator catalogs 4.8.1. Prerequisites Install the opm CLI . 4.8.2. File-based catalogs File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. For more details about the file-based catalog specification, see Operator Framework packaging format . 4.8.2.1. Creating a file-based catalog image You can create a catalog image that uses the plain text file-based catalog format (JSON or YAML), which replaces the deprecated SQLite database format. The opm CLI provides tooling that helps initialize a catalog in the file-based format, render new records into it, and validate that the catalog is valid. Prerequisites opm version 1.18.0+ podman version 1.9.3+ A bundle image built and pushed to a registry that supports Docker v2-2 Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. Procedure Initialize a catalog for a file-based catalog: Create a directory for the catalog: USD mkdir <operator_name>-index Create a Dockerfile that can build a catalog image: Example <operator_name>-index.Dockerfile # The base image is expected to contain # /bin/opm (with a serve subcommand) and /bin/grpc_health_probe FROM registry.redhat.io/openshift4/ose-operator-registry:v4.9 # Configure the entrypoint and command ENTRYPOINT ["/bin/opm"] CMD ["serve", "/configs"] # Copy declarative config root into image at /configs ADD <operator_name>-index /configs # Set DC-specific label for the location of the DC root directory # in the image LABEL operators.operatorframework.io.index.configs.v1=/configs The Dockerfile must be in the same parent directory as the catalog directory that you created in the step: Example directory structure . ├── <operator_name>-index └── <operator_name>-index.Dockerfile Populate the catalog with your package definition: USD opm init <operator_name> \ 1 --default-channel=preview \ 2 --description=./README.md \ 3 --icon=./operator-icon.svg \ 4 --output yaml \ 5 > <operator_name>-index/index.yaml 6 1 Operator, or package, name. 2 Channel that subscription will default to if unspecified. 3 Path to the Operator's README.md or other documentation. 
4 Path to the Operator's icon. 5 Output format: JSON or YAML. 6 Path for creating the catalog configuration file. This command generates an olm.package declarative config blob in the specified catalog configuration file. Add a bundle to the catalog: USD opm render <registry>/<namespace>/<bundle_image_name>:<tag> \ 1 --output=yaml \ >> <operator_name>-index/index.yaml 2 1 Pull spec for the bundle image. 2 Path to the catalog configuration file. The opm render command generates a declarative config blob from the provided catalog images and bundle images. Note Channels must contain at least one bundle. Add a channel entry for the bundle. For example, modify the following example to your specifications, and add it to your <operator_name>-index/index.yaml file: Example channel entry --- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1 1 Ensure that you include the period ( . ) after <operator_name> but before the v in the version. Otherwise, the entry will fail to pass the opm validate command. Validate the file-based catalog: Run the opm validate command against the catalog directory: USD opm validate <operator_name>-index Check that the error code is 0 : USD echo USD? Example output 0 Build the catalog image: USD podman build . \ -f <operator_name>-index.Dockerfile \ -t <registry>/<namespace>/<catalog_image_name>:<tag> Push the catalog image to a registry: If required, authenticate with your target registry: USD podman login <registry> Push the catalog image: USD podman push <registry>/<namespace>/<catalog_image_name>:<tag> 4.8.3. SQLite-based catalogs Important The SQLite database format for Operator catalogs is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 4.8.3.1. Creating a SQLite-based index image You can create an index image based on the SQLite database format by using the opm CLI. Prerequisites opm version 1.18.0+ podman version 1.9.3+ A bundle image built and pushed to a registry that supports Docker v2-2 Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. Procedure Start a new index: USD opm index add \ --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \ 2 [--binary-image <registry_base_image>] 3 1 Comma-separated list of bundle images to add to the index. 2 The image tag that you want the index image to have. 3 Optional: An alternative registry base image to use for serving the catalog. Push the index image to a registry. If required, authenticate with your target registry: USD podman login <registry> Push the index image: USD podman push <registry>/<namespace>/<index_image_name>:<tag> 4.8.3.2. Updating a SQLite-based index image After configuring OperatorHub to use a catalog source that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image. 
You can update an existing index image using the opm index add command. Prerequisites opm version 1.18.0+ podman version 1.9.3+ An index image built and pushed to a registry. An existing catalog source referencing the index image. Procedure Update the existing index by adding bundle images: USD opm index add \ --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \ 3 --pull-tool podman 4 1 The --bundles flag specifies a comma-separated list of additional bundle images to add to the index. 2 The --from-index flag specifies the previously pushed index. 3 The --tag flag specifies the image tag to apply to the updated index image. 4 The --pull-tool flag specifies the tool used to pull container images. where: <registry> Specifies the hostname of the registry, such as quay.io or mirror.example.com . <namespace> Specifies the namespace of the registry, such as ocs-dev or abc . <new_bundle_image> Specifies the new bundle image to add to the registry, such as ocs-operator . <digest> Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 . <existing_index_image> Specifies the previously pushed image, such as abc-redhat-operator-index . <existing_tag> Specifies a previously pushed image tag, such as 4.9 . <updated_tag> Specifies the image tag to apply to the updated index image, such as 4.9.1 . Example command USD opm index add \ --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \ --from-index mirror.example.com/abc/abc-redhat-operator-index:4.9 \ --tag mirror.example.com/abc/abc-redhat-operator-index:4.9.1 \ --pull-tool podman Push the updated index image: USD podman push <registry>/<namespace>/<existing_index_image>:<updated_tag> After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added: USD oc get packagemanifests -n openshift-marketplace 4.8.3.3. Filtering a SQLite-based index image An index image, based on the Operator bundle format, is a containerized snapshot of an Operator catalog. You can filter, or prune , an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want. Prerequisites podman version 1.9.3+ grpcurl (third-party command-line tool) opm version 1.18.0+ Access to a registry that supports Docker v2-2 Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. Procedure Authenticate with your target registry: USD podman login <target_registry> Determine the list of packages you want to include in your pruned index. Run the source index image that you want to prune in a container. For example: USD podman run -p50051:50051 \ -it registry.redhat.io/redhat/redhat-operator-index:v4.9 Example output Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.9... Getting image source signatures Copying blob ae8a0c23f5b1 done ... 
INFO[0000] serving registry database=/database/index.db port=50051 In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index: USD grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example: Example snippets of packages list ... { "name": "advanced-cluster-management" } ... { "name": "jaeger-product" } ... { { "name": "quay-operator" } ... In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process. Run the following command to prune the source index of all but the specified packages: USD opm index prune \ -f registry.redhat.io/redhat/redhat-operator-index:v4.9 \ 1 -p advanced-cluster-management,jaeger-product,quay-operator \ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.9 4 1 Index to prune. 2 Comma-separated list of packages to keep. 3 Required only for IBM Power and IBM Z images: Operator Registry base image with the tag that matches the target OpenShift Container Platform cluster major and minor version. 4 Custom tag for new index image being built. Run the following command to push the new index image to your target registry: USD podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.9 where <namespace> is any existing namespace on the registry. 4.8.4. Adding a catalog source to a cluster Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface. Prerequisites An index image built and pushed to a registry. Procedure Create a CatalogSource object that references your index image. Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 annotations: olm.catalogImageTemplate: 2 "<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}" spec: sourceType: grpc image: <registry>/<namespace>/<index_image_name>:<tag> 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m 1 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 2 Optional: Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag. 3 Specify your index image. 4 Specify your name or an organization name publishing the catalog. 5 Catalog sources can automatically check for new versions to keep up to date. Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify the following resources are created successfully. 
Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. Additional resources See Operator Lifecycle Manager concepts and resources Catalog source for more details on the CatalogSource object spec. If your index image is hosted on a private registry and requires authentication, see Accessing images for Operators from private registries . 4.8.5. Accessing images for Operators from private registries If certain images relevant to Operators managed by Operator Lifecycle Manager (OLM) are hosted in an authenticated container image registry, also known as a private registry, OLM and OperatorHub are unable to pull the images by default. To enable access, you can create a pull secret that contains the authentication credentials for the registry. By referencing one or more pull secrets in a catalog source, OLM can handle placing the secrets in the Operator and catalog namespace to allow installation. Other images required by an Operator or its Operands might require access to private registries as well. OLM does not handle placing the secrets in target tenant namespaces for this scenario, but authentication credentials can be added to the global cluster pull secret or individual namespace service accounts to enable the required access. The following types of images should be considered when determining whether Operators managed by OLM have appropriate pull access: Index images A CatalogSource object can reference an index image, which use the Operator bundle format and are catalog sources packaged as container images hosted in images registries. If an index image is hosted in a private registry, a secret can be used to enable pull access. Bundle images Operator bundle images are metadata and manifests packaged as container images that represent a unique version of an Operator. If any bundle images referenced in a catalog source are hosted in one or more private registries, a secret can be used to enable pull access. Operator and Operand images If an Operator installed from a catalog source uses a private image, either for the Operator image itself or one of the Operand images it watches, the Operator will fail to install because the deployment will not have access to the required registry authentication. Referencing secrets in a catalog source does not enable OLM to place the secrets in target tenant namespaces in which Operands are installed. Instead, the authentication details can be added to the global cluster pull secret in the openshift-config namespace, which provides access to all namespaces on the cluster. Alternatively, if providing access to the entire cluster is not permissible, the pull secret can be added to the default service accounts of the target tenant namespaces. Prerequisites At least one of the following hosted in a private registry: An index image or catalog image. An Operator bundle image. An Operator or Operand image. Procedure Create a secret for each required private registry. 
Log in to the private registry to create or update your registry credentials file: USD podman login <registry>:<port> Note The file path of your registry credentials can be different depending on the container tool used to log in to the registry. For the podman CLI, the default location is USD{XDG_RUNTIME_DIR}/containers/auth.json . For the docker CLI, the default location is /root/.docker/config.json . It is recommended to include credentials for only one registry per secret, and manage credentials for multiple registries in separate secrets. Multiple secrets can be included in a CatalogSource object in later steps, and OpenShift Container Platform will merge the secrets into a single virtual credentials file for use during an image pull. A registry credentials file can, by default, store details for more than one registry or for multiple repositories in one registry. Verify the current contents of your file. For example: File storing credentials for multiple registries { "auths": { "registry.redhat.io": { "auth": "FrNHNydQXdzclNqdg==" }, "quay.io": { "auth": "fegdsRib21iMQ==" }, "https://quay.io/my-namespace/my-user/my-image": { "auth": "eWfjwsDdfsa221==" }, "https://quay.io/my-namespace/my-user": { "auth": "feFweDdscw34rR==" }, "https://quay.io/my-namespace": { "auth": "frwEews4fescyq==" } } } Because this file is used to create secrets in later steps, ensure that you are storing details for only one registry per file. This can be accomplished by using either of the following methods: Use the podman logout <registry> command to remove credentials for additional registries until only the one registry you want remains. Edit your registry credentials file and separate the registry details to be stored in multiple files. For example: File storing credentials for one registry { "auths": { "registry.redhat.io": { "auth": "FrNHNydQXdzclNqdg==" } } } File storing credentials for another registry { "auths": { "quay.io": { "auth": "Xd2lhdsbnRib21iMQ==" } } } Create a secret in the openshift-marketplace namespace that contains the authentication credentials for a private registry: USD oc create secret generic <secret_name> \ -n openshift-marketplace \ --from-file=.dockerconfigjson=<path/to/registry/credentials> \ --type=kubernetes.io/dockerconfigjson Repeat this step to create additional secrets for any other required private registries, updating the --from-file flag to specify another registry credentials file path. Create or update an existing CatalogSource object to reference one or more secrets: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - "<secret_name_1>" - "<secret_name_2>" image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m 1 Add a spec.secrets section and specify any required secrets. If any Operator or Operand images that are referenced by a subscribed Operator require access to a private registry, you can either provide access to all namespaces in the cluster, or individual target tenant namespaces. To provide access to all namespaces in the cluster, add authentication details to the global cluster pull secret in the openshift-config namespace. Warning Cluster resources must adjust to the new global pull secret, which can temporarily limit the usability of the cluster. 
Extract the .dockerconfigjson file from the global pull secret: USD oc extract secret/pull-secret -n openshift-config --confirm Update the .dockerconfigjson file with your authentication credentials for the required private registry or registries and save it as a new file: USD cat .dockerconfigjson | \ jq --compact-output '.auths["<registry>:<port>/<namespace>/"] |= . + {"auth":"<token>"}' \ 1 > new_dockerconfigjson 1 Replace <registry>:<port>/<namespace> with the private registry details and <token> with your authentication credentials. Update the global pull secret with the new file: USD oc set data secret/pull-secret -n openshift-config \ --from-file=.dockerconfigjson=new_dockerconfigjson To update an individual namespace, add a pull secret to the service account for the Operator that requires access in the target tenant namespace. Recreate the secret that you created for the openshift-marketplace in the tenant namespace: USD oc create secret generic <secret_name> \ -n <tenant_namespace> \ --from-file=.dockerconfigjson=<path/to/registry/credentials> \ --type=kubernetes.io/dockerconfigjson Verify the name of the service account for the Operator by searching the tenant namespace: USD oc get sa -n <tenant_namespace> 1 1 If the Operator was installed in an individual namespace, search that namespace. If the Operator was installed for all namespaces, search the openshift-operators namespace. Example output NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1 1 Service account for an installed etcd Operator. Link the secret to the service account for the Operator: USD oc secrets link <operator_sa> \ -n <tenant_namespace> \ <secret_name> \ --for=pull Additional resources See What is a secret? for more information on the types of secrets, including those used for registry credentials. See Updating the global cluster pull secret for more details on the impact of changing this secret. See Allowing pods to reference images from other secured registries for more details on linking pull secrets to service accounts per namespace. 4.8.6. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. As a cluster administrator, you can disable the set of default catalogs. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 4.8.7. Removing custom catalogs As a cluster administrator, you can remove custom Operator catalogs that have been previously added to your cluster by deleting the related catalog source. Procedure In the Administrator perspective of the web console, navigate to Administration Cluster Settings . Click the Configuration tab, and then click OperatorHub . Click the Sources tab. Select the Options menu for the catalog that you want to remove, and then click Delete CatalogSource . 4.9. 
Using Operator Lifecycle Manager on restricted networks For OpenShift Container Platform clusters that are installed on restricted networks, also known as disconnected clusters , Operator Lifecycle Manager (OLM) by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity. However, as a cluster administrator you can still enable your cluster to use OLM in a restricted network if you have a workstation that has full internet access. The workstation, which requires full internet access to pull the remote OperatorHub content, is used to prepare local mirrors of the remote sources, and push the content to a mirror registry. The mirror registry can be located on a bastion host, which requires connectivity to both your workstation and the disconnected cluster, or a completely disconnected, or airgapped , host, which requires removable media to physically move the mirrored content to the disconnected environment. This guide describes the following process that is required to enable OLM in restricted networks: Disable the default remote OperatorHub sources for OLM. Use a workstation with full internet access to create and push local mirrors of the OperatorHub content to a mirror registry. Configure OLM to install and manage Operators from local sources on the mirror registry instead of the default remote sources. After enabling OLM in a restricted network, you can continue to use your unrestricted workstation to keep your local OperatorHub sources updated as newer versions of Operators are released. Important While OLM can manage Operators from local sources, the ability for a given Operator to run successfully in a restricted network still depends on the Operator itself meeting the following criteria: List any related images, or other container images that the Operator might require to perform their functions, in the relatedImages parameter of its ClusterServiceVersion (CSV) object. Reference all specified images by a digest (SHA) and not by a tag. You can search software on the Red Hat Ecosystem Catalog for a list of Red Hat Operators that support running in disconnected mode by filtering with the following selections: Type Containerized application Deployment method Operator Infrastructure features Disconnected Additional resources Red Hat-provided Operator catalogs Enabling your Operator for restricted network environments 4.9.1. Prerequisites Log in to your OpenShift Container Platform cluster as a user with cluster-admin privileges. If you want to prune the default catalog and selectively mirror only a subset of Operators, install the opm CLI . Note If you are using OLM in a restricted network on IBM Z, you must have at least 12 GB allocated to the directory where you place your registry. 4.9.2. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. You can then configure OperatorHub to use local catalog sources. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. 
From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 4.9.3. Filtering a SQLite-based index image An index image, based on the Operator bundle format, is a containerized snapshot of an Operator catalog. You can filter, or prune , an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want. When configuring Operator Lifecycle Manager (OLM) to use mirrored content on restricted network OpenShift Container Platform clusters, use this pruning method if you want to only mirror a subset of Operators from the default catalogs. For the steps in this procedure, the target registry is an existing mirror registry that is accessible by your workstation with unrestricted network access. This example also shows pruning the index image for the default redhat-operators catalog, but the process is the same for any index image. Prerequisites Workstation with unrestricted network access podman version 1.9.3+ grpcurl (third-party command-line tool) opm version 1.18.0+ Access to a registry that supports Docker v2-2 Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. Procedure Authenticate with registry.redhat.io : USD podman login registry.redhat.io Authenticate with your target registry: USD podman login <target_registry> Determine the list of packages you want to include in your pruned index. Run the source index image that you want to prune in a container. For example: USD podman run -p50051:50051 \ -it registry.redhat.io/redhat/redhat-operator-index:v4.9 Example output Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.9... Getting image source signatures Copying blob ae8a0c23f5b1 done ... INFO[0000] serving registry database=/database/index.db port=50051 In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index: USD grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example: Example snippets of packages list ... { "name": "advanced-cluster-management" } ... { "name": "jaeger-product" } ... { { "name": "quay-operator" } ... In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process. Run the following command to prune the source index of all but the specified packages: USD opm index prune \ -f registry.redhat.io/redhat/redhat-operator-index:v4.9 \ 1 -p advanced-cluster-management,jaeger-product,quay-operator \ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.9 4 1 Index to prune. 2 Comma-separated list of packages to keep. 3 Required only for IBM Power and IBM Z images: Operator Registry base image with the tag that matches the target OpenShift Container Platform cluster major and minor version. 4 Custom tag for new index image being built. Run the following command to push the new index image to your target registry: USD podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.9 where <namespace> is any existing namespace on the registry. 
For example, you might create an olm-mirror namespace to push all mirrored content to. 4.9.4. Mirroring an Operator catalog For instructions about mirroring Operator catalogs for use with disconnected clusters, see Installing Mirroring images for a disconnected installation . 4.9.5. Adding a catalog source to a cluster Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface. Prerequisites An index image built and pushed to a registry. Procedure Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point. Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>/<namespace>/redhat-operator-index:v4.9 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m 1 If you mirrored content to local files before uploading to a registry, remove any backslash ( / ) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object. 2 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 3 Specify your index image. 4 Specify your name or an organization name publishing the catalog. 5 Catalog sources can automatically check for new versions to keep up to date. Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify the following resources are created successfully. Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. Additional resources If your index image is hosted on a private registry and requires authentication, see Accessing images for Operators from private registries . If you want your catalogs to be able to automatically update their index image version after cluster upgrades by using Kubernetes version-based image tags, see Image template for custom catalog sources . 4.9.6. Updating a SQLite-based index image After configuring OperatorHub to use a catalog source that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image. You can update an existing index image using the opm index add command. For restricted networks, the updated content must also be mirrored again to the cluster. 
Prerequisites opm version 1.18.0+ podman version 1.9.3+ An index image built and pushed to a registry. An existing catalog source referencing the index image. Procedure Update the existing index by adding bundle images: USD opm index add \ --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \ 3 --pull-tool podman 4 1 The --bundles flag specifies a comma-separated list of additional bundle images to add to the index. 2 The --from-index flag specifies the previously pushed index. 3 The --tag flag specifies the image tag to apply to the updated index image. 4 The --pull-tool flag specifies the tool used to pull container images. where: <registry> Specifies the hostname of the registry, such as quay.io or mirror.example.com . <namespace> Specifies the namespace of the registry, such as ocs-dev or abc . <new_bundle_image> Specifies the new bundle image to add to the registry, such as ocs-operator . <digest> Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 . <existing_index_image> Specifies the previously pushed image, such as abc-redhat-operator-index . <existing_tag> Specifies a previously pushed image tag, such as 4.9 . <updated_tag> Specifies the image tag to apply to the updated index image, such as 4.9.1 . Example command USD opm index add \ --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \ --from-index mirror.example.com/abc/abc-redhat-operator-index:4.9 \ --tag mirror.example.com/abc/abc-redhat-operator-index:4.9.1 \ --pull-tool podman Push the updated index image: USD podman push <registry>/<namespace>/<existing_index_image>:<updated_tag> Follow the steps in the Mirroring an Operator catalog procedure again to mirror the updated content. However, when you get to the step about creating the ImageContentSourcePolicy (ICSP) object, use the oc replace command instead of the oc create command. For example: USD oc replace -f ./manifests-redhat-operator-index-<random_number>/imageContentSourcePolicy.yaml This change is required because the object already exists and must be updated. Note Normally, the oc apply command can be used to update existing objects that were previously created using oc apply . However, due to a known issue regarding the size of the metadata.annotations field in ICSP objects, the oc replace command must be used for this step currently. After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added: USD oc get packagemanifests -n openshift-marketplace Additional resources Mirroring an Operator catalog
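After the mirrored catalog and ImageContentSourcePolicy updates described above are in place, it can be useful to confirm that OLM has actually reconnected to the updated index before expecting new packages to appear. The following is a minimal verification sketch; the catalog source name my-operator-catalog and the <operator_name> placeholder are carried over from the earlier examples and are assumptions, not names that OLM creates for you:
# Check that the catalog source reconnected cleanly after the index image changed
$ oc get catalogsource my-operator-catalog -n openshift-marketplace \
    -o jsonpath='{.status.connectionState.lastObservedState}'
READY
# Confirm that the newly added package now shows up in the catalog
$ oc get packagemanifests -n openshift-marketplace | grep <operator_name>
If the observed state reports something other than READY, such as TRANSIENT_FAILURE, the catalog source pod has not reconnected yet and the usual catalog source troubleshooting steps apply.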
|
[
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"oc apply -f sub.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2",
"oc apply -f sub.yaml",
"oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV",
"currentCSV: jaeger-operator.v1.8.2",
"oc delete subscription jaeger -n openshift-operators",
"subscription.operators.coreos.com \"jaeger\" deleted",
"oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators",
"clusterserviceversion.operators.coreos.com \"jaeger-operator.v1.8.2\" deleted",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide",
"oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i \"PROXY\" -A 2",
"- name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c",
"apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: \"true\" 2",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc edit operatorcondition <name>",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: \"True\" reason: \"upgradeIsSafe\" message: \"This is a known issue with the Operator where it always reports that it cannot be upgraded.\" conditions: - type: Upgradeable status: \"False\" reason: \"migration\" message: \"The operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped targetNamespaces: - scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd namespace: scoped spec: channel: singlenamespace-alpha name: etcd source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF",
"kind: Role rules: - apiGroups: [\"operators.coreos.com\"] resources: [\"subscriptions\", \"clusterserviceversions\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"\"] resources: [\"services\", \"serviceaccounts\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"rbac.authorization.k8s.io\"] resources: [\"roles\", \"rolebindings\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"apps\"] 1 resources: [\"deployments\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] 2 resources: [\"pods\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"]",
"kind: ClusterRole 1 rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\"] --- kind: Role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"create\", \"update\", \"patch\"]",
"apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: \"117359\" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23",
"apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: \"2019-07-26T21:13:10Z\" lastUpdateTime: \"2019-07-26T21:13:10Z\" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:scoped:scoped\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope' reason: InstallComponentFailed status: \"False\" type: Installed phase: Failed",
"mkdir <operator_name>-index",
"The base image is expected to contain /bin/opm (with a serve subcommand) and /bin/grpc_health_probe FROM registry.redhat.io/openshift4/ose-operator-registry:v4.9 Configure the entrypoint and command ENTRYPOINT [\"/bin/opm\"] CMD [\"serve\", \"/configs\"] Copy declarative config root into image at /configs ADD <operator_name>-index /configs Set DC-specific label for the location of the DC root directory in the image LABEL operators.operatorframework.io.index.configs.v1=/configs",
". ├── <operator_name>-index └── <operator_name>-index.Dockerfile",
"opm init <operator_name> \\ 1 --default-channel=preview \\ 2 --description=./README.md \\ 3 --icon=./operator-icon.svg \\ 4 --output yaml \\ 5 > <operator_name>-index/index.yaml 6",
"opm render <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --output=yaml >> <operator_name>-index/index.yaml 2",
"--- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1",
"opm validate <operator_name>-index",
"echo USD?",
"0",
"podman build . -f <operator_name>-index.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman login <registry>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"opm index add --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \\ 2 [--binary-image <registry_base_image>] 3",
"podman login <registry>",
"podman push <registry>/<namespace>/<index_image_name>:<tag>",
"opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4",
"opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.9 --tag mirror.example.com/abc/abc-redhat-operator-index:4.9.1 --pull-tool podman",
"podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>",
"oc get packagemanifests -n openshift-marketplace",
"podman login <target_registry>",
"podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.9",
"Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.9 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out",
"{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.9 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.9 4",
"podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.9",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 annotations: olm.catalogImageTemplate: 2 \"<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: sourceType: grpc image: <registry>/<namespace>/<index_image_name>:<tag> 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"podman login <registry>:<port>",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" }, \"quay.io\": { \"auth\": \"fegdsRib21iMQ==\" }, \"https://quay.io/my-namespace/my-user/my-image\": { \"auth\": \"eWfjwsDdfsa221==\" }, \"https://quay.io/my-namespace/my-user\": { \"auth\": \"feFweDdscw34rR==\" }, \"https://quay.io/my-namespace\": { \"auth\": \"frwEews4fescyq==\" } } }",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" } } }",
"{ \"auths\": { \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }",
"oc create secret generic <secret_name> -n openshift-marketplace --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - \"<secret_name_1>\" - \"<secret_name_2>\" image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m",
"oc extract secret/pull-secret -n openshift-config --confirm",
"cat .dockerconfigjson | jq --compact-output '.auths[\"<registry>:<port>/<namespace>/\"] |= . + {\"auth\":\"<token>\"}' \\ 1 > new_dockerconfigjson",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new_dockerconfigjson",
"oc create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"oc get sa -n <tenant_namespace> 1",
"NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1",
"oc secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"podman login registry.redhat.io",
"podman login <target_registry>",
"podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.9",
"Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.9 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out",
"{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.9 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.9 4",
"podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.9",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>/<namespace>/redhat-operator-index:v4.9 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4",
"opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.9 --tag mirror.example.com/abc/abc-redhat-operator-index:4.9.1 --pull-tool podman",
"podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>",
"oc replace -f ./manifests-redhat-operator-index-<random_number>/imageContentSourcePolicy.yaml",
"oc get packagemanifests -n openshift-marketplace"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/operators/administrator-tasks
|
Chapter 5. Authentication and Interoperability
|
Chapter 5. Authentication and Interoperability SSSD in a container now fully supported The rhel7/sssd container image, which provides the System Security Services Daemon (SSSD), is no longer a Technology Preview feature. The image is now fully supported. Note that the rhel7/ipa-server container image is still a Technology Preview feature. For details, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/using_containerized_identity_management_services . (BZ#1467260) Identity Management now supports FIPS With this enhancement, Identity Management (IdM) supports the Federal Information Processing Standard (FIPS). This enables you to run IdM in environments that must meet the FIPS criteria. To run IdM with FIPS mode enabled, you must set up all servers in the IdM environment using Red Hat Enterprise Linux 7.4 with FIPS mode enabled. Note that you cannot: Enable FIPS mode on existing IdM servers previously installed with FIPS mode disabled. Install a replica in FIPS mode when using an existing IdM server with FIPS mode disabled. For further details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/Linux_Domain_Identity_Authentication_and_Policy_Guide/index.html#prerequisites . (BZ# 1125174 ) SSSD supports obtaining a Kerberos ticket when users authenticate with a smart card The System Security Services Daemon (SSSD) now supports the Kerberos PKINIT preauthentication mechanism. When authenticating with a smart card to a desktop client system enrolled in an Identity Management (IdM) domain, users receive a valid Kerberos ticket-granting ticket (TGT) if the authentication was successful. Users can then use the TGT for further single sign-on (SSO) authentication from the client system. For details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/sc-pkinit-auth.html . (BZ# 1200767 , BZ# 1405075 ) SSSD enables logging in to different user accounts with the same smart card certificate Previously, the System Security Services Daemon (SSSD) required every certificate to be uniquely mapped to a single user. When using smart card authentication, users with multiple accounts were not able to log in to all of these accounts with the same smart card certificate. For example, a user with a personal account and a functional account (such as a database administrator account) was able to log in only to the personal account. With this update, SSSD no longer requires certificates to be uniquely mapped to a single user. As a result, users can now log in to different accounts with a single smart card certificate. For details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/smart-cards.html . (BZ# 1340711 , BZ# 1402959 ) IdM web UI enables smart card login The Identity Management web UI enables users to log in using smart cards. For details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/sc-web-ui-auth.html . (BZ# 1366572 ) New packages: keycloak-httpd-client-install The keycloak-httpd-client-install packages provide various libraries and tools that can automate and simplify the configuration of Apache httpd authentication modules when registering as a Red Hat Single Sign-On (RH-SSO, also called Keycloak) federated Identity Provider (IdP) client. 
For details on RH-SSO, see https://access.redhat.com/products/red-hat-single-sign-on . As part of this update, new dependencies have been added to Red Hat Enterprise Linux: The python-requests-oauthlib package: This package provides the OAuth library support for the python-requests package, which enables python-requests to use OAuth for authentication. The python-oauthlib package: This package is a Python library providing OAuth authentication message creation and consumption. It is meant to be used in conjunction with tools providing message transport. (BZ# 1401781 , BZ#1401783, BZ#1401784) New Kerberos credential cache type: KCM This update adds a new SSSD service named kcm . The service is included in the sssd-kcm subpackage. When the kcm service is installed, you can configure the Kerberos library to use a new credential cache type named KCM . When the KCM credential cache type is configured, the sssd-kcm service manages the credentials. The KCM credential cache type is well-suited for containerized environments: With KCM, you can share credential caches between containers on demand, based on mounting the UNIX socket on which the kcm service listens. The kcm service runs in user space outside the kernel, unlike the KEYRING credential cache type that RHEL uses by default. With KCM, you can run the kcm service only in selected containers. With KEYRING, all containers share the credential caches because they share the kernel. Additionally, the KCM credential cache type supports cache collections, unlike the FILE ccache type. For details, see the sssd-kcm(8) man page. (BZ# 1396012 ) AD users can log in to the web UI to access their self-service page Previously, Active Directory (AD) users were only able to authenticate using the kinit utility from the command line. With this update, AD users can also log in to the Identity Management (IdM) web UI. Note that the IdM administrator must create an ID override for an AD user before the user is able to log in. As a result, AD users can access their self-service page through the IdM web UI. The self-service page displays the information from the AD users' ID override. For details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/using-the-ui.html#ad-users-idm-web-ui . (BZ# 872671 ) SSSD enables configuring an AD subdomain in the SSSD server mode Previously, the System Security Services Daemon (SSSD) automatically configured trusted Active Directory (AD) domains. With this update, SSSD supports configuring certain parameters for trusted AD domains in the same way as the joined domain. As a result, you can set individual settings for trusted domains, such as the domain controller that SSSD communicates with. To do this, create a section in the /etc/sssd/sssd.conf file with a name that follows this template: For example, if the main IdM domain name is ipa.com and the trusted AD domain name is ad.com, the corresponding section name is: (BZ#1214491) SSSD supports user and group lookups and authentication with short names in AD environments Previously, the System Security Services Daemon (SSSD) supported user names without the domain component, also called short names, for user and group resolution and authentication only when the daemon was joined to a standalone domain. 
Now, you can use short names for these purposes in all SSSD domains in these environments: On clients joined to Active Directory (AD) In Identity Management (IdM) deployments with a trust relationship to an AD forest The output format of all commands is always fully-qualified even when using short names. This feature is enabled by default after you set up a domain's resolution order list in one of the following ways (listed in order of preference): Locally, by configuring the list using the domain_resolution_order option in the [sssd] section of the /etc/sssd/sssd.conf file By using an ID view Globally, in the IdM configuration To disable the feature, set the use_fully_qualified_names option to True in the [domain/example.com] section of the /etc/sssd/sssd.conf file. (BZ#1330196) SSSD supports user and group resolution, authentication, and authorization in setups without UIDs or SIDs In traditional System Security Services Daemon (SSSD) deployments, users and groups either have POSIX attributes set or SSSD can resolve the users and groups based on Windows security identifiers (SID). With this update, in setups that use LDAP as the identity provider, SSSD now supports the following functionality even when UIDs or SIDs are not present in the LDAP directory: User and group resolution through the D-Bus interface Authentication and authorization through the pluggable authentication module (PAM) interface (BZ# 1425891 ) SSSD introduces the sssctl user-checks command, which checks basic SSSD functionality in a single operation The sssctl utility now includes a new command named user-checks . The sssctl user-checks command helps debug problems in applications that use the System Security Services Daemon (SSSD) as a back end for user lookup, authentication, and authorization. The sssctl user-checks [USER_NAME] command displays user data available through Name Service Switch (NSS) and the InfoPipe responder for the D-Bus interface. The displayed data shows whether the user is authorized to log in using the system-auth pluggable authentication module (PAM) service. Additional options accepted by sssctl user-checks check authentication or different PAM services. For details on sssctl user-checks , use the sssctl user-checks --help command. (BZ#1414023) Support for secrets as a service This update adds a responder named secrets to the System Security Services Daemon (SSSD). This responder allows an application to communicate with SSSD over a UNIX socket using the Custodia API. This enables SSSD to store secrets in its local database or to forward them to a remote Custodia server. (BZ# 1311056 ) IdM enables semi-automatic upgrades of the IdM DNS records on an external DNS server To simplify updating the Identity Management (IdM) DNS records on an external DNS server, IdM introduces the ipa dns-update-system-records --dry-run --out [file] command. The command generates a list of records in a format accepted by the nsupdate utility. You can use the generated file to update the records on the external DNS server by using a standard dynamic DNS update mechanism secured with the Transaction Signature (TSIG) protocol or the GSS algorithm for TSIG (GSS-TSIG). For details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/dns-updates-external.html .
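To illustrate the DNS record update workflow described in the preceding note, the following is a minimal sketch; the output file name and the admin principal are assumptions for illustration, not part of the release note:

# Generate the nsupdate input on an IdM server, using the options named above
ipa dns-update-system-records --dry-run --out /tmp/idm-dns-records.nsupdate
# Review the file, then push the records to the external DNS server over GSS-TSIG
kinit admin
nsupdate -g /tmp/idm-dns-records.nsupdate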
(BZ# 1409628 ) IdM now generates SHA-256 certificate and public key fingerprints Previously, Identity Management (IdM) used the MD5 hash algorithm when generating fingerprints for certificates and public keys. To increase security, IdM now uses the SHA-256 algorithm in the mentioned scenario. (BZ# 1444937 ) IdM supports flexible mapping mechanisms for linking smart card certificates to user accounts Previously, the only way to find a user account corresponding to a certain smart card in Identity Management (IdM) was to provide the whole smart card certificate as a Base64-encoded DER string. With this update, it is possible to find a user account also by specifying attributes of the smart card certificates, not just the certificate string itself. For example, the administrator can now define matching and mapping rules to link smart card certificates issued by a certain certificate authority (CA) to a user account in IdM. (BZ# 1402959 ) New user-space tools enable a more convenient LMDB debugging This update introduces the mdb_copy , mdb_dump , mdb_load , and mdb_stat tool in the /usr/libexec/openldap/ directory. The addition includes relevant man pages in the man/man1 subdirectory. Use the new tools only to debug problems related to the Lightning Memory-Mapped Database (LMDB) back end. (BZ#1428740) openldap rebased to version 2.4.44 The openldap packages have been upgraded to upstream version 2.4.44, which provides a number of bug fixes and enhancements over the version. In particular, this new version fixes many replication and Lightning Memory-Mapped Database (LMDB) bugs. (BZ#1386365) Improved security of DNS lookups and robustness of service principal lookups in Identity Management The Kerberos client library no longer attempts to canonicalize host names when issuing ticket-granting server (TGS) requests. This feature improves: Security because DNS lookups, which were previously required during canonicalization, are no longer performed Robustness of service principal lookups in more complex DNS environments, such as clouds or containerized applications Make sure you specify the correct fully qualified domain name (FQDN) in host and service principals. Due to this change in behavior, Kerberos does not attempt to resolve any other form of names in principals, such as short names. (BZ# 1404750 ) samba rebased to version 4.6.2 The samba packages have been upgraded to version 4.6.2, which provides a number of bug fixes and enhancements over the version: Samba now verifies the ID mapping configuration before the winbindd service starts. If the configuration is invalid, winbindd fails to start. Use the testparm utility to verify your /etc/samba/smb.conf file. For further details, see the IDENTITY MAPPING CONSIDERATIONS section in the smb.conf man page. Uploading printer drivers from Windows 10 now works correctly. Previously, the default value of the rpc server dynamic port range parameter was 1024-1300 . With this update, the default has been changed to 49152-65535 and now matches the range used in Windows Server 2008 and later. Update your firewall rules if necessary. The net ads unregister command can now delete the DNS entry of the host from the Active Directory DNS zone when leaving the domain. SMB 2.1 leases are now enabled by default in the smb2 leases parameter. SMB leasing enables clients to aggressively cache files. To improve security, the NT LAN manager version 1 (NTLMv1) protocol is now disabled by default. 
If you require the insecure NTLMv1 protocol, set the ntlm auth parameter in the /etc/samba/smb.conf file to yes . The event subcommand has been added to the ctdb utility for interacting with event scripts. The idmap_hash ID mapping back end is marked as deprecated and will be removed in a future Samba version. The deprecated only user and username parameters have been removed. Samba automatically updates its tdb database files when the smbd , nmbd , or winbind daemon starts. Back up the database files before starting Samba. Note that Red Hat does not support downgrading tdb database files. For further information about notable changes, read the upstream release notes before updating. (BZ#1391954) authconfig can enable SSSD to authenticate users with smart cards This new feature allows the authconfig command to configure the System Security Services Daemon (SSSD) to authenticate users with smart cards, for example: With this update, smart card authentication can now be performed on systems where pam_pkcs11 is not installed. However, if pam_pkcs11 is installed, the --smartcardmodule=sssd option is ignored. Instead, the first pkcs11_module defined in the /etc/pam_pkcs11/pam_pkcs11.conf file is used as the default. For details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/auth-idm-client-sc.html . (BZ# 1378943 ) authconfig can now enable account locking This update adds the --enablefaillock option for the authconfig command. When the option is enabled, the configured account will be locked for 20 minutes after four consecutive failed login attempts within a 15-minute interval. (BZ#1334449) Improved performance of the IdM server The Identity Management (IdM) server has a higher performance across many of the common workflows and setups. These improvements include: Vault performance has been increased by reducing the round trips within the IdM server management framework. The IdM server management framework has been tuned to reduce the time spent in internal communication and authentication. The Directory Server connection management has been made more scalable with the use of the nunc-stans framework. On new installations, the Directory Server now auto-tunes the database entry cache and the number of threads based on the hardware resources of the server. The memberOf plug-in performance has been improved when working with large or nested groups. (BZ# 1395940 , BZ# 1425906 , BZ# 1400653 ) The default session expiration period in the IdM web UI has changed Previously, when the user logged in to the Identity Management (IdM) web UI using a user name and password, the web UI automatically logged the user out after 20 minutes of inactivity. With this update, the default session length is the same as the expiration period of the Kerberos ticket obtained during the login operation. To change the default session length, use the kinit_lifetime option in the /etc/ipa/default.conf file, and restart the httpd service. (BZ# 1459153 ) The dbmon.sh script now uses instance names to connect to Directory Server instances The dbmon.sh shell script enables you to monitor the Directory Server database and entry cache usage. With this update, the script no longer uses the HOST and PORT environment variables. To support secure binds, the script now reads the Directory Server instance name from the SERVID environment variable and uses it to retrieve the host name, port, and the information about whether the server requires a secure connection.
For example, to monitor the slapd-localhost instance, enter: (BZ# 1394000 ) Directory Server now uses the SSHA_512 password storage scheme as default Previously, Directory Server used the weak 160-bit salted secure hash algorithm (SSHA) as default password storage scheme set in the passwordStorageScheme and nsslapd-rootpwstoragescheme parameters in the cn=config entry. To increase security, the default of both parameters has been changed to the strong 512-bit SSHA scheme (SSHA_512). The new default is used: When performing new Directory Server installations. When the passwordStorageScheme parameter is not set, and you are updating passwords stored in userPassword attributes. When the nsslapd-rootpwstoragescheme parameter is not set, and you are updating the Directory Server manager password set in the nsslapd-rootpw attribute. (BZ# 1425907 ) Directory Server now uses the tcmalloc memory allocator Red Hat Directory Server now uses the tcmalloc memory allocator. The previously used standard glibc allocator required more memory, and in certain situations, the server could run out of memory. Using the tcmalloc memory allocator, Directory Server now requires less memory, and the performance increased. (BZ# 1426275 ) Directory Server now uses the nunc-stans framework The nunc-stans event-based framework has been integrated into Directory Server. Previously, the performance could be slow when many simultaneous incoming connections were established to Directory Server. With this update, the server is able to handle a significantly larger number of connections without performance degradation. (BZ# 1426278 , BZ# 1206301 , BZ# 1425906 ) Improved performance of the Directory Server memberOf plug-in Previously, when working with large or nested groups, plug-in operations could take a long time. With this update, the performance of the Red Hat Directory Server memberOf plug-in has been improved. As a result, the memberOf plug-in now adds and removes users faster from groups. (BZ# 1426283 ) Directory Server now logs severity levels in the error log file Directory Server now logs severity levels in the /var/log/dirsrv/slapd-instance_name/errors log file. Previously, it was difficult to distinguish the severity of entries in the error log file. With this enhancement, administrators can use the severity level to filter the error log. (BZ# 1426289 ) Directory Server now supports the PBKDF2_SHA256 password storage scheme To increase security, this update adds the 256-bit password-based key derivation function 2 (PBKDF2_SHA256) to the list of supported password-storage schemes in Directory Server. The scheme uses 30,000 iterations to apply the 256-bit secure hash algorithm (SHA256). Note that the network security service (NSS) database in Red Hat Enterprise Linux prior to version 7.4 does not support PBKDF2. Therefore, you cannot use this password scheme in a replication topology with Directory Server versions. (BZ# 1436973 ) Improved auto-tuning support in Directory Server Previously, you had to monitor the databases and manually tune settings to improve the performance. With this update, Directory Server supports optimized auto-tuning for: The database and entry cache The number of threads created Directory Server tunes these settings, based on the hardware resources of the server. Auto-tuning is now automatically enabled by default if you install a new Directory Server instance. On instances upgraded from earlier versions, Red Hat recommends to enable auto-tuning. 
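As an illustration of the password storage scheme settings described above, the following is a hedged sketch of switching the default scheme in cn=config to PBKDF2_SHA256 with ldapmodify; the server URL is a placeholder and the exact scheme choice should be validated against your replication topology:

# Bind as Directory Manager and replace the default password storage scheme
ldapmodify -H ldap://server.example.com -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: passwordStorageScheme
passwordStorageScheme: PBKDF2_SHA256
EOF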
(BZ# 1426286 ) New PKI configuration parameter allows control of the TCP keepalive option This update adds the tcp.keepAlive parameter to the CS.cfg configuration file. This parameter accepts boolean values, and is set to true by default. Use this parameter to configure the TCP keepalive option for all LDAP connections created by the PKI subsystem. This option is useful in cases where certificate issuance takes a very long time and connections are being closed automatically after being idle for too long. (BZ#1413132) PKI Server now creates PKCS #12 files using strong encryption When generating PKCS #12 files, the pki pkcs12 command previously used the PKCS #12 deprecated key derivation function (KDF) and the triple DES (3DES) algorithm. With this update, the command now uses the password-based encryption standard 2 (PBES2) scheme with the password-based key derivation function 2 (PBKDF2) and the Advanced Encryption Standard (AES) algorithm to encrypt private keys. As a result, this enhancement increases the security and complies the Common Criteria certification requirements. (BZ#1426754) CC-compliant algorithms available for encryption operations Common Criteria requires that encryption and key-wrapping operations are performed using approved algorithms. These algorithms are specified in section FCS_COP.1.1(1) in the Protection Profile for Certification Authorities. This update modifies encryption and decryption in the KRA to use approved AES encryption and wrapping algorithms in the transport and storage of secrets and keys. This update required changes in both the server and client software. (BZ#1445535) New options to allow configuring visibility of menu items in the TPS interface Previously, menu items grouped under the System menu in the Token Processing System (TPS) user interface were determined statically based on user roles. In certain circumstances, the displayed menu items did not match components actually accessible by the user. With this update, the System menu in the TPS user interface only displays menu items based on the target.configure.list parameter for TPS administrators, and the target.agent_approve.list parameter for TPS agents. These parameters can be modified in the instance CS.cfg file to match accessible components. (BZ#1391737) Added a profile component to copy certificate Subject Common Name to the Subject Alternative Name extension Some TLS libraries now warn or refuse to validate DNS names when the DNS name only appears in the Subject Common Name (CN) field, which is a practice that was deprecated by RFC 2818. This update adds the CommonNameToSANDefault profile component, which copies the Subject Common Name to the Subject Alternative Name (SAN) extension, and ensures that certificates are compliant with current standards. (BZ# 1305993 ) New option to remove LDAP entries before LDIF import When migrating a CA, if an LDAP entry existed before the LDIF import, then the entry was not recreated from the LDAP import, causing some fields to be missing. Consequently, the request ID showed up as undefined. This update adds an option to remove the LDAP entry for the signing certificate at the end of the pkispawn process. This entry is then re-created in the subsequent LDIF import. Now, the request ID and other fields show up correctly if the signing entry is removed and re-added in the LDIF import. 
The correct parameters to add are (X represents the serial number of the signing certificate being imported, in decimal): (BZ#1409946) Certificate System now supports externally authenticated users Previously, you had to create users and roles in Certificate System. With this enhancement, you can now configure Certificate System to admit users authenticated by an external identity provider. Additionally, you can use realm-specific authorization access control lists (ACLs). As a result, it is no longer necessary to create users in Certificate System. (BZ# 1303683 ) Certificate System now supports enabling and disabling certificate and CRL publishing Prior to this update, if publishing was enabled in a certificate authority (CA), Certificate System automatically enabled both certificate revocation list (CRL) and certificate publishing. Consequently, on servers that did not have certificate publishing enabled, error messages were logged. Certificate System has been enhanced, and now supports enabling and disabling certificate and CRL publishing independently in the /var/lib/pki/<instance>/ca/conf/CS.cfg file. To enable or disable both certificate and CRL publishing, set: To enable only CRL publishing, set: To enable only certificate publishing, set: (BZ# 1325071 ) The searchBase configuration option has been added to the DirAclAuthz PKI Server plug-in To support reading different sets of authorization access control lists (ACL), the searchBase configuration option has been added to the DirAclAuthz PKI Server plug-in. As a result, you can set the sub-tree from which the plug-in loads ACLs. (BZ# 1388622 ) For better performance, Certificate System now supports ephemeral requests Before this update, Certificate System key recovery agent (KRA) instances always stored recovery and storage requests of secrets in the LDAP back end. This is required to store the state if multiple agents must approve the request. However, if the request is processed immediately and only one agent must approve the request, storing the state is not required. To improve performance, you can now set the kra.ephemeralRequests=true option in the /var/lib/pki/<instance>/kra/conf/CS.cfg file to no longer store requests in the LDAP back end. (BZ# 1392068 ) Section headers in PKI deployment configuration file are no longer case sensitive The section headers (such as [Tomcat] ) in the PKI deployment configuration file were previously case-sensitive. This behavior increased the chance of an error while providing no benefit. Starting with this release, section headers in the configuration file are case-insensitive, reducing the chance of an error occurring. (BZ# 1447144 ) Certificate System now supports installing a CA using HSM on FIPS-enabled Red Hat Enterprise Linux During the installation of a Certificate System Certificate Authority (CA) instance, the installer needs to restart the instance. During this restart, instances on an operating system having the Federal Information Processing Standard (FIPS) mode enabled and using a hardware security module (HSM) need to connect to the non-secure HTTP port instead of the HTTPS port. With this update, it is now possible to install a Certificate System instance on FIPS-enabled Red Hat Enterprise Linux using an HSM. (BZ# 1450143 ) CMC requests now use a random IV for AES and 3DES encryption With this update, Certificate Management over CMS (CMC) requests in PKI Server use a randomly generated initialization vector (IV) when encrypting a key to be archived.
Previously, the client and server code used a fixed IV in this scenario. The CMC client code has been enhanced, and as a result, using random IVs increases security when performing encryption for both Advanced Encryption Standard (AES) and Triple Data Encryption Algorithm (3DES). (BZ# 1458055 )
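As a brief illustration of the KRA note above, the following hedged sketch enables ephemeral requests and then restarts the instance; the instance name pki-tomcat is an assumption and may differ in your deployment:

# Set kra.ephemeralRequests=true in /var/lib/pki/pki-tomcat/kra/conf/CS.cfg, then restart:
systemctl restart pki-tomcatd@pki-tomcat.service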
Configuration and command examples referenced in this chapter:

Trusted AD domain section template in /etc/sssd/sssd.conf:
[domain/main_domain/trusted_domain]

Example section name for the IdM domain ipa.com with the trusted AD domain ad.com:
[domain/ipa.com/ad.com]

authconfig command that configures SSSD for smart card authentication:
authconfig --enablesssd --enablesssdauth --enablesmartcard --smartcardmodule=sssd --smartcardaction=0 --updateall

dbmon.sh invocation that monitors the slapd-localhost instance:
SERVID=slapd-localhost INCR=1 BINDDN="cn=Directory Manager" BINDPW="password" dbmon.sh

pkispawn parameters that remove and re-add the signing certificate LDAP entry (X represents the serial number of the signing certificate, in decimal):
pki_ca_signing_record_create=False
pki_ca_signing_serial_number=X

CS.cfg setting that enables or disables both certificate and CRL publishing:
ca.publish.enable = True|False

CS.cfg settings that enable only CRL publishing:
ca.publish.enable = True
ca.publish.cert.enable = False

CS.cfg settings that enable only certificate publishing:
ca.publish.enable = True
ca.publish.crl.enable = False
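In addition, the following is a hedged sketch of enabling the KCM credential cache described earlier in this chapter; the socket unit name and configuration file location reflect common RHEL 7 defaults and should be verified locally:

# Enable and start the SSSD KCM responder socket
systemctl enable sssd-kcm.socket
systemctl start sssd-kcm.socket
# In /etc/krb5.conf, point the Kerberos library at the KCM credential cache:
[libdefaults]
default_ccache_name = KCM: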
Chapter 5. Compliance Operator 5.1. Compliance Operator overview OpenShift Container Platform Compliance Operator (CO) runs compliance scans and provides remediations to assist users in meeting compliance standards. For the latest updates, see the Compliance Operator release notes . If needed, you can engage Red Hat support . Important The Compliance Operator does not automatically perform remediations. Ensuring compliance standards are met is required by the user. Compliance Operator concepts Understanding the Compliance Operator Understanding the Custom Resource Definitions Compliance Operator management Installing the Compliance Operator Updating the Compliance Operator Managing the Compliance Operator Uninstalling the Compliance Operator Compliance Operator scan management Supported compliance profiles Compliance Operator scans Tailoring the Compliance Operator Retrieving Compliance Operator raw results Managing Compliance Operator remediation Performing advanced Compliance Operator tasks Troubleshooting the Compliance Operator Using the oc-compliance plugin 5.2. Compliance Operator release notes The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. These release notes track the development of the Compliance Operator in the OpenShift Container Platform. For an overview of the Compliance Operator, see Understanding the Compliance Operator . To access the latest release, see Updating the Compliance Operator . 5.2.1. OpenShift Compliance Operator 1.4.0 The following advisory is available for the OpenShift Compliance Operator 1.4.0: RHBA-2023:7658 - OpenShift Compliance Operator bug fix and enhancement update 5.2.1.1. New features and enhancements With this update, clusters which use custom node pools outside the default worker and master node pools no longer need to supply additional variables to ensure Compliance Operator aggregates the configuration file for that node pool. Users can now pause scan schedules by setting the ScanSetting.suspend attribute to True . This allows users to suspend a scan schedule and reactivate it without the need to delete and re-create the ScanSettingBinding . This simplifies pausing scan schedules during maintenance periods. ( CMP-2123 ) Compliance Operator now supports an optional version attribute on Profile custom resources. ( CMP-2125 ) Compliance Operator now supports profile names in ComplianceRules . ( CMP-2126 ) Compliance Operator compatibility with improved cronjob API improvements is available in this release. ( CMP-2310 ) 5.2.1.2. Bug fixes Previously, on a cluster with Windows nodes, some rules will FAIL after auto remediation is applied because the Windows nodes were not skipped by the compliance scan. With this release, Windows nodes are correctly skipped when scanning. ( OCPBUGS-7355 ) With this update, rprivate default mount propagation is now handled correctly for root volume mounts of pods that rely on multipathing. ( OCPBUGS-17494 ) Previously, the Compliance Operator would generate a remediation for coreos_vsyscall_kernel_argument without reconciling the rule even while applying the remediation. With release 1.4.0, the coreos_vsyscall_kernel_argument rule properly evaluates kernel arguments and generates an appropriate remediation.( OCPBUGS-8041 ) Before this update, rule rhcos4-audit-rules-login-events-faillock would fail even after auto-remediation has been applied. 
With this update, rhcos4-audit-rules-login-events-faillock failure locks are now applied correctly after auto-remediation. ( OCPBUGS-24594 ) Previously, upgrades from Compliance Operator 1.3.1 to Compliance Operator 1.4.0 would cause OVS rules scan results to go from PASS to NOT-APPLICABLE . With this update, OVS rules scan results now show PASS ( OCPBUGS-25323 ) 5.2.2. OpenShift Compliance Operator 1.3.1 The following advisory is available for the OpenShift Compliance Operator 1.3.1: RHBA-2023:5669 - OpenShift Compliance Operator bug fix and enhancement update This update addresses a CVE in an underlying dependency. Important It is recommended to update the Compliance Operator to version 1.3.1 or later before updating your OpenShift Container Platform cluster to version 4.14 or later. 5.2.2.1. New features and enhancements You can install and use the Compliance Operator in an OpenShift Container Platform cluster running in FIPS mode. Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 5.2.2.2. Known issue On a cluster with Windows nodes, some rules will FAIL after auto remediation is applied because the Windows nodes are not skipped by the compliance scan. This differs from the expected results because the Windows nodes must be skipped when scanning. ( OCPBUGS-7355 ) 5.2.3. OpenShift Compliance Operator 1.3.0 The following advisory is available for the OpenShift Compliance Operator 1.3.0: RHBA-2023:5102 - OpenShift Compliance Operator enhancement update 5.2.3.1. New features and enhancements The Defense Information Systems Agency Security Technical Implementation Guide (DISA-STIG) for OpenShift Container Platform is now available from Compliance Operator 1.3.0. See Supported compliance profiles for additional information. Compliance Operator 1.3.0 now supports IBM Power and IBM Z for NIST 800-53 Moderate-Impact Baseline for OpenShift Container Platform platform and node profiles. 5.2.4. OpenShift Compliance Operator 1.2.0 The following advisory is available for the OpenShift Compliance Operator 1.2.0: RHBA-2023:4245 - OpenShift Compliance Operator enhancement update 5.2.4.1. New features and enhancements The CIS OpenShift Container Platform 4 Benchmark v1.4.0 profile is now available for platform and node applications. To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark , where you can then register to download the benchmark. Important Upgrading to Compliance Operator 1.2.0 will overwrite the CIS OpenShift Container Platform 4 Benchmark 1.1.0 profiles. If your OpenShift Container Platform environment contains existing cis and cis-node remediations, there might be some differences in scan results after upgrading to Compliance Operator 1.2.0. Additional clarity for auditing security context constraints (SCCs) is now available for the scc-limit-container-allowed-capabilities rule. 5.2.4.2. Known issues When using the CIS OpenShift Container Platform 4 Benchmark v1.4.0 profile, some controls might fail due to tighter permissions in the CIS profile than in OpenShift Container Platform. For more information, see Solution article #7024725 . 5.2.5. 
OpenShift Compliance Operator 1.1.0 The following advisory is available for the OpenShift Compliance Operator 1.1.0: RHBA-2023:3630 - OpenShift Compliance Operator bug fix and enhancement update 5.2.5.1. New features and enhancements A start and end timestamp is now available in the ComplianceScan custom resource definition (CRD) status. The Compliance Operator can now be deployed on Hosted Control Planes using the OperatorHub by creating a Subscription file. For more information, see Installing the Compliance Operator on Hosted Control Planes . 5.2.5.2. Bug fixes Before this update, some Compliance Operator rule instructions were not present. After this update, instructions are improved for the following rules: classification_banner oauth_login_template_set oauth_logout_url_set oauth_provider_selection_set ocp_allowed_registries ocp_allowed_registries_for_import ( OCPBUGS-10473 ) Before this update, check accuracy and rule instructions were unclear. After this update, the check accuracy and instructions are improved for the following sysctl rules: kubelet-enable-protect-kernel-sysctl kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxbytes kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxkeys kubelet-enable-protect-kernel-sysctl-kernel-panic kubelet-enable-protect-kernel-sysctl-kernel-panic-on-oops kubelet-enable-protect-kernel-sysctl-vm-overcommit-memory kubelet-enable-protect-kernel-sysctl-vm-panic-on-oom ( OCPBUGS-11334 ) Before this update, the ocp4-alert-receiver-configured rule did not include instructions. With this update, the ocp4-alert-receiver-configured rule now includes improved instructions. ( OCPBUGS-7307 ) Before this update, the rhcos4-sshd-set-loglevel-info rule would fail for the rhcos4-e8 profile. With this update, the remediation for the sshd-set-loglevel-info rule was updated to apply the correct configuration changes, allowing subsequent scans to pass after the remediation is applied. ( OCPBUGS-7816 ) Before this update, a new installation of OpenShift Container Platform with the latest Compliance Operator install failed on the scheduler-no-bind-address rule. With this update, the scheduler-no-bind-address rule has been disabled on newer versions of OpenShift Container Platform since the parameter was removed. ( OCPBUGS-8347 ) 5.2.6. OpenShift Compliance Operator 1.0.0 The following advisory is available for the OpenShift Compliance Operator 1.0.0: RHBA-2023:1682 - OpenShift Compliance Operator bug fix update 5.2.6.1. New features and enhancements The Compliance Operator is now stable and the release channel is upgraded to stable . Future releases will follow Semantic Versioning . To access the latest release, see Updating the Compliance Operator . 5.2.6.2. Bug fixes Before this update, the compliance_operator_compliance_scan_error_total metric had an ERROR label with a different value for each error message. With this update, the compliance_operator_compliance_scan_error_total metric does not increase in values. ( OCPBUGS-1803 ) Before this update, the ocp4-api-server-audit-log-maxsize rule would result in a FAIL state. With this update, the error message has been removed from the metric, decreasing the cardinality of the metric in line with best practices. ( OCPBUGS-7520 ) Before this update, the rhcos4-enable-fips-mode rule description was misleading that FIPS could be enabled after installation. With this update, the rhcos4-enable-fips-mode rule description clarifies that FIPS must be enabled at install time. ( OCPBUGS-8358 ) 5.2.7. 
OpenShift Compliance Operator 0.1.61 The following advisory is available for the OpenShift Compliance Operator 0.1.61: RHBA-2023:0557 - OpenShift Compliance Operator bug fix update 5.2.7.1. New features and enhancements The Compliance Operator now supports timeout configuration for Scanner Pods. The timeout is specified in the ScanSetting object. If the scan is not completed within the timeout, the scan retries until the maximum number of retries is reached. See Configuring ScanSetting timeout for more information. 5.2.7.2. Bug fixes Before this update, Compliance Operator remediations required variables as inputs. Remediations without variables set were applied cluster-wide and resulted in stuck nodes, even though it appeared the remediation applied correctly. With this update, the Compliance Operator validates if a variable needs to be supplied using a TailoredProfile for a remediation. ( OCPBUGS-3864 ) Before this update, the instructions for ocp4-kubelet-configure-tls-cipher-suites were incomplete, requiring users to refine the query manually. With this update, the query provided in ocp4-kubelet-configure-tls-cipher-suites returns the actual results to perform the audit steps. ( OCPBUGS-3017 ) Before this update, system reserved parameters were not generated in kubelet configuration files, causing the Compliance Operator to fail to unpause the machine config pool. With this update, the Compliance Operator omits system reserved parameters during machine configuration pool evaluation. ( OCPBUGS-4445 ) Before this update, ComplianceCheckResult objects did not have correct descriptions. With this update, the Compliance Operator sources the ComplianceCheckResult information from the rule description. ( OCPBUGS-4615 ) Before this update, the Compliance Operator did not check for empty kubelet configuration files when parsing machine configurations. As a result, the Compliance Operator would panic and crash. With this update, the Compliance Operator implements improved checking of the kubelet configuration data structure and only continues if it is fully rendered. ( OCPBUGS-4621 ) Before this update, the Compliance Operator generated remediations for kubelet evictions based on machine config pool name and a grace period, resulting in multiple remediations for a single eviction rule. With this update, the Compliance Operator applies all remediations for a single rule. ( OCPBUGS-4338 ) Before this update, a regression occurred when attempting to create a ScanSettingBinding that was using a TailoredProfile with a non-default MachineConfigPool marked the ScanSettingBinding as Failed . With this update, functionality is restored and custom ScanSettingBinding using a TailoredProfile performs correctly. ( OCPBUGS-6827 ) Before this update, some kubelet configuration parameters did not have default values. With this update, the following parameters contain default values ( OCPBUGS-6708 ): ocp4-cis-kubelet-enable-streaming-connections ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-available ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree ocp4-cis-kubelet-eviction-thresholds-set-hard-memory-available ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-available Before this update, the selinux_confinement_of_daemons rule failed running on the kubelet because of the permissions necessary for the kubelet to run. With this update, the selinux_confinement_of_daemons rule is disabled. ( OCPBUGS-6968 ) 5.2.8. 
OpenShift Compliance Operator 0.1.59 The following advisory is available for the OpenShift Compliance Operator 0.1.59: RHBA-2022:8538 - OpenShift Compliance Operator bug fix update 5.2.8.1. New features and enhancements The Compliance Operator now supports Payment Card Industry Data Security Standard (PCI-DSS) ocp4-pci-dss and ocp4-pci-dss-node profiles on the ppc64le architecture. 5.2.8.2. Bug fixes Previously, the Compliance Operator did not support the Payment Card Industry Data Security Standard (PCI DSS) ocp4-pci-dss and ocp4-pci-dss-node profiles on different architectures such as ppc64le . Now, the Compliance Operator supports ocp4-pci-dss and ocp4-pci-dss-node profiles on the ppc64le architecture. ( OCPBUGS-3252 ) Previously, after the recent update to version 0.1.57, the rerunner service account (SA) was no longer owned by the cluster service version (CSV), which caused the SA to be removed during the Operator upgrade. Now, the CSV owns the rerunner SA in 0.1.59, and upgrades from any version will not result in a missing SA. ( OCPBUGS-3452 ) 5.2.9. OpenShift Compliance Operator 0.1.57 The following advisory is available for the OpenShift Compliance Operator 0.1.57: RHBA-2022:6657 - OpenShift Compliance Operator bug fix update 5.2.9.1. New features and enhancements KubeletConfig checks changed from Node to Platform type. KubeletConfig checks the default configuration of the KubeletConfig . The configuration files are aggregated from all nodes into a single location per node pool. See Evaluating KubeletConfig rules against default configuration values . The ScanSetting Custom Resource now allows users to override the default CPU and memory limits of scanner pods through the scanLimits attribute. For more information, see Increasing Compliance Operator resource limits . A PriorityClass object can now be set through ScanSetting . This ensures the Compliance Operator is prioritized and minimizes the chance that the cluster falls out of compliance. For more information, see Setting PriorityClass for ScanSetting scans . 5.2.9.2. Bug fixes Previously, the Compliance Operator hard-coded notifications to the default openshift-compliance namespace. If the Operator were installed in a non-default namespace, the notifications would not work as expected. Now, notifications work in non-default openshift-compliance namespaces. ( BZ#2060726 ) Previously, the Compliance Operator was unable to evaluate default configurations used by kubelet objects, resulting in inaccurate results and false positives. This new feature evaluates the kubelet configuration and now reports accurately. ( BZ#2075041 ) Previously, the Compliance Operator reported the ocp4-kubelet-configure-event-creation rule in a FAIL state after applying an automatic remediation because the eventRecordQPS value was set higher than the default value. Now, the ocp4-kubelet-configure-event-creation rule remediation sets the default value, and the rule applies correctly. ( BZ#2082416 ) The ocp4-configure-network-policies rule requires manual intervention to perform effectively. New descriptive instructions and rule updates increase applicability of the ocp4-configure-network-policies rule for clusters using Calico CNIs. ( BZ#2091794 ) Previously, the Compliance Operator would not clean up pods used to scan infrastructure when using the debug=true option in the scan settings. This caused pods to be left on the cluster even after deleting the ScanSettingBinding . 
Now, pods are always deleted when a ScanSettingBinding is deleted. ( BZ#2092913 ) Previously, the Compliance Operator used an older version of the operator-sdk command that caused alerts about deprecated functionality. Now, an updated version of the operator-sdk command is included and there are no more alerts for deprecated functionality. ( BZ#2098581 ) Previously, the Compliance Operator would fail to apply remediations if it could not determine the relationship between kubelet and machine configurations. Now, the Compliance Operator has improved handling of the machine configurations and is able to determine if a kubelet configuration is a subset of a machine configuration. ( BZ#2102511 ) Previously, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation did not properly describe success criteria. As a result, the requirements for RotateKubeletClientCertificate were unclear. Now, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation reports accurately regardless of the configuration present in the kubelet configuration file. ( BZ#2105153 ) Previously, the rule for checking idle streaming timeouts did not consider default values, resulting in inaccurate rule reporting. Now, more robust checks ensure increased accuracy in results based on default configuration values. ( BZ#2105878 ) Previously, the Compliance Operator would fail to fetch API resources when parsing machine configurations without Ignition specifications, which caused the api-check-pods processes to crash loop. Now, the Compliance Operator handles Machine Config Pools that do not have Ignition specifications correctly. ( BZ#2117268 ) Previously, rules evaluating the modprobe configuration would fail even after applying remediations due to a mismatch in values for the modprobe configuration. Now, the same values are used for the modprobe configuration in checks and remediations, ensuring consistent results. ( BZ#2117747 ) 5.2.9.3. Deprecations Specifying Install into all namespaces in the cluster or setting the WATCH_NAMESPACES environment variable to "" no longer affects all namespaces. Any API resources installed in namespaces not specified at the time of Compliance Operator installation are no longer operational. API resources might require creation in the selected namespace, or the openshift-compliance namespace by default. This change improves the Compliance Operator's memory usage. 5.2.10. OpenShift Compliance Operator 0.1.53 The following advisory is available for the OpenShift Compliance Operator 0.1.53: RHBA-2022:5537 - OpenShift Compliance Operator bug fix update 5.2.10.1. Bug fixes Previously, the ocp4-kubelet-enable-streaming-connections rule contained an incorrect variable comparison, resulting in false positive scan results. Now, the Compliance Operator provides accurate scan results when setting streamingConnectionIdleTimeout . ( BZ#2069891 ) Previously, group ownership for /etc/openvswitch/conf.db was incorrect on IBM Z architectures, resulting in ocp4-cis-node-worker-file-groupowner-ovs-conf-db check failures. Now, the check is marked NOT-APPLICABLE on IBM Z architecture systems. ( BZ#2072597 ) Previously, the ocp4-cis-scc-limit-container-allowed-capabilities rule reported in a FAIL state due to incomplete data regarding the security context constraints (SCC) rules in the deployment. Now, the result is MANUAL , which is consistent with other checks that require human intervention.
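A hedged example of listing the checks that require human intervention, such as the SCC rule mentioned above; the label name used in the selector is an assumption based on common Compliance Operator conventions:

oc get compliancecheckresults -n openshift-compliance -l compliance.openshift.io/check-status=MANUAL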
( BZ#2077916 ) Previously, the following rules failed to account for additional configuration paths for API servers and TLS certificates and keys, resulting in reported failures even if the certificates and keys were set properly: ocp4-cis-api-server-kubelet-client-cert ocp4-cis-api-server-kubelet-client-key ocp4-cis-kubelet-configure-tls-cert ocp4-cis-kubelet-configure-tls-key Now, the rules report accurately and observe legacy file paths specified in the kubelet configuration file. ( BZ#2079813 ) Previously, the content_rule_oauth_or_oauthclient_inactivity_timeout rule did not account for a configurable timeout set by the deployment when assessing compliance for timeouts. This resulted in the rule failing even if the timeout was valid. Now, the Compliance Operator uses the var_oauth_inactivity_timeout variable to set valid timeout length. ( BZ#2081952 ) Previously, the Compliance Operator used administrative permissions on namespaces not labeled appropriately for privileged use, resulting in warning messages regarding pod security-level violations. Now, the Compliance Operator has appropriate namespace labels and permission adjustments to access results without violating permissions. ( BZ#2088202 ) Previously, applying auto remediations for rhcos4-high-master-sysctl-kernel-yama-ptrace-scope and rhcos4-sysctl-kernel-core-pattern resulted in subsequent failures of those rules in scan results, even though they were remediated. Now, the rules report PASS accurately, even after remediations are applied.( BZ#2094382 ) Previously, the Compliance Operator would fail in a CrashLoopBackoff state because of out-of-memory exceptions. Now, the Compliance Operator is improved to handle large machine configuration data sets in memory and function correctly. ( BZ#2094854 ) 5.2.10.2. Known issue When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods: USD oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis ( BZ#2092913 ) 5.2.11. OpenShift Compliance Operator 0.1.52 The following advisory is available for the OpenShift Compliance Operator 0.1.52: RHBA-2022:4657 - OpenShift Compliance Operator bug fix update 5.2.11.1. New features and enhancements The FedRAMP high SCAP profile is now available for use in OpenShift Container Platform environments. For more information, See Supported compliance profiles . 5.2.11.2. Bug fixes Previously, the OpenScap container would crash due to a mount permission issue in a security environment where DAC_OVERRIDE capability is dropped. Now, executable mount permissions are applied to all users. ( BZ#2082151 ) Previously, the compliance rule ocp4-configure-network-policies could be configured as MANUAL . Now, compliance rule ocp4-configure-network-policies is set to AUTOMATIC . ( BZ#2072431 ) Previously, the Cluster Autoscaler would fail to scale down because the Compliance Operator scan pods were never removed after a scan. Now, the pods are removed from each node by default unless explicitly saved for debugging purposes. ( BZ#2075029 ) Previously, applying the Compliance Operator to the KubeletConfig would result in the node going into a NotReady state due to unpausing the Machine Config Pools too early. Now, the Machine Config Pools are unpaused appropriately and the node operates correctly. 
( BZ#2071854 ) Previously, the Machine Config Operator used base64 instead of url-encoded code in the latest release, causing Compliance Operator remediation to fail. Now, the Compliance Operator checks encoding to handle both base64 and url-encoded Machine Config code and the remediation applies correctly. ( BZ#2082431 ) 5.2.11.3. Known issue When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods: USD oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis ( BZ#2092913 ) 5.2.12. OpenShift Compliance Operator 0.1.49 The following advisory is available for the OpenShift Compliance Operator 0.1.49: RHBA-2022:1148 - OpenShift Compliance Operator bug fix and enhancement update 5.2.12.1. New features and enhancements The Compliance Operator is now supported on the following architectures: IBM Power IBM Z IBM LinuxONE 5.2.12.2. Bug fixes Previously, the openshift-compliance content did not include platform-specific checks for network types. As a result, OVN- and SDN-specific checks would show as failed instead of not-applicable based on the network configuration. Now, new rules contain platform checks for networking rules, resulting in a more accurate assessment of network-specific checks. ( BZ#1994609 ) Previously, the ocp4-moderate-routes-protected-by-tls rule incorrectly checked TLS settings, which resulted in the rule failing the check even if the connection used a secure SSL/TLS protocol. Now, the check properly evaluates TLS settings that are consistent with the networking guidance and profile recommendations. ( BZ#2002695 ) Previously, ocp-cis-configure-network-policies-namespace used pagination when requesting namespaces. This caused the rule to fail because the deployments truncated lists of more than 500 namespaces. Now, the entire namespace list is requested, and the rule for checking configured network policies works for deployments with more than 500 namespaces. ( BZ#2038909 ) Previously, remediations using the sshd jinja macros were hard-coded to specific sshd configurations. As a result, the configurations were inconsistent with the content the rules were checking for and the check would fail. Now, the sshd configuration is parameterized and the rules apply successfully. ( BZ#2049141 ) Previously, the ocp4-cluster-version-operator-verify-integrity always checked the first entry in the Cluster Version Operator (CVO) history. As a result, the upgrade would fail in situations where subsequent versions of OpenShift Container Platform would be verified. Now, the compliance check result for ocp4-cluster-version-operator-verify-integrity is able to detect verified versions and is accurate with the CVO history. ( BZ#2053602 ) Previously, the ocp4-api-server-no-adm-ctrl-plugins-disabled rule did not check for a list of empty admission controller plugins. As a result, the rule would always fail, even if all admission plugins were enabled. Now, more robust checking of the ocp4-api-server-no-adm-ctrl-plugins-disabled rule accurately passes with all admission controller plugins enabled. ( BZ#2058631 ) Previously, scans did not contain platform checks for running against Linux worker nodes. As a result, running scans against worker nodes that were not Linux-based resulted in a never-ending scan loop. Now, scans are scheduled appropriately based on platform type and labels, and they complete successfully. ( BZ#2056911 ) 5.2.13.
OpenShift Compliance Operator 0.1.48 The following advisory is available for the OpenShift Compliance Operator 0.1.48: RHBA-2022:0416 - OpenShift Compliance Operator bug fix and enhancement update 5.2.13.1. Bug fixes Previously, some rules associated with extended Open Vulnerability and Assessment Language (OVAL) definitions had a checkType of None . This was because the Compliance Operator was not processing extended OVAL definitions when parsing rules. With this update, content from extended OVAL definitions is parsed so that these rules now have a checkType of either Node or Platform . ( BZ#2040282 ) Previously, a manually created MachineConfig object for KubeletConfig prevented a KubeletConfig object from being generated for remediation, leaving the remediation in the Pending state. With this release, a KubeletConfig object is created by the remediation, regardless if there is a manually created MachineConfig object for KubeletConfig . As a result, KubeletConfig remediations now work as expected. ( BZ#2040401 ) 5.2.14. OpenShift Compliance Operator 0.1.47 The following advisory is available for the OpenShift Compliance Operator 0.1.47: RHBA-2022:0014 - OpenShift Compliance Operator bug fix and enhancement update 5.2.14.1. New features and enhancements The Compliance Operator now supports the following compliance benchmarks for the Payment Card Industry Data Security Standard (PCI DSS): ocp4-pci-dss ocp4-pci-dss-node Additional rules and remediations for FedRAMP moderate impact level are added to the OCP4-moderate, OCP4-moderate-node, and rhcos4-moderate profiles. Remediations for KubeletConfig are now available in node-level profiles. 5.2.14.2. Bug fixes Previously, if your cluster was running OpenShift Container Platform 4.6 or earlier, remediations for USBGuard-related rules would fail for the moderate profile. This is because the remediations created by the Compliance Operator were based on an older version of USBGuard that did not support drop-in directories. Now, invalid remediations for USBGuard-related rules are not created for clusters running OpenShift Container Platform 4.6. If your cluster is using OpenShift Container Platform 4.6, you must manually create remediations for USBGuard-related rules. Additionally, remediations are created only for rules that satisfy minimum version requirements. ( BZ#1965511 ) Previously, when rendering remediations, the compliance operator would check that the remediation was well-formed by using a regular expression that was too strict. As a result, some remediations, such as those that render sshd_config , would not pass the regular expression check and therefore, were not created. The regular expression was found to be unnecessary and removed. Remediations now render correctly. ( BZ#2033009 ) 5.2.15. OpenShift Compliance Operator 0.1.44 The following advisory is available for the OpenShift Compliance Operator 0.1.44: RHBA-2021:4530 - OpenShift Compliance Operator bug fix and enhancement update 5.2.15.1. New features and enhancements In this release, the strictNodeScan option is now added to the ComplianceScan , ComplianceSuite and ScanSetting CRs. This option defaults to true which matches the behavior, where an error occurred if a scan was not able to be scheduled on a node. Setting the option to false allows the Compliance Operator to be more permissive about scheduling scans. 
Environments with ephemeral nodes can set the strictNodeScan value to false, which allows a compliance scan to proceed, even if some of the nodes in the cluster are not available for scheduling. You can now customize the node that is used to schedule the result server workload by configuring the nodeSelector and tolerations attributes of the ScanSetting object. These attributes are used to place the ResultServer pod, the pod that is used to mount a PV storage volume and store the raw Asset Reporting Format (ARF) results. Previously, the nodeSelector and the tolerations parameters defaulted to selecting one of the control plane nodes and tolerating the node-role.kubernetes.io/master taint . This did not work in environments where control plane nodes are not permitted to mount PVs. This feature provides a way for you to select the node and tolerate a different taint in those environments. The Compliance Operator can now remediate KubeletConfig objects. A comment containing an error message is now added to help content developers differentiate between objects that do not exist in the cluster and objects that cannot be fetched. Rule objects now contain two new attributes, checkType and description . These attributes allow you to determine if the rule pertains to a node check or platform check, and also allow you to review what the rule does. This enhancement removes the requirement that you have to extend an existing profile to create a tailored profile. This means the extends field in the TailoredProfile CRD is no longer mandatory. You can now select a list of rule objects to create a tailored profile. Note that you must select whether your profile applies to nodes or the platform by setting the compliance.openshift.io/product-type: annotation or by setting the -node suffix for the TailoredProfile CR. In this release, the Compliance Operator is now able to schedule scans on all nodes irrespective of their taints. Previously, the scan pods would only tolerate the node-role.kubernetes.io/master taint , meaning that they would either run on nodes with no taints or only on nodes with the node-role.kubernetes.io/master taint. In deployments that use custom taints for their nodes, this resulted in the scans not being scheduled on those nodes. Now, the scan pods tolerate all node taints. In this release, the Compliance Operator supports the following North American Electric Reliability Corporation (NERC) security profiles: ocp4-nerc-cip ocp4-nerc-cip-node rhcos4-nerc-cip In this release, the Compliance Operator supports the NIST 800-53 Moderate-Impact Baseline for the Red Hat OpenShift - Node level, ocp4-moderate-node, security profile. 5.2.15.2. Templating and variable use In this release, the remediation template now allows multi-value variables. With this update, the Compliance Operator can change remediations based on variables that are set in the compliance profile. This is useful for remediations that include deployment-specific values such as timeouts, NTP server host names, or similar. Additionally, the ComplianceCheckResult objects now use the label compliance.openshift.io/check-has-value that lists the variables a check has used. 5.2.15.3. Bug fixes Previously, while performing a scan, an unexpected termination occurred in one of the scanner containers of the pods. In this release, the Compliance Operator uses the latest OpenSCAP version 1.3.5 to avoid a crash. Previously, using autoReplyRemediations to apply remediations triggered an update of the cluster nodes.
This was disruptive if some of the remediations did not include all of the required input variables. Now, if a remediation is missing one or more required input variables, it is assigned a state of NeedsReview . If one or more remediations are in a NeedsReview state, the machine config pool remains paused, and the remediations are not applied until all of the required variables are set. This helps minimize disruption to the nodes. The RBAC Role and Role Binding used for Prometheus metrics are changed to 'ClusterRole' and 'ClusterRoleBinding' to ensure that monitoring works without customization. Previously, if an error occurred while parsing a profile, rules or variables objects were removed and deleted from the profile. Now, if an error occurs during parsing, the profileparser annotates the object with a temporary annotation that prevents the object from being deleted until after parsing completes. ( BZ#1988259 ) Previously, an error occurred if titles or descriptions were missing from a tailored profile. Because the XCCDF standard requires titles and descriptions for tailored profiles, titles and descriptions are now required to be set in TailoredProfile CRs. Previously, when using tailored profiles, TailoredProfile variable values were allowed to be set using only a specific selection set. This restriction is now removed, and TailoredProfile variables can be set to any value. 5.2.16. Release Notes for Compliance Operator 0.1.39 The following advisory is available for the OpenShift Compliance Operator 0.1.39: RHBA-2021:3214 - OpenShift Compliance Operator bug fix and enhancement update 5.2.16.1. New features and enhancements Previously, the Compliance Operator was unable to parse Payment Card Industry Data Security Standard (PCI DSS) references. Now, the Operator can parse compliance content that is provided with PCI DSS profiles. Previously, the Compliance Operator was unable to execute rules for AU-5 control in the moderate profile. Now, permission is added to the Operator so that it can read Prometheusrules.monitoring.coreos.com objects and run the rules that cover AU-5 control in the moderate profile. 5.2.17. Additional resources Understanding the Compliance Operator 5.3. Compliance Operator concepts 5.3.1. Understanding the Compliance Operator The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. The Compliance Operator assesses compliance of both the Kubernetes API resources of OpenShift Container Platform, as well as the nodes running the cluster. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content. Important The Compliance Operator is available for Red Hat Enterprise Linux CoreOS (RHCOS) deployments only. 5.3.1.1. Compliance Operator profiles There are several profiles available as part of the Compliance Operator installation. You can use the oc get command to view available profiles, profile details, and specific rules. View the available profiles: USD oc get -n openshift-compliance profiles.compliance Example output NAME AGE ocp4-cis 94m ocp4-cis-node 94m ocp4-e8 94m ocp4-high 94m ocp4-high-node 94m ocp4-moderate 94m ocp4-moderate-node 94m ocp4-nerc-cip 94m ocp4-nerc-cip-node 94m ocp4-pci-dss 94m ocp4-pci-dss-node 94m rhcos4-e8 94m rhcos4-high 94m rhcos4-moderate 94m rhcos4-nerc-cip 94m These profiles represent different compliance benchmarks. 
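To see at a glance which product and check type each profile targets, you can query the product annotations on the Profile objects. This is a convenience sketch that relies on standard oc jsonpath output rather than a dedicated Compliance Operator command:

oc get -n openshift-compliance profiles.compliance \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.compliance\.openshift\.io/product-type}{"\t"}{.metadata.annotations.compliance\.openshift\.io/product}{"\n"}{end}'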
Each profile has the product name that it applies to added as a prefix to the profile's name. ocp4-e8 applies the Essential 8 benchmark to the OpenShift Container Platform product, while rhcos4-e8 applies the Essential 8 benchmark to the Red Hat Enterprise Linux CoreOS (RHCOS) product. Run the following command to view the details of the rhcos4-e8 profile: USD oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8 Example 5.1. Example output apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: "2022-10-19T12:06:49Z" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: "43699" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential Eight Run the following command to view the details of the 
rhcos4-audit-rules-login-events rule: USD oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events Example 5.2. Example output apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: "2022-10-19T12:07:08Z" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: "44819" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. 5.3.1.1.1. Compliance Operator profile types There are two types of compliance profiles available: Platform and Node. Platform Platform scans target your OpenShift Container Platform cluster. Node Node scans target the nodes of the cluster. Important For compliance profiles that have Node and Platform applications, such as pci-dss compliance profiles, you must run both in your OpenShift Container Platform environment. 5.3.1.2. Additional resources Supported compliance profiles 5.3.2. Understanding the Custom Resource Definitions The Compliance Operator in the OpenShift Container Platform provides you with several Custom Resource Definitions (CRDs) to accomplish the compliance scans. To run a compliance scan, it leverages the predefined security policies, which are derived from the ComplianceAsCode community project. The Compliance Operator converts these security policies into CRDs, which you can use to run compliance scans and get remediations for the issues found. 5.3.2.1. 
CRDs workflow The CRDs provide you with the following workflow to complete the compliance scans: Define your compliance scan requirements Configure the compliance scan settings Process the compliance requirements with the compliance scan settings Monitor the compliance scans Check the compliance scan results 5.3.2.2. Defining the compliance scan requirements By default, the Compliance Operator CRDs include ProfileBundle and Profile objects, in which you can define and set the rules for your compliance scan requirements. You can also customize the default profiles by using a TailoredProfile object. 5.3.2.2.1. ProfileBundle object When you install the Compliance Operator, it includes ready-to-run ProfileBundle objects. The Compliance Operator parses the ProfileBundle object and creates a Profile object for each profile in the bundle. It also parses Rule and Variable objects, which are used by the Profile object. Example ProfileBundle object apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance status: dataStreamStatus: VALID 1 1 Indicates whether the Compliance Operator was able to parse the content files. Note When parsing the contentFile fails, an errorMessage attribute appears, which provides details of the error that occurred. Troubleshooting When you roll back to a known content image from an invalid image, the ProfileBundle object stops responding and displays the PENDING state. As a workaround, you can move to a different image than the invalid one. Alternatively, you can delete and re-create the ProfileBundle object to return to the working state. 5.3.2.2.2. Profile object The Profile object defines the rules and variables that can be evaluated for a certain compliance standard. It contains parsed-out details about an OpenSCAP profile, such as its XCCDF identifier and profile checks for a Node or Platform type. You can either directly use the Profile object or further customize it using a TailoredProfile object. Note You cannot create or modify the Profile object manually because it is derived from a single ProfileBundle object. Typically, a single ProfileBundle object can include several Profile objects. Example Profile object apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: "YYYY-MM-DDTMM:HH:SSZ" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: "<version number>" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile> 1 Specify the XCCDF name of the profile. Use this identifier when you define a ComplianceScan object as the value of the profile attribute of the scan. 2 Specify either a Node or Platform . Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform. 3 Specify the list of rules for the profile. 
Each rule corresponds to a single check. 5.3.2.2.3. Rule object The Rule object, which forms the profiles, are also exposed as objects. Use the Rule object to define your compliance check requirements and specify how it could be fixed. Example Rule object apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule> 1 Specify the type of check this rule executes. Node profiles scan the cluster nodes and Platform profiles scan the Kubernetes platform. An empty value indicates there is no automated check. 2 Specify the XCCDF name of the rule, which is parsed directly from the datastream. 3 Specify the severity of the rule when it fails. Note The Rule object gets an appropriate label for an easy identification of the associated ProfileBundle object. The ProfileBundle also gets specified in the OwnerReferences of this object. 5.3.2.2.4. TailoredProfile object Use the TailoredProfile object to modify the default Profile object based on your organization requirements. You can enable or disable rules, set variable values, and provide justification for the customization. After validation, the TailoredProfile object creates a ConfigMap , which can be referenced by a ComplianceScan object. Tip You can use the TailoredProfile object by referencing it in a ScanSettingBinding object. For more information about ScanSettingBinding , see ScanSettingBinding object. Example TailoredProfile object apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4 1 This is optional. Name of the Profile object upon which the TailoredProfile is built. If no value is set, a new profile is created from the enableRules list. 2 Specifies the XCCDF name of the tailored profile. 3 Specifies the ConfigMap name, which can be used as the value of the tailoringConfigMap.name attribute of a ComplianceScan . 4 Shows the state of the object such as READY , PENDING , and FAILURE . If the state of the object is ERROR , then the attribute status.errorMessage provides the reason for the failure. With the TailoredProfile object, it is possible to create a new Profile object using the TailoredProfile construct. 
To create a new Profile , set the following configuration parameters: an appropriate title, an empty extends value, and the scan type annotation on the TailoredProfile object: compliance.openshift.io/product-type: Platform/Node Note If you have not set the product-type annotation, the Compliance Operator defaults to Platform scan type. Adding the -node suffix to the name of the TailoredProfile object results in node scan type. 5.3.2.3. Configuring the compliance scan settings After you have defined the requirements of the compliance scan, you can configure it by specifying the type of the scan, occurrence of the scan, and location of the scan. To do so, the Compliance Operator provides you with a ScanSetting object. 5.3.2.3.1. ScanSetting object Use the ScanSetting object to define and reuse the operational policies to run your scans. By default, the Compliance Operator creates the following ScanSetting objects: default - it runs a scan every day at 1 AM on both master and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Remediation is neither applied nor updated automatically. default-auto-apply - it runs a scan every day at 1 AM on both control plane and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Both autoApplyRemediations and autoUpdateRemediations are set to true. Example ScanSetting object apiVersion: compliance.openshift.io/v1alpha1 autoApplyRemediations: true 1 autoUpdateRemediations: true 2 kind: ScanSetting maxRetryOnTimeout: 3 metadata: creationTimestamp: "2022-10-18T20:21:00Z" generation: 1 name: default-auto-apply namespace: openshift-compliance resourceVersion: "38840" uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 rawResultStorage: nodeSelector: node-role.kubernetes.io/master: "" pvAccessModes: - ReadWriteOnce rotation: 3 3 size: 1Gi 4 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m 1 Set to true to enable auto remediations. Set to false to disable auto remediations. 2 Set to true to enable auto remediations for content updates. Set to false to disable auto remediations for content updates. 3 Specify the number of stored scans in the raw result format. The default value is 3 . As the older results get rotated, the administrator must store the results elsewhere before the rotation happens. Note To disable the rotation policy, set the value to 0 . 4 Specify the storage size that should be created for the scan to store the raw results. The default value is 1Gi . 5 Specify the node-role.kubernetes.io label value to schedule the scan for Node type. This value has to match the name of a MachineConfigPool . 6 Specify how often the scan should be run in cron format. 5.3.2.4. Processing the compliance scan requirements with the compliance scan settings When you have defined the compliance scan requirements and configured the settings to run the scans, the Compliance Operator processes them by using the ScanSettingBinding object. 5.3.2.4.1. 
ScanSettingBinding object Use the ScanSettingBinding object to specify your compliance requirements with reference to the Profile or TailoredProfile object. It is then linked to a ScanSetting object, which provides the operational constraints for the scan. Then the Compliance Operator generates the ComplianceSuite object based on the ScanSetting and ScanSettingBinding objects. Example ScanSettingBinding object apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 1 Specify the details of the Profile or TailoredProfile object to scan your environment. 2 Specify the operational constraints, such as schedule and storage size. The creation of ScanSetting and ScanSettingBinding objects results in the compliance suite. To get the list of compliance suites, run the following command: USD oc get compliancesuites Important If you delete the ScanSettingBinding object, then the compliance suite is also deleted. 5.3.2.5. Tracking the compliance scans After the compliance suite is created, you can monitor the status of the deployed scans using the ComplianceSuite object. 5.3.2.5.1. ComplianceSuite object The ComplianceSuite object helps you keep track of the state of the scans. It contains the raw settings to create scans and the overall result. For Node type scans, you should map the scan to the MachineConfigPool , since it contains the remediations for any issues. If you specify a label, ensure it directly applies to a pool. Example ComplianceSuite object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name of the scan> spec: autoApplyRemediations: false 1 schedule: "0 1 * * *" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" nodeSelector: node-role.kubernetes.io/worker: "" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT 1 Set to true to enable auto remediations. Set to false to disable auto remediations. 2 Specify how often the scan should be run in cron format. 3 Specify a list of scan specifications to run in the cluster. 4 Indicates the progress of the scans. 5 Indicates the overall verdict of the suite. The suite in the background creates the ComplianceScan object based on the scans parameter. You can programmatically fetch the ComplianceSuite events. To get the events for the suite, run the following command: USD oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite> Important You might introduce errors if you manually define the ComplianceSuite object, because it contains XCCDF attributes. 5.3.2.5.2. Advanced ComplianceScan Object The Compliance Operator includes options for advanced users for debugging or integrating with existing tooling. It is recommended that you not create a ComplianceScan object directly, but instead manage it by using a ComplianceSuite object. 
Example Advanced ComplianceScan object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name of the scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 3 rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" 4 nodeSelector: 5 node-role.kubernetes.io/worker: "" status: phase: DONE 6 result: NON-COMPLIANT 7 1 Specify either Node or Platform . Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform. 2 Specify the XCCDF identifier of the profile that you want to run. 3 Specify the container image that encapsulates the profile files. 4 Optional: Specify a single rule for the scan to run. This rule has to be identified with the XCCDF ID, and has to belong to the specified profile. Note If you skip the rule parameter, then the scan runs for all the available rules of the specified profile. 5 If you are on OpenShift Container Platform and want to generate a remediation, then the nodeSelector label has to match the MachineConfigPool label. Note If you do not specify the nodeSelector parameter or it does not match the MachineConfig label, the scan still runs, but it does not create a remediation. 6 Indicates the current phase of the scan. 7 Indicates the verdict of the scan. Important If you delete a ComplianceSuite object, then all the associated scans get deleted. When the scan is complete, it generates the result as Custom Resources of the ComplianceCheckResult object. However, the raw results are available in ARF format. These results are stored in a Persistent Volume (PV), which has a Persistent Volume Claim (PVC) associated with the name of the scan. You can programmatically fetch the ComplianceScan events. To get the events for the scan, run the following command: oc get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name of the scan> 5.3.2.6. Viewing the compliance results When the compliance suite reaches the DONE phase, you can view the scan results and possible remediations. 5.3.2.6.1. ComplianceCheckResult object When you run a scan with a specific profile, several rules in the profiles are verified. For each of these rules, a ComplianceCheckResult object is created, which provides the state of the cluster for a specific rule. Example ComplianceCheckResult object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2 1 Describes the severity of the scan check. 2 Describes the result of the check. The possible values are: PASS: check was successful. FAIL: check was unsuccessful. INFO: check was successful and found something not severe enough to be considered an error. MANUAL: check cannot automatically assess the status and manual check is required. INCONSISTENT: different nodes report different results. 
ERROR: check run successfully, but could not complete. NOTAPPLICABLE: check did not run as it is not applicable. To get all the check results from a suite, run the following command: oc get compliancecheckresults \ -l compliance.openshift.io/suite=workers-compliancesuite 5.3.2.6.2. ComplianceRemediation object For a specific check you can have a datastream specified fix. However, if a Kubernetes fix is available, then the Compliance Operator creates a ComplianceRemediation object. Example ComplianceRemediation object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3 1 true indicates the remediation was applied. false indicates the remediation was not applied. 2 Includes the definition of the remediation. 3 Indicates remediation that was previously parsed from an earlier version of the content. The Compliance Operator still retains the outdated objects to give the administrator a chance to review the new remediations before applying them. To get all the remediations from a suite, run the following command: oc get complianceremediations \ -l compliance.openshift.io/suite=workers-compliancesuite To list all failing checks that can be remediated automatically, run the following command: oc get compliancecheckresults \ -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation' To list all failing checks that can be remediated manually, run the following command: oc get compliancecheckresults \ -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation' 5.4. Compliance Operator management 5.4.1. Installing the Compliance Operator Before you can use the Compliance Operator, you must ensure it is deployed in the cluster. Important The Compliance Operator might report incorrect results on managed platforms, such as OpenShift Dedicated, Red Hat OpenShift Service on AWS, and Microsoft Azure Red Hat OpenShift. For more information, see the Red Hat Knowledgebase Solution #6983418 . 5.4.1.1. Installing the Compliance Operator through the web console Prerequisites You must have admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Compliance Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-compliance namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Compliance Operator is installed in the openshift-compliance namespace and its status is Succeeded . 
If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-compliance project that are reporting issues. Important If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or has added requiredDropCapabilities , the Compliance Operator may not function properly due to permissions issues. You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator . 5.4.1.2. Installing the Compliance Operator using the CLI Prerequisites You must have admin privileges. Procedure Define a Namespace object: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" name: openshift-compliance Create the Namespace object: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription object: USD oc create -f subscription-object.yaml Note If you are setting the global scheduler feature and enable defaultNodeSelector , you must create the namespace manually and update the annotations of the openshift-compliance namespace, or the namespace where the Compliance Operator was installed, with openshift.io/node-selector: "" . This removes the default node selector and prevents deployment failures. Verification Verify the installation succeeded by inspecting the CSV file: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running: USD oc get deploy -n openshift-compliance Important If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or has added requiredDropCapabilities , the Compliance Operator may not function properly due to permissions issues. You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator . 5.4.1.3. Installing the Compliance Operator on Hosted control planes The Compliance Operator can be installed in Hosted control planes using the OperatorHub by creating a Subscription file. Important Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You must have admin privileges. 
Procedure Define a Namespace object similar to the following: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.11, the pod security label must be set to privileged at the namespace level. Create the Namespace object by running the following command: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object by running the following command: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: "" env: - name: PLATFORM value: "HyperShift" Create the Subscription object by running the following command: USD oc create -f subscription-object.yaml Verification Verify the installation succeeded by inspecting the CSV file by running the following command: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running by running the following command: USD oc get deploy -n openshift-compliance 5.4.1.4. Additional resources The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . Overview of hosted control planes (Technology Preview) 5.4.2. Updating the Compliance Operator As a cluster administrator, you can update the Compliance Operator on your OpenShift Container Platform cluster. Important It is recommended to update the Compliance Operator to version 1.3.1 or later before updating your OpenShift Container Platform cluster to version 4.14 or later. 5.4.2.1. Preparing for an Operator update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel. The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note You cannot change installed Operators to a channel that is older than the current channel. Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators: Red Hat OpenShift Container Platform Operator Update Information Checker You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included. 5.4.2.2. Changing the update channel for an Operator You can change the update channel for an Operator by using the OpenShift Container Platform web console. 
Tip If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. Click the name of the update channel under Channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 5.4.2.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any update requiring approval are displayed to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 5.4.3. Managing the Compliance Operator This section describes the lifecycle of security content, including how to use an updated version of compliance content and how to create a custom ProfileBundle object. 5.4.3.1. ProfileBundle CR example The ProfileBundle object requires two pieces of information: the URL of a container image that contains the contentImage and the file that contains the compliance content. The contentFile parameter is relative to the root of the file system. You can define the built-in rhcos4 ProfileBundle object as shown in the following example: apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: "2022-10-19T12:06:30Z" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: "46741" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 2 status: conditions: - lastTransitionTime: "2022-10-19T12:07:51Z" message: Profile bundle successfully parsed reason: Valid status: "True" type: Ready dataStreamStatus: VALID 1 Location of the file containing the compliance content. 2 Content image location. Important The base image used for the content images must include coreutils . 5.4.3.2. 
Updating security content Security content is included as container images that the ProfileBundle objects refer to. To accurately track updates to ProfileBundles and the custom resources parsed from the bundles such as rules or profiles, identify the container image with the compliance content using a digest instead of a tag: USD oc -n openshift-compliance get profilebundles rhcos4 -oyaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: "2022-10-19T12:06:30Z" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: "46741" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: "2022-10-19T12:07:51Z" message: Profile bundle successfully parsed reason: Valid status: "True" type: Ready dataStreamStatus: VALID 1 Security container image. Each ProfileBundle is backed by a deployment. When the Compliance Operator detects that the container image digest has changed, the deployment is updated to reflect the change and parse the content again. Using the digest instead of a tag ensures that you use a stable and predictable set of profiles. 5.4.3.3. Additional resources The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 5.4.4. Uninstalling the Compliance Operator You can remove the OpenShift Compliance Operator from your cluster by using the OpenShift Container Platform web console or the CLI. 5.4.4.1. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the web console To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift Compliance Operator must be installed. Procedure To remove the Compliance Operator by using the OpenShift Container Platform web console: Go to the Operators Installed Operators Compliance Operator page. Click All instances . In All namespaces , click the Options menu and delete all ScanSettingBinding, ComplainceSuite, ComplianceScan, and ProfileBundle objects. Switch to the Administration Operators Installed Operators page. Click the Options menu on the Compliance Operator entry and select Uninstall Operator . Switch to the Home Projects page. Search for 'compliance'. Click the Options menu to the openshift-compliance project, and select Delete Project . Confirm the deletion by typing openshift-compliance in the dialog box, and click Delete . 5.4.4.2. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the CLI To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift Compliance Operator must be installed. Procedure Delete all objects in the namespace. 
Delete the ScanSettingBinding objects: USD oc delete ssb --all -n openshift-compliance Delete the ScanSetting objects: USD oc delete ss --all -n openshift-compliance Delete the ComplianceSuite objects: USD oc delete suite --all -n openshift-compliance Delete the ComplianceScan objects: USD oc delete scan --all -n openshift-compliance Delete the ProfileBundle objects: USD oc delete profilebundle.compliance --all -n openshift-compliance Delete the Subscription object: USD oc delete sub --all -n openshift-compliance Delete the CSV object: USD oc delete csv --all -n openshift-compliance Delete the project: USD oc delete project openshift-compliance Example output project.project.openshift.io "openshift-compliance" deleted Verification Confirm the namespace is deleted: USD oc get project/openshift-compliance Example output Error from server (NotFound): namespaces "openshift-compliance" not found 5.5. Compliance Operator scan management 5.5.1. Supported compliance profiles There are several profiles available as part of the Compliance Operator (CO) installation. While you can use the following profiles to assess gaps in a cluster, usage alone does not infer or guarantee compliance with a particular profile. Important The Compliance Operator might report incorrect results on managed platforms, such as OpenShift Dedicated, Red Hat OpenShift Service on AWS, and Azure Red Hat OpenShift. For more information, see the Red Hat Knowledgebase Solution #6983418 . 5.5.1.1. Compliance profiles The Compliance Operator provides the following compliance profiles: Table 5.1. Supported compliance profiles Profile Profile title Application Compliance Operator version Industry compliance benchmark Supported architectures rhcos4-stig Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Node 1.3.0+ DISA-STIG [1] x86_64 ocp4-stig-node Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Node 1.3.0+ DISA-STIG [1] x86_64 ocp4-stig Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Platform 1.3.0+ DISA-STIG [1] x86_64 ocp4-cis CIS Red Hat OpenShift Container Platform 4 Benchmark v1.4.0 Platform 1.2.0+ CIS Benchmarks TM [1] x86_64 ppc64le s390x ocp4-cis-node CIS Red Hat OpenShift Container Platform 4 Benchmark v1.4.0 Node [2] 1.2.0+ CIS Benchmarks TM [1] x86_64 ppc64le s390x ocp4-e8 Australian Cyber Security Centre (ACSC) Essential Eight Platform 0.1.39+ ACSC Hardening Linux Workstations and Servers x86_64 ocp4-moderate NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level Platform 0.1.39+ NIST SP-800-53 Release Search x86_64 ppc64le s390x rhcos4-e8 Australian Cyber Security Centre (ACSC) Essential Eight Node 0.1.39+ ACSC Hardening Linux Workstations and Servers x86_64 rhcos4-moderate NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS Node 0.1.39+ NIST SP-800-53 Release Search x86_64 ocp4-moderate-node NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level Node [2] 0.1.44+ NIST SP-800-53 Release Search x86_64 ppc64le s390x ocp4-nerc-cip North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the Red Hat OpenShift Container Platform - Platform level Platform 0.1.44+ NERC CIP Standards x86_64 ocp4-nerc-cip-node North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity 
standards profile for the Red Hat OpenShift Container Platform - Node level Node [2] 0.1.44+ NERC CIP Standards x86_64 rhcos4-nerc-cip North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for Red Hat Enterprise Linux CoreOS Node 0.1.44+ NERC CIP Standards x86_64 ocp4-pci-dss PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4 Platform 0.1.47+ PCI Security Standards(R) Council Document Library x86_64 ppc64le ocp4-pci-dss-node PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4 Node [2] 0.1.47+ PCI Security Standards(R) Council Document Library x86_64 ppc64le ocp4-high NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level Platform 0.1.52+ NIST SP-800-53 Release Search x86_64 ocp4-high-node NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level Node [2] 0.1.52+ NIST SP-800-53 Release Search x86_64 rhcos4-high NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS Node 0.1.52+ NIST SP-800-53 Release Search x86_64 To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark , where you can then register to download the benchmark. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.5.1.1.1. About extended compliance profiles Some compliance profiles have controls that require following industry best practices, resulting in some profiles extending others. Combining the Center for Internet Security (CIS) best practices with National Institute of Standards and Technology (NIST) security frameworks establishes a path to a secure and compliant environment. For example, the NIST High-Impact and Moderate-Impact profiles extend the CIS profile to achieve compliance. As a result, extended compliance profiles eliminate the need to run both profiles in a single cluster. Table 5.2. Profile extensions Profile Extends ocp4-pci-dss ocp4-cis ocp4-pci-dss-node ocp4-cis-node ocp4-high ocp4-cis ocp4-high-node ocp4-cis-node ocp4-moderate ocp4-cis ocp4-moderate-node ocp4-cis-node ocp4-nerc-cip ocp4-moderate ocp4-nerc-cip-node ocp4-moderate-node 5.5.1.2. Additional resources Compliance Operator profile types 5.5.2. Compliance Operator scans The ScanSetting and ScanSettingBinding APIs are recommended to run compliance scans with the Compliance Operator. For more information on these API objects, run: USD oc explain scansettings or USD oc explain scansettingbindings 5.5.2.1. Running compliance scans You can run a scan using the Center for Internet Security (CIS) profiles. For convenience, the Compliance Operator creates a ScanSetting object with reasonable defaults on startup. This ScanSetting object is named default . Note For all-in-one control plane and worker nodes, the compliance scan runs twice on the worker and control plane nodes. The compliance scan might generate inconsistent scan results. You can avoid inconsistent results by defining only a single role in the ScanSetting object. 
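For example, on a compact cluster where the same nodes carry both roles, a scan setting restricted to a single role might look like the following sketch. The object name is illustrative, and the storage and schedule values simply mirror the defaults described below:

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: single-role-setting        # illustrative name
  namespace: openshift-compliance
rawResultStorage:
  pvAccessModes:
    - ReadWriteOnce
  rotation: 3
  size: 1Gi
roles:
  - master                         # only one role, so each node is scanned once
scanTolerations:
  - operator: Exists
schedule: 0 1 * * *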
Procedure Inspect the ScanSetting object by running: USD oc describe scansettings default -n openshift-compliance Example output Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-10T14:07:29Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-10T14:07:29Z Resource Version: 56111 UID: c21d1d14-3472-47d7-a450-b924287aec90 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 4 worker 5 Scan Tolerations: 6 Operator: Exists Schedule: 0 1 * * * 7 Show Not Applicable: false Strict Node Scan: true Events: <none> 1 The Compliance Operator creates a persistent volume (PV) that contains the results of the scans. By default, the PV will use access mode ReadWriteOnce because the Compliance Operator cannot make any assumptions about the storage classes configured on the cluster. Additionally, ReadWriteOnce access mode is available on most clusters. If you need to fetch the scan results, you can do so by using a helper pod, which also binds the volume. Volumes that use the ReadWriteOnce access mode can be mounted by only one pod at time, so it is important to remember to delete the helper pods. Otherwise, the Compliance Operator will not be able to reuse the volume for subsequent scans. 2 The Compliance Operator keeps results of three subsequent scans in the volume; older scans are rotated. 3 The Compliance Operator will allocate one GB of storage for the scan results. 4 5 If the scan setting uses any profiles that scan cluster nodes, scan these node roles. 6 The default scan setting object scans all the nodes. 7 The default scan setting object runs scans at 01:00 each day. 
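The helper pod mentioned in the first callout can be as simple as the following sketch. The pod name, image, and claim name are assumptions: the claim name must match the PVC that the Compliance Operator created for your scan, and any image that provides a shell works:

apiVersion: v1
kind: Pod
metadata:
  name: pv-extract                               # illustrative name
  namespace: openshift-compliance
spec:
  containers:
    - name: pv-extract-pod
      image: registry.access.redhat.com/ubi8/ubi # any image with a shell works
      command: ["sleep", "3000"]
      volumeMounts:
        - name: workers-scan-vol
          mountPath: /workers-scan-results       # raw ARF results are visible here
  volumes:
    - name: workers-scan-vol
      persistentVolumeClaim:
        claimName: workers-scan                  # PVC created by the Operator, named after the scan

After you copy the results out, for example with oc cp pv-extract:/workers-scan-results ./results -n openshift-compliance, delete the helper pod so that the volume is released for subsequent scans.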
As an alternative to the default scan setting, you can use default-auto-apply , which has the following settings: Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none> 1 2 Setting autoUpdateRemediations and autoApplyRemediations flags to true allows you to easily create ScanSetting objects that auto-remediate without extra steps. Create a ScanSettingBinding object that binds to the default ScanSetting object and scans the cluster using the cis and cis-node profiles. For example: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 Create the ScanSettingBinding object by running: USD oc create -f <file-name>.yaml -n openshift-compliance At this point in the process, the ScanSettingBinding object is reconciled and based on the Binding and the Bound settings. The Compliance Operator creates a ComplianceSuite object and the associated ComplianceScan objects. Follow the compliance scan progress by running: USD oc get compliancescan -w -n openshift-compliance The scans progress through the scanning phases and eventually reach the DONE phase when complete. In most cases, the result of the scan is NON-COMPLIANT . You can review the scan results and start applying remediations to make the cluster compliant. See Managing Compliance Operator remediation for more information. 5.5.2.2. Scheduling the result server pod on a worker node The result server pod mounts the persistent volume (PV) that stores the raw Asset Reporting Format (ARF) scan results. The nodeSelector and tolerations attributes enable you to configure the location of the result server pod. This is helpful for those environments where control plane nodes are not permitted to mount persistent volumes. 
Procedure Create a ScanSetting custom resource (CR) for the Compliance Operator: Define the ScanSetting CR, and save the YAML file, for example, rs-workers.yaml : apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * 1 The Compliance Operator uses this node to store scan results in ARF format. 2 The result server pod tolerates all taints. To create the ScanSetting CR, run the following command: USD oc create -f rs-workers.yaml Verification To verify that the ScanSetting object is created, run the following command: USD oc get scansettings rs-on-workers -n openshift-compliance -o yaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: "2021-11-19T19:36:36Z" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: "48305" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true 5.5.2.3. ScanSetting Custom Resource The ScanSetting Custom Resource now allows you to override the default CPU and memory limits of scanner pods through the scan limits attribute. The Compliance Operator will use defaults of 500Mi memory, 100m CPU for the scanner container, and 200Mi memory with 100m CPU for the api-resource-collector container. To set the memory limits of the Operator, modify the Subscription object if installed through OLM or the Operator deployment itself. To increase the default CPU and memory limits of the Compliance Operator, see Increasing Compliance Operator resource limits . Important Increasing the memory limit for the Compliance Operator or the scanner pods is needed if the default limits are not sufficient and the Operator or scanner pods are ended by the Out Of Memory (OOM) process. 5.5.2.4. Configuring the Hosted control planes management cluster If you are hosting your own Hosted control plane or Hypershift environment and want to scan a Hosted Cluster from the management cluster, you will need to set the name and prefix namespace for the target Hosted Cluster. You can achieve this by creating a TailoredProfile . Important This procedure only applies to users managing their own Hosted control planes environment. Note Only ocp4-cis and ocp4-pci-dss profiles are supported in Hosted control planes management clusters. Prerequisites The Compliance Operator is installed in the management cluster. 
Procedure Obtain the name and namespace of the hosted cluster to be scanned by running the following command: USD oc get hostedcluster -A Example output NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available In the management cluster, create a TailoredProfile extending the scan Profile and define the name and namespace of the Hosted Cluster to be scanned: Example management-tailoredprofile.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3 1 Variable. Only ocp4-cis and ocp4-pci-dss profiles are supported in Hosted control planes management clusters. 2 The value is the NAME from the output in the previous step. 3 The value is the NAMESPACE from the output in the previous step. Create the TailoredProfile : USD oc create -n openshift-compliance -f management-tailoredprofile.yaml 5.5.2.5. Applying resource requests and limits When the kubelet starts a container as part of a Pod, the kubelet passes that container's requests and limits for memory and CPU to the container runtime. In Linux, the container runtime configures the kernel cgroups that apply and enforce the limits you defined. The CPU limit defines how much CPU time the container can use. During each scheduling interval, the Linux kernel checks to see if this limit is exceeded. If so, the kernel waits before allowing the cgroup to resume execution. If several different containers (cgroups) want to run on a contended system, workloads with larger CPU requests are allocated more CPU time than workloads with small requests. The memory request is used during Pod scheduling. On a node that uses cgroups v2, the container runtime might use the memory request as a hint to set memory.min and memory.low values. If a container attempts to allocate more memory than its memory limit, the Linux kernel out-of-memory subsystem activates and intervenes by stopping one of the processes in the container that tried to allocate memory. The memory limit for the Pod or container can also apply to pages in memory-backed volumes, such as an emptyDir. The kubelet tracks tmpfs emptyDir volumes as container memory use, rather than as local ephemeral storage. If a container exceeds its memory request and the node that it runs on becomes short of memory overall, the Pod's container might be evicted. Important A container may not exceed its CPU limit for extended periods. Container runtimes do not stop Pods or containers for excessive CPU usage. To determine whether a container cannot be scheduled or is being killed due to resource limits, see Troubleshooting the Compliance Operator . 5.5.2.6. Scheduling Pods with container resource requests When a Pod is created, the scheduler selects a Node for the Pod to run on. Each node has a maximum capacity for each resource type in the amount of CPU and memory it can provide for the Pods. The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node for each resource type.
Although memory or CPU resource usage on nodes is very low, the scheduler might still refuse to place a Pod on a node if the capacity check fails to protect against a resource shortage on a node. For each container, you can specify the following resource limits and request: spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size> Although you can specify requests and limits for only individual containers, it is also useful to consider the overall resource requests and limits for a pod. For a particular resource, a container resource request or limit is the sum of the resource requests or limits of that type for each container in the pod. Example container resource requests and limits apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: "64Mi" cpu: "250m" limits: 2 memory: "128Mi" cpu: "500m" - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" 1 The container is requesting 64 Mi of memory and 250 m CPU. 2 The container's limits are 128 Mi of memory and 500 m CPU. 5.5.3. Tailoring the Compliance Operator While the Compliance Operator comes with ready-to-use profiles, they must be modified to fit the organizations' needs and requirements. The process of modifying a profile is called tailoring . The Compliance Operator provides the TailoredProfile object to help tailor profiles. 5.5.3.1. Creating a new tailored profile You can write a tailored profile from scratch by using the TailoredProfile object. Set an appropriate title and description and leave the extends field empty. Indicate to the Compliance Operator what type of scan this custom profile will generate: Node scan: Scans the Operating System. Platform scan: Scans the OpenShift Container Platform configuration. Procedure Set the following annotation on the TailoredProfile object: Example new-profile.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster 1 Set Node or Platform accordingly. 2 The extends field is optional. 3 Use the description field to describe the function of the new TailoredProfile object. 4 Give your TailoredProfile object a title with the title field. Note Adding the -node suffix to the name field of the TailoredProfile object is similar to adding the Node product type annotation and generates an Operating System scan. 5.5.3.2. Using tailored profiles to extend existing ProfileBundles While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenScap previously, you may have an existing XCCDF tailoring file and can reuse it. The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. 
The value of the TailoringConfigMap attribute is a name of a config map, which must contain a key called tailoring.xml and the value of this key is the tailoring contents. Procedure Browse the available rules for the Red Hat Enterprise Linux CoreOS (RHCOS) ProfileBundle : USD oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4 Browse the available variables in the same ProfileBundle : USD oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4 Create a tailored profile named nist-moderate-modified : Choose which rules you want to add to the nist-moderate-modified tailored profile. This example extends the rhcos4-moderate profile by disabling two rules and changing one value. Use the rationale value to describe why these changes were made: Example new-profile-node.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive Table 5.3. Attributes for spec variables Attribute Description extends Name of the Profile object upon which this TailoredProfile is built. title Human-readable title of the TailoredProfile . disableRules A list of name and rationale pairs. Each name refers to a name of a rule object that is to be disabled. The rationale value is human-readable text describing why the rule is disabled. manualRules A list of name and rationale pairs. When a manual rule is added, the check result status will always be manual and remediation will not be generated. This attribute is automatic and by default has no values when set as a manual rule. enableRules A list of name and rationale pairs. Each name refers to a name of a rule object that is to be enabled. The rationale value is human-readable text describing why the rule is enabled. description Human-readable text describing the TailoredProfile . setValues A list of name, rationale, and value groupings. Each name refers to a name of the value set. The rationale is human-readable text describing the set. The value is the actual setting. Add the tailoredProfile.spec.manualRules attribute: Example tailoredProfile.spec.manualRules.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges Create the TailoredProfile object: USD oc create -n openshift-compliance -f new-profile-node.yaml 1 1 The TailoredProfile object is created in the default openshift-compliance namespace. Example output tailoredprofile.compliance.openshift.io/nist-moderate-modified created Define the ScanSettingBinding object to bind the new nist-moderate-modified tailored profile to the default ScanSetting object. 
Example new-scansettingbinding.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default Create the ScanSettingBinding object: USD oc create -n openshift-compliance -f new-scansettingbinding.yaml Example output scansettingbinding.compliance.openshift.io/nist-moderate-modified created 5.5.4. Retrieving Compliance Operator raw results When proving compliance for your OpenShift Container Platform cluster, you might need to provide the scan results for auditing purposes. 5.5.4.1. Obtaining Compliance Operator raw results from a persistent volume Procedure The Compliance Operator generates and stores the raw results in a persistent volume. These results are in Asset Reporting Format (ARF). Explore the ComplianceSuite object: USD oc get compliancesuites nist-moderate-modified \ -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage' Example output { "name": "ocp4-moderate", "namespace": "openshift-compliance" } { "name": "nist-moderate-modified-master", "namespace": "openshift-compliance" } { "name": "nist-moderate-modified-worker", "namespace": "openshift-compliance" } This shows the persistent volume claims where the raw results are accessible. Verify the raw data location by using the name and namespace of one of the results: USD oc get pvc -n openshift-compliance rhcos4-moderate-worker Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m Fetch the raw results by spawning a pod that mounts the volume and copying the results: USD oc create -n openshift-compliance -f pod.yaml Example pod.yaml apiVersion: "v1" kind: Pod metadata: name: pv-extract spec: containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi8/ubi command: ["sleep", "3000"] volumeMounts: - mountPath: "/workers-scan-results" name: workers-scan-vol volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker After the pod is running, download the results: USD oc cp pv-extract:/workers-scan-results -n openshift-compliance . Important Spawning a pod that mounts the persistent volume will keep the claim as Bound . If the volume's storage class in use has permissions set to ReadWriteOnce , the volume is only mountable by one pod at a time. You must delete the pod upon completion, or it will not be possible for the Operator to schedule a pod and continue storing results in this location. After the extraction is complete, the pod can be deleted: USD oc delete pod pv-extract -n openshift-compliance 5.5.5. Managing Compliance Operator result and remediation Each ComplianceCheckResult represents a result of one compliance rule check. If the rule can be remediated automatically, a ComplianceRemediation object with the same name, owned by the ComplianceCheckResult is created. Unless requested, the remediations are not applied automatically, which gives an OpenShift Container Platform administrator the opportunity to review what the remediation does and only apply a remediation once it has been verified. 5.5.5.1. 
Filters for compliance check results By default, the ComplianceCheckResult objects are labeled with several useful labels that allow you to query the checks and decide on the steps after the results are generated. List checks that belong to a specific suite: USD oc get -n openshift-compliance compliancecheckresults \ -l compliance.openshift.io/suite=workers-compliancesuite List checks that belong to a specific scan: USD oc get -n openshift-compliance compliancecheckresults \ -l compliance.openshift.io/scan=workers-scan Not all ComplianceCheckResult objects create ComplianceRemediation objects. Only ComplianceCheckResult objects that can be remediated automatically do. A ComplianceCheckResult object has a related remediation if it is labeled with the compliance.openshift.io/automated-remediation label. The name of the remediation is the same as the name of the check. List all failing checks that can be remediated automatically: USD oc get -n openshift-compliance compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation' List all failing checks sorted by severity: USD oc get compliancecheckresults -n openshift-compliance \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high' Example output NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high List all failing checks that must be remediated manually: USD oc get -n openshift-compliance compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation' The manual remediation steps are typically stored in the description attribute in the ComplianceCheckResult object. Table 5.4. ComplianceCheckResult Status ComplianceCheckResult Status Description PASS Compliance check ran to completion and passed. FAIL Compliance check ran to completion and failed. INFO Compliance check ran to completion and found something not severe enough to be considered an error. MANUAL Compliance check does not have a way to automatically assess the success or failure and must be checked manually. INCONSISTENT Compliance check reports different results from different sources, typically cluster nodes. ERROR Compliance check ran, but could not complete properly. NOT-APPLICABLE Compliance check did not run because it is not applicable or not selected. 5.5.5.2. Reviewing a remediation Review both the ComplianceRemediation object and the ComplianceCheckResult object that owns the remediation. 
The ComplianceCheckResult object contains human-readable descriptions of what the check does and the hardening trying to prevent, as well as other metadata like the severity and the associated security controls. The ComplianceRemediation object represents a way to fix the problem described in the ComplianceCheckResult . After first scan, check for remediations with the state MissingDependencies . Below is an example of a check and a remediation called sysctl-net-ipv4-conf-all-accept-redirects . This example is redacted to only show spec and status and omits metadata : spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied The remediation payload is stored in the spec.current attribute. The payload can be any Kubernetes object, but because this remediation was produced by a node scan, the remediation payload in the above example is a MachineConfig object. For Platform scans, the remediation payload is often a different kind of an object (for example, a ConfigMap or Secret object), but typically applying that remediation is up to the administrator, because otherwise the Compliance Operator would have required a very broad set of permissions to manipulate any generic Kubernetes object. An example of remediating a Platform check is provided later in the text. To see exactly what the remediation does when applied, the MachineConfig object contents use the Ignition objects for the configuration. See the Ignition specification for further information about the format. In our example, the spec.config.storage.files[0].path attribute specifies the file that is being create by this remediation ( /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf ) and the spec.config.storage.files[0].contents.source attribute specifies the contents of that file. Note The contents of the files are URL-encoded. Use the following Python script to view the contents: USD echo "net.ipv4.conf.all.accept_redirects%3D0" | python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))" Example output net.ipv4.conf.all.accept_redirects=0 Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.5.5.3. Applying remediation when using customized machine config pools When you create a custom MachineConfigPool , add a label to the MachineConfigPool so that machineConfigPoolSelector present in the KubeletConfig can match the label with MachineConfigPool . Important Do not set protectKernelDefaults: false in the KubeletConfig file, because the MachineConfigPool object might fail to unpause unexpectedly after the Compliance Operator finishes applying remediation. Procedure List the nodes. USD oc get nodes -n openshift-compliance Example output NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.24.0 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.24.0 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.24.0 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.24.0 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.24.0 Add a label to nodes. 
USD oc -n openshift-compliance \ label node ip-10-0-166-81.us-east-2.compute.internal \ node-role.kubernetes.io/<machine_config_pool_name>= Example output node/ip-10-0-166-81.us-east-2.compute.internal labeled Create custom MachineConfigPool CR. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: "" 1 The labels field defines label name to add for Machine config pool(MCP). Verify MCP created successfully. USD oc get mcp -w 5.5.5.4. Evaluating KubeletConfig rules against default configuration values OpenShift Container Platform infrastructure might contain incomplete configuration files at run time, and nodes assume default configuration values for missing configuration options. Some configuration options can be passed as command line arguments. As a result, the Compliance Operator cannot verify if the configuration file on the node is complete because it might be missing options used in the rule checks. To prevent false negative results where the default configuration value passes a check, the Compliance Operator uses the Node/Proxy API to fetch the configuration for each node in a node pool, then all configuration options that are consistent across nodes in the node pool are stored in a file that represents the configuration for all nodes within that node pool. This increases the accuracy of the scan results. No additional configuration changes are required to use this feature with default master and worker node pools configurations. 5.5.5.5. Scanning custom node pools The Compliance Operator does not maintain a copy of each node pool configuration. The Compliance Operator aggregates consistent configuration options for all nodes within a single node pool into one copy of the configuration file. The Compliance Operator then uses the configuration file for a particular node pool to evaluate rules against nodes within that pool. Procedure Add the example role to the ScanSetting object that will be stored in the ScanSettingBinding CR: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' Create a scan that uses the ScanSettingBinding CR: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default Verification The Platform KubeletConfig rules are checked through the Node/Proxy object. You can find those rules by running the following command: USD oc get rules -o json | jq '.items[] | select(.checkType == "Platform") | select(.metadata.name | contains("ocp4-kubelet-")) | .metadata.name' 5.5.5.6. Remediating KubeletConfig sub pools KubeletConfig remediation labels can be applied to MachineConfigPool sub-pools. 
Procedure Add a label to the sub-pool MachineConfigPool CR: USD oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>= 5.5.5.7. Applying a remediation The boolean attribute spec.apply controls whether the remediation should be applied by the Compliance Operator. You can apply the remediation by setting the attribute to true : USD oc -n openshift-compliance \ patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects \ --patch '{"spec":{"apply":true}}' --type=merge After the Compliance Operator processes the applied remediation, the status.ApplicationState attribute would change to Applied or to Error if incorrect. When a machine config remediation is applied, that remediation along with all other applied remediations are rendered into a MachineConfig object named 75-USDscan-name-USDsuite-name . That MachineConfig object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine control daemon running on each node. Note that when the Machine Config Operator applies a new MachineConfig object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite 75-USDscan-name-USDsuite-name MachineConfig object. To prevent applying the remediation immediately, you can pause the machine config pool by setting the .spec.paused attribute of a MachineConfigPool object to true . Note Make sure the pools are unpaused when the CA certificate rotation happens. If the MCPs are paused, the MCO cannot push the newly rotated certificates to those nodes. This causes the cluster to become degraded and causes failure in multiple oc commands, including oc debug , oc logs , oc exec , and oc attach . You receive alerts in the Alerting UI of the OpenShift Container Platform web console if an MCP is paused when the certificates are rotated. The Compliance Operator can apply remediations automatically. Set autoApplyRemediations: true in the ScanSetting top-level object. Warning Applying remediations automatically should only be done with careful consideration. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.5.5.8. Remediating a platform check manually Checks for Platform scans typically have to be remediated manually by the administrator for two reasons: It is not always possible to automatically determine the value that must be set. One of the checks requires that a list of allowed registries is provided, but the scanner has no way of knowing which registries the organization wants to allow. Different checks modify different API objects, requiring automated remediation to possess root or superuser access to modify objects in the cluster, which is not advised. Procedure The example below uses the ocp4-ocp-allowed-registries-for-import rule, which would fail on a default OpenShift Container Platform installation. 
Inspect the rule by running oc get rule.compliance/ocp4-ocp-allowed-registries-for-import -oyaml . The rule limits the registries that users are allowed to import images from by setting the allowedRegistriesForImport attribute. The warning attribute of the rule also shows the API object that is checked, so you can modify it and remediate the issue: USD oc edit image.config.openshift.io/cluster Example output apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2020-09-10T10:12:54Z" generation: 2 name: cluster resourceVersion: "363096" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 Re-run the scan: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= 5.5.5.9. Updating remediations When a new version of compliance content is used, it might deliver a new and different version of a remediation than the previous version. The Compliance Operator will keep the old version of the remediation applied. The OpenShift Container Platform administrator is also notified of the new version to review and apply. A ComplianceRemediation object that had been applied earlier, but was then updated, changes its status to Outdated . The outdated objects are labeled so that they can be searched for easily. The previously applied remediation contents are then stored in the spec.outdated attribute of a ComplianceRemediation object and the new updated contents are stored in the spec.current attribute. After updating the content to a newer version, the administrator then needs to review the remediation. As long as the spec.outdated attribute exists, it is used to render the resulting MachineConfig object. After the spec.outdated attribute is removed, the Compliance Operator re-renders the resulting MachineConfig object, which causes the Operator to push the configuration to the nodes. Procedure Search for any outdated remediations: USD oc -n openshift-compliance get complianceremediations \ -l complianceoperator.openshift.io/outdated-remediation= Example output NAME STATE workers-scan-no-empty-passwords Outdated The currently applied remediation is stored in the Outdated attribute and the new, unapplied remediation is stored in the Current attribute. If you are satisfied with the new version, remove the Outdated field. If you want to keep the updated content, remove the Current and Outdated attributes. Apply the newer version of the remediation: USD oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords \ --type json -p '[{"op":"remove", "path":"/spec/outdated"}]' The remediation state will switch from Outdated to Applied : USD oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords Example output NAME STATE workers-scan-no-empty-passwords Applied The nodes will apply the newer remediation version and reboot. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.5.5.10. Unapplying a remediation It might be required to unapply a remediation that was previously applied.
Procedure Set the apply flag to false : USD oc -n openshift-compliance \ patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects \ --patch '{"spec":{"apply":false}}' --type=merge The remediation status will change to NotApplied and the composite MachineConfig object would be re-rendered to not include the remediation. Important All affected nodes with the remediation will be rebooted. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.5.5.11. Removing a KubeletConfig remediation KubeletConfig remediations are included in node-level profiles. In order to remove a KubeletConfig remediation, you must manually remove it from the KubeletConfig objects. This example demonstrates how to remove the compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation. Procedure Locate the scan-name and compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation: USD oc -n openshift-compliance get remediation \ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: "2022-01-05T19:52:27Z" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: "84820" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied 1 The scan name of the remediation. 2 The remediation that was added to the KubeletConfig objects. Note If the remediation invokes an evictionHard kubelet configuration, you must specify all of the evictionHard parameters: memory.available , nodefs.available , nodefs.inodesFree , imagefs.available , and imagefs.inodesFree . If you do not specify all parameters, only the specified parameters are applied and the remediation will not function properly. 
Remove the remediation: Set apply to false for the remediation object: USD oc -n openshift-compliance patch \ complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available \ -p '{"spec":{"apply":false}}' --type=merge Using the scan-name , find the KubeletConfig object that the remediation was applied to: USD oc -n openshift-compliance get kubeletconfig \ --selector compliance.openshift.io/scan-name=one-rule-tp-node-master Example output NAME AGE compliance-operator-kubelet-master 2m34s Manually remove the remediation, imagefs.available: 10% , from the KubeletConfig object: USD oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master Important All affected nodes with the remediation will be rebooted. Note You must also exclude the rule from any scheduled scans in your tailored profiles that auto-applies the remediation, otherwise, the remediation will be re-applied during the scheduled scan. 5.5.5.12. Inconsistent ComplianceScan The ScanSetting object lists the node roles that the compliance scans generated from the ScanSetting or ScanSettingBinding objects would scan. Each node role usually maps to a machine config pool. Important It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a pool should be identical. If some of the results are different from others, the Compliance Operator flags a ComplianceCheckResult object where some of the nodes will report as INCONSISTENT . All ComplianceCheckResult objects are also labeled with compliance.openshift.io/inconsistent-check . Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from the common state. The most common state is stored in the compliance.openshift.io/most-common-status annotation and the annotation compliance.openshift.io/inconsistent-source contains pairs of hostname:status of check statuses that differ from the most common status. If no common state can be found, all the hostname:status pairs are listed in the compliance.openshift.io/inconsistent-source annotation . If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. The compliance scan must be re-run to get a consistent result by annotating the scan with the compliance.openshift.io/rescan= option: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= 5.5.5.13. Additional resources Modifying nodes . 5.5.6. Performing advanced Compliance Operator tasks The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling. 5.5.6.1. Using the ComplianceSuite and ComplianceScan objects directly While it is recommended that users take advantage of the ScanSetting and ScanSettingBinding objects to define the suites and scans, there are valid use cases to define the ComplianceSuite objects directly: Specifying only a single rule to scan. This can be useful for debugging together with the debug: true attribute which increases the OpenSCAP scanner verbosity, as the debug mode tends to get quite verbose otherwise. Limiting the test to one rule helps to lower the amount of debug information. Providing a custom nodeSelector. In order for a remediation to be applicable, the nodeSelector must match a pool. 
Pointing the Scan to a bespoke config map with a tailoring file. For testing or development when the overhead of parsing profiles from bundles is not required. The following example shows a ComplianceSuite that scans the worker machines with only a single rule: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: "" The ComplianceSuite object and the ComplianceScan objects referred to above specify several attributes in a format that OpenSCAP expects. To find out the profile, content, or rule values, you can start by creating a similar Suite from ScanSetting and ScanSettingBinding or inspect the objects parsed from the ProfileBundle objects like rules or profiles. Those objects contain the xccdf_org identifiers you can use to refer to them from a ComplianceSuite . 5.5.6.2. Setting PriorityClass for ScanSetting scans In large scale environments, the default PriorityClass object can be too low to guarantee Pods execute scans on time. For clusters that must maintain compliance or guarantee automated scanning, it is recommended to set the PriorityClass variable to ensure the Compliance Operator is always given priority in resource constrained situations. Procedure Set the PriorityClass variable: apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists 1 If the PriorityClass referenced in the ScanSetting cannot be found, the Operator will leave the PriorityClass empty, issue a warning, and continue scheduling scans without a PriorityClass . 5.5.6.3. Using raw tailored profiles While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenScap previously, you may have an existing XCCDF tailoring file and can reuse it. The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map which must contain a key called tailoring.xml and the value of this key is the tailoring contents. 
Procedure Create the ConfigMap object from a file: USD oc -n openshift-compliance \ create configmap nist-moderate-modified \ --from-file=tailoring.xml=/path/to/the/tailoringFile.xml Reference the tailoring file in a scan that belongs to a suite: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: "" 5.5.6.4. Performing a rescan Typically you will want to re-run a scan on a defined schedule, like every Monday or daily. It can also be useful to re-run a scan once after fixing a problem on a node. To perform a single scan, annotate the scan with the compliance.openshift.io/rescan= option: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= A rescan generates four additional mc for rhcos-moderate profile: USD oc get mc Example output 75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub Important When the scan setting default-auto-apply label is applied, remediations are applied automatically and outdated remediations automatically update. If there are remediations that were not applied due to dependencies, or remediations that had been outdated, rescanning applies the remediations and might trigger a reboot. Only remediations that use MachineConfig objects trigger reboots. If there are no updates or dependencies to be applied, no reboot occurs. 5.5.6.5. Setting custom storage size for results While the custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV) which defaults to 1GB size. Depending on your environment, you may want to increase the PV size accordingly. This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources. A related parameter is rawResultStorage.rotation which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3, setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100MB per a raw ARF scan report, you can calculate the right PV size for your environment. 5.5.6.5.1. Using custom result storage values Because OpenShift Container Platform can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.StorageClassName attribute. Important If your cluster does not specify a default storage class, this attribute must be set. 
Configure the ScanSetting custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results: Example ScanSetting CR apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' 5.5.6.6. Applying remediations generated by suite scans Although you can use the autoApplyRemediations boolean parameter in a ComplianceSuite object, you can alternatively annotate the object with compliance.openshift.io/apply-remediations . This allows the Operator to apply all of the created remediations. Procedure Apply the compliance.openshift.io/apply-remediations annotation by running: USD oc -n openshift-compliance \ annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations= 5.5.6.7. Automatically update remediations In some cases, a scan with newer content might mark remediations as OUTDATED . As an administrator, you can apply the compliance.openshift.io/remove-outdated annotation to apply new remediations and remove the outdated ones. Procedure Apply the compliance.openshift.io/remove-outdated annotation: USD oc -n openshift-compliance \ annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated= Alternatively, set the autoUpdateRemediations flag in a ScanSetting or ComplianceSuite object to update the remediations automatically. 5.5.6.8. Creating a custom SCC for the Compliance Operator In some environments, you must create a custom Security Context Constraints (SCC) file to ensure the correct permissions are available to the Compliance Operator api-resource-collector . Prerequisites You must have admin privileges. Procedure Define the SCC in a YAML file named restricted-adjusted-compliance.yaml : SecurityContextConstraints object definition allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret 1 The priority of this SCC must be higher than any other SCC that applies to the system:authenticated group. 2 Service Account used by Compliance Operator Scanner pod. 
Create the SCC: USD oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml Example output securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created Verification Verify the SCC was created: USD oc get -n openshift-compliance scc restricted-adjusted-compliance Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] 5.5.6.9. Additional resources Managing security context constraints 5.5.7. Troubleshooting the Compliance Operator This section describes how to troubleshoot the Compliance Operator. The information can be useful either to diagnose a problem or provide information in a bug report. Some general tips: The Compliance Operator emits Kubernetes events when something important happens. You can either view all events in the cluster using the command: USD oc get events -n openshift-compliance Or view events for an object like a scan using the command: USD oc describe -n openshift-compliance compliancescan/cis-compliance The Compliance Operator consists of several controllers, approximately one per API object. It could be useful to filter only those controllers that correspond to the API object having issues. If a ComplianceRemediation cannot be applied, view the messages from the remediationctrl controller. You can filter the messages from a single controller by parsing with jq : USD oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f \ | jq -c 'select(.logger == "profilebundlectrl")' The timestamps are logged as seconds since UNIX epoch in UTC. To convert them to a human-readable date, use date -d @timestamp --utc , for example: USD date -d @1596184628.955853 --utc Many custom resources, most importantly ComplianceSuite and ScanSetting , allow the debug option to be set. Enabling this option increases verbosity of the OpenSCAP scanner pods, as well as some other helper pods. If a single rule is passing or failing unexpectedly, it could be helpful to run a single scan or a suite with only that rule to find the rule ID from the corresponding ComplianceCheckResult object and use it as the rule attribute value in a Scan CR. Then, together with the debug option enabled, the scanner container logs in the scanner pod would show the raw OpenSCAP logs. 5.5.7.1. Anatomy of a scan The following sections outline the components and stages of Compliance Operator scans. 5.5.7.1.1. Compliance sources The compliance content is stored in Profile objects that are generated from a ProfileBundle object. The Compliance Operator creates a ProfileBundle object for the cluster and another for the cluster nodes. USD oc get -n openshift-compliance profilebundle.compliance USD oc get -n openshift-compliance profile.compliance The ProfileBundle objects are processed by deployments labeled with the Bundle name. To troubleshoot an issue with the Bundle , you can find the deployment and view logs of the pods in a deployment: USD oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser USD oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4 USD oc logs -n openshift-compliance pods/<pod-name> USD oc describe -n openshift-compliance pod/<pod-name> -c profileparser 5.5.7.1.2. 
The ScanSetting and ScanSettingBinding objects lifecycle and debugging With valid compliance content sources, the high-level ScanSetting and ScanSettingBinding objects can be used to generate ComplianceSuite and ComplianceScan objects: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true # For each role, a separate scan will be created pointing # to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 Both ScanSetting and ScanSettingBinding objects are handled by the same controller tagged with logger=scansettingbindingctrl . These objects have no status. Any issues are communicated in the form of events: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created Now a ComplianceSuite object is created. The flow continues to reconcile the newly created ComplianceSuite . 5.5.7.1.3. ComplianceSuite custom resource lifecycle and debugging The ComplianceSuite CR is a wrapper around ComplianceScan CRs. The ComplianceSuite CR is handled by a controller tagged with logger=suitectrl . This controller handles creating scans from a suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a suite is set to execute periodically, the suitectrl also handles creating a CronJob CR that re-runs the scans in the suite after the initial run is done: USD oc get cronjobs Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m For the most important issues, events are emitted. View them with oc describe compliancesuites/<name> . The Suite objects also have a Status subresource that is updated when any of the Scan objects that belong to this suite update their Status subresource. After all expected scans are created, control is passed to the scan controller. 5.5.7.1.4. ComplianceScan custom resource lifecycle and debugging The ComplianceScan CRs are handled by the scanctrl controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases: 5.5.7.1.4.1. Pending phase The scan is validated for correctness in this phase. If some parameters like storage size are invalid, the scan transitions to DONE with an ERROR result; otherwise, it proceeds to the Launching phase. 5.5.7.1.4.2. Launching phase In this phase, several config maps are created that contain either the environment for the scanner pods or directly the script that the scanner pods will evaluate. List the config maps: USD oc -n openshift-compliance get cm \ -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script= These config maps will be used by the scanner pods. If you ever need to modify the scanner behavior, change the scanner debug level, or print the raw results, modifying the config maps is the way to go.
Afterwards, a persistent volume claim is created per scan to store the raw ARF results: USD oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker The PVCs are mounted by a per-scan ResultServer deployment. A ResultServer is a simple HTTP server where the individual scanner pods upload the full ARF results to. Each server can run on a different node. The full ARF results might be very large and you cannot presume that it would be possible to create a volume that could be mounted from multiple nodes at the same time. After the scan is finished, the ResultServer deployment is scaled down. The PVC with the raw results can be mounted from another custom pod and the results can be fetched or inspected. The traffic between the scanner pods and the ResultServer is protected by mutual TLS protocols. Finally, the scanner pods are launched in this phase; one scanner pod for a Platform scan instance and one scanner pod per matching node for a node scan instance. The per-node pods are labeled with the node name. Each pod is always labeled with the ComplianceScan name: USD oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels Example output NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner + The scan then proceeds to the Running phase. 5.5.7.1.4.3. Running phase The running phase waits until the scanner pods finish. The following terms and processes are in use in the running phase: init container : There is one init container called content-container . It runs the contentImage container and executes a single command that copies the contentFile to the /content directory shared with the other containers in this pod. scanner : This container runs the scan. For node scans, the container mounts the node filesystem as /host and mounts the content delivered by the init container. The container also mounts the entrypoint ConfigMap created in the Launching phase and executes it. The default script in the entrypoint ConfigMap executes OpenSCAP and stores the result files in the /results directory shared between the pod's containers. Logs from this pod can be viewed to determine what the OpenSCAP scanner checked. More verbose output can be viewed with the debug flag. logcollector : The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the ResultServer and separately uploads the XCCDF results along with scan result and OpenSCAP result code as a ConfigMap. These result config maps are labeled with the scan name ( compliance.openshift.io/scan-name=rhcos4-e8-worker ): USD oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Example output Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version="1.0" encoding="UTF-8"?> ... 
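If you want to list every result config map that a scan produced, rather than describing a single one, you can filter on the scan-result label visible in the example output above. This is a sketch based on those labels; note that it lists result config maps for all scans in the namespace:
USD oc -n openshift-compliance get cm -l complianceoperator.openshift.io/scan-result=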
Scanner pods for Platform scans are similar, except: There is one extra init container called api-resource-collector that reads the OpenSCAP content provided by the content-container init container, figures out which API resources the content needs to examine, and stores those API resources in a shared directory from which the scanner container reads them. The scanner container does not need to mount the host file system. When the scanner pods are done, the scans move on to the Aggregating phase. 5.5.7.1.4.4. Aggregating phase In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose is to take the result ConfigMap objects, read the results, and create the corresponding Kubernetes object for each check result. If the check failure can be automatically remediated, a ComplianceRemediation object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container. When a config map is processed by an aggregator pod, it is labeled with the compliance-remediations/processed label. The results of this phase are ComplianceCheckResult objects: USD oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker Example output NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium and ComplianceRemediation objects: USD oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker Example output NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase. 5.5.7.1.4.5. Done phase In the final scan phase, the scan resources are cleaned up if needed and the ResultServer deployment is either scaled down (if the scan was one-time) or deleted if the scan is continuous; the scan instance would then recreate the deployment. It is also possible to trigger a re-run of a scan in the Done phase by annotating it: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with autoApplyRemediations: true . The OpenShift Container Platform administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the ComplianceSuite controller takes over in the Done phase, pauses the machine config pool to which the scan maps, and applies all the remediations in one go. If a remediation is applied, the ComplianceRemediation controller takes over. 5.5.7.1.5. ComplianceRemediation controller lifecycle and debugging The example scan has reported some findings. One of the remediations can be enabled by toggling its apply attribute to true : USD oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{"spec":{"apply":true}}' --type=merge The ComplianceRemediation controller ( logger=remediationctrl ) reconciles the modified object.
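To follow that reconciliation from the command line, a small sketch (assuming the same example remediation name) is to watch the object until its state settles: USD oc -n openshift-compliance get complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod -w The STATE column changes from NotApplied once the controller has rendered the remediation into the machine config described next.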
The result of the reconciliation is a change in the status of the reconciled remediation object, and also a change in the rendered per-suite MachineConfig object that contains all the applied remediations. The MachineConfig object always begins with 75- and is named after the scan and the suite: USD oc get mc | grep 75- Example output 75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s The remediations the machine config currently consists of are listed in its annotations: USD oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements Example output Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod: The ComplianceRemediation controller's algorithm works like this: All currently applied remediations are read into an initial remediation set. If the reconciled remediation is supposed to be applied, it is added to the set. A MachineConfig object is rendered from the set and annotated with the names of the remediations in the set. If the set is empty (the last remediation was unapplied), the rendered MachineConfig object is removed. If and only if the rendered machine config is different from the one already applied in the cluster, the applied MC is updated (or created, or deleted). Creating or modifying a MachineConfig object triggers a reboot of nodes that match the machineconfiguration.openshift.io/role label. See the Machine Config Operator documentation for more details. The remediation loop ends once the rendered machine config is updated, if needed, and the reconciled remediation object status is updated. In our case, applying the remediation would trigger a reboot. After the reboot, annotate the scan to re-run it: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= The scan will run and finish. Check for the remediation to pass: USD oc -n openshift-compliance \ get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod Example output NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium 5.5.7.1.6. Useful labels Each pod that is spawned by the Compliance Operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the compliance.openshift.io/scan-name label. The workload identifier is labeled with the workload label. The Compliance Operator schedules the following workloads: scanner : Performs the compliance scan. resultserver : Stores the raw results for the compliance scan. aggregator : Aggregates the results, detects inconsistencies and outputs result objects (checkresults and remediations). suitererunner : Tags a suite to be re-run (when a schedule is set). profileparser : Parses a datastream and creates the appropriate profiles, rules and variables. When debug logs are required for a certain workload, run: USD oc logs -l workload=<workload_name> -c <container_name> 5.5.7.2. Increasing Compliance Operator resource limits In some cases, the Compliance Operator might require more memory than the default limits allow. The best way to mitigate this issue is to set custom resource limits. To increase the default memory and CPU limits of scanner pods, see `ScanSetting` Custom resource .
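Before raising any limits, a quick sketch for checking what the Operator currently requests (the deployment name compliance-operator is an assumption here; confirm it with oc get deploy -n openshift-compliance): USD oc -n openshift-compliance get deployment compliance-operator -o jsonpath='{.spec.template.spec.containers[*].resources}'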
Procedure To increase the Operator's memory limits to 500 Mi, create the following patch file named co-memlimit-patch.yaml : spec: config: resources: limits: memory: 500Mi Apply the patch file: USD oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge 5.5.7.3. Configuring Operator resource constraints The resources field defines Resource Constraints for all the containers in the Pod created by the Operator Lifecycle Manager (OLM). Note Resource Constraints applied in this process overwrite the existing resource constraints. Procedure Inject a request of 0.25 cpu and 64 Mi of memory, and a limit of 0.5 cpu and 128 Mi of memory in each container by editing the Subscription object: kind: Subscription metadata: name: custom-operator spec: package: etcd channel: alpha config: resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" 5.5.7.4. Configuring ScanSetting timeout The ScanSetting object has a timeout option that can be specified in the ComplianceScanSetting object as a duration string, such as 1h30m . If the scan does not finish within the specified timeout, the scan is reattempted until the maxRetryOnTimeout limit is reached. Procedure To set a timeout and maxRetryOnTimeout in ScanSetting, modify an existing ScanSetting object: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2 1 The timeout variable is defined as a duration string, such as 1h30m . The default value is 30m . To disable the timeout, set the value to 0s . 2 The maxRetryOnTimeout variable defines how many times a retry is attempted. The default value is 3 . 5.5.7.5. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 5.5.8. Using the oc-compliance plugin Although the Compliance Operator automates many of the checks and remediations for the cluster, the full process of bringing a cluster into compliance often requires administrator interaction with the Compliance Operator API and other components. The oc-compliance plugin makes the process easier. 5.5.8.1. Installing the oc-compliance plugin Procedure Extract the oc-compliance image to get the oc-compliance binary: USD podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/ Example output W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list. You can now run oc-compliance . 5.5.8.2.
Fetching raw results When a compliance scan finishes, the results of the individual checks are listed in the resulting ComplianceCheckResult custom resource (CR). However, an administrator or auditor might require the complete details of the scan. The OpenSCAP tool creates an Advanced Recording Format (ARF) formatted file with the detailed results. This ARF file is too large to store in a config map or other standard Kubernetes resource, so a persistent volume (PV) is created to contain it. Procedure Fetching the results from the PV with the Compliance Operator is a four-step process. However, with the oc-compliance plugin, you can use a single command: USD oc compliance fetch-raw <object-type> <object-name> -o <output-path> <object-type> can be either scansettingbinding , compliancescan or compliancesuite , depending on which of these objects the scans were launched with. <object-name> is the name of the binding, suite, or scan object to gather the ARF file for, and <output-path> is the local directory to place the results. For example: USD oc compliance fetch-raw scansettingbindings my-binding -o /tmp/ Example output Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'....... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'...... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master View the list of files in the directory: USD ls /tmp/ocp4-cis-node-master/ Example output ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2 Extract the results: USD bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml View the results: USD ls resultsdir/worker-scan/ Example output worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2 5.5.8.3. Re-running scans Although it is possible to run scans as scheduled jobs, you must often re-run a scan on demand, particularly after remediations are applied or when other changes to the cluster are made. Procedure Rerunning a scan with the Compliance Operator requires use of an annotation on the scan object. However, with the oc-compliance plugin you can rerun a scan with a single command. Enter the following command to rerun the scans for the ScanSettingBinding object named my-binding : USD oc compliance rerun-now scansettingbindings my-binding Example output Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis' 5.5.8.4. Using ScanSettingBinding custom resources When using the ScanSetting and ScanSettingBinding custom resources (CRs) that the Compliance Operator provides, it is possible to run scans for multiple profiles while using a common set of scan options, such as schedule , machine roles , tolerations , and so on. While that is easier than working with multiple ComplianceSuite or ComplianceScan objects, it can confuse new users. 
The oc compliance bind subcommand helps you create a ScanSettingBinding CR. Procedure Run: USD oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>] If you omit the -S flag, the default scan setting provided by the Compliance Operator is used. The object type is the Kubernetes object type, which can be profile or tailoredprofile . More than one object can be provided. The object name is the name of the Kubernetes resource, such as .metadata.name . Add the --dry-run option to display the YAML file of the objects that are created. For example, given the following profiles and scan settings: USD oc get profile.compliance -n openshift-compliance Example output NAME AGE ocp4-cis 9m54s ocp4-cis-node 9m54s ocp4-e8 9m54s ocp4-moderate 9m54s ocp4-ncp 9m54s rhcos4-e8 9m54s rhcos4-moderate 9m54s rhcos4-ncp 9m54s rhcos4-ospp 9m54s rhcos4-stig 9m54s USD oc get scansettings -n openshift-compliance Example output NAME AGE default 10m default-auto-apply 10m To apply the default settings to the ocp4-cis and ocp4-cis-node profiles, run: USD oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node Example output Creating ScanSettingBinding my-binding Once the ScanSettingBinding CR is created, scans for both bound profiles begin with the related settings. Overall, this is the fastest way to begin scanning with the Compliance Operator. 5.5.8.5. Printing controls Compliance standards are generally organized into a hierarchy as follows: A benchmark is the top-level definition of a set of controls for a particular standard. For example, FedRAMP Moderate or Center for Internet Security (CIS) v.1.6.0. A control describes a family of requirements that must be met in order to be in compliance with the benchmark. For example, FedRAMP AC-01 (access control policy and procedures). A rule is a single check that is specific to the system being brought into compliance, and one or more of these rules map to a control. The Compliance Operator handles the grouping of rules into a profile for a single benchmark. It can be difficult to determine which controls the set of rules in a profile satisfies. Procedure The oc compliance controls subcommand provides a report of the standards and controls that a given profile satisfies: USD oc compliance controls profile ocp4-cis-node Example output +-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 1.1.10 | + +----------+ | | 1.1.11 | + +----------+ ... 5.5.8.6. Fetching compliance remediation details The Compliance Operator provides remediation objects that are used to automate the changes required to make the cluster compliant. The fetch-fixes subcommand can help you understand exactly which configuration remediations are used. Use the fetch-fixes subcommand to extract the remediation objects from a profile, rule, or ComplianceRemediation object into a directory to inspect.
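As a sketch of the rule-level form (assuming it mirrors the profile-level and ComplianceRemediation-level forms shown in the procedure that follows, and borrowing a rule name from that example output), a single rule's remediation can be extracted the same way: USD oc compliance fetch-fixes rule ocp4-api-server-audit-log-maxsize -o /tmp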
Procedure View the remediations for a profile: USD oc compliance fetch-fixes profile ocp4-cis -o /tmp Example output No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml 1 The No fixes to persist warning is expected whenever there are rules in a profile that do not have a corresponding remediation, because either the rule cannot be remediated automatically or a remediation was not provided. You can view a sample of the YAML file. The head command will show you the first 10 lines: USD head /tmp/ocp4-api-server-audit-log-maxsize.yaml Example output apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100 View the remediation from a ComplianceRemediation object created after a scan: USD oc get complianceremediations -n openshift-compliance Example output NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied USD oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp Example output Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml You can view a sample of the YAML file. The head command will show you the first 10 lines: USD head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml Example output apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc Warning Use caution before applying remediations directly. Some remediations might not be applicable in bulk, such as the usbguard rules in the moderate profile. In these cases, allow the Compliance Operator to apply the rules because it addresses the dependencies and ensures that the cluster remains in a good state. 5.5.8.7. Viewing ComplianceCheckResult object details When scans are finished running, ComplianceCheckResult objects are created for the individual scan rules. The view-result subcommand provides a human-readable output of the ComplianceCheckResult object details. Procedure Run: USD oc compliance view-result ocp4-cis-scheduler-no-bind-address
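If the oc-compliance plugin is not available, a rough sketch of a fallback is to read the underlying object directly, which shows the same details in a less digested form: USD oc -n openshift-compliance get compliancecheckresults ocp4-cis-scheduler-no-bind-address -o yaml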
"oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{\"spec\":{\"apply\":true}}' --type=merge",
"oc get mc | grep 75-",
"75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s",
"oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements",
"Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:",
"oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=",
"oc -n openshift-compliance get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod",
"NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium",
"oc logs -l workload=<workload_name> -c <container_name>",
"spec: config: resources: limits: memory: 500Mi",
"oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge",
"kind: Subscription metadata: name: custom-operator spec: package: etcd channel: alpha config: resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\"",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2",
"podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/",
"W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list.",
"oc compliance fetch-raw <object-type> <object-name> -o <output-path>",
"oc compliance fetch-raw scansettingbindings my-binding -o /tmp/",
"Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'.... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........ The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master",
"ls /tmp/ocp4-cis-node-master/",
"ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2",
"bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml",
"ls resultsdir/worker-scan/",
"worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2",
"oc compliance rerun-now scansettingbindings my-binding",
"Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis'",
"oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>]",
"oc get profile.compliance -n openshift-compliance",
"NAME AGE ocp4-cis 9m54s ocp4-cis-node 9m54s ocp4-e8 9m54s ocp4-moderate 9m54s ocp4-ncp 9m54s rhcos4-e8 9m54s rhcos4-moderate 9m54s rhcos4-ncp 9m54s rhcos4-ospp 9m54s rhcos4-stig 9m54s",
"oc get scansettings -n openshift-compliance",
"NAME AGE default 10m default-auto-apply 10m",
"oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node",
"Creating ScanSettingBinding my-binding",
"oc compliance controls profile ocp4-cis-node",
"+-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 1.1.10 | + +----------+ | | 1.1.11 | + +----------+",
"oc compliance fetch-fixes profile ocp4-cis -o /tmp",
"No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml",
"head /tmp/ocp4-api-server-audit-log-maxsize.yaml",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100",
"oc get complianceremediations -n openshift-compliance",
"NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied",
"oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp",
"Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml",
"head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc",
"oc compliance view-result ocp4-cis-scheduler-no-bind-address"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/security_and_compliance/compliance-operator
|
Chapter 19. Red Hat Enterprise Linux 7.5 for ARM
|
Chapter 19. Red Hat Enterprise Linux 7.5 for ARM Red Hat Enterprise Linux 7.5 for ARM introduces Red Hat Enterprise Linux 7.5 user space with an updated kernel, which is based on version 4.14 and is provided by the kernel-alt packages. The offering is distributed with other updated packages, but most of the packages are standard Red Hat Enterprise Linux 7 Server RPMs. Installation ISO images are available on the Customer Portal Downloads page . For information about Red Hat Enterprise Linux 7.5 user space, see the Red Hat Enterprise Linux 7 documentation . For information regarding the previous version, refer to Red Hat Enterprise Linux 7.4 for ARM - Release Notes . The following packages are provided as Development Preview in this release: libvirt (Optional channel) qemu-kvm-ma (Optional channel) Note KVM virtualization is a Development Preview on the 64-bit ARM architecture, and thus is not supported by Red Hat. For more information, see the Virtualization Deployment and Administration Guide . Customers may contact Red Hat and describe their use case, which will be taken into consideration for a future release of Red Hat Enterprise Linux. 19.1. New Features and Updates Core Kernel This update introduces the qrwlock queued read/write lock for 64-bit ARM systems. The implementation of this mechanism improves performance and prevents lock starvation by ensuring fair handling of multiple CPUs competing for the global task lock. This change also resolves a known issue, which was present in earlier releases and which caused soft lockups under heavy load. Note that any kernel modules built for previous versions of Red Hat Enterprise Linux 7 for ARM (against the kernel-alt packages) must be rebuilt against the updated kernel. (BZ#1507568) Security USBGuard is now fully supported on 64-bit ARM systems The USBGuard software framework provides system protection against intrusive USB devices by implementing basic whitelisting and blacklisting capabilities based on device attributes. Using USBGuard on 64-bit ARM systems, previously available as a Technology Preview, is now fully supported.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/chap-red_hat_enterprise_linux-7.5_release_notes-rhel_for_arm
|
Chapter 11. Troubleshooting CephFS PVC creation in external mode
|
Chapter 11. Troubleshooting CephFS PVC creation in external mode If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release and it is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS Persistent Volume Claim (PVC) creation in external mode. Check for CephFS PVCs stuck in Pending status. Example output: Check the output of the oc describe command to see the events for the respective PVC. The expected error message is cephfs_metadata/csi.volumes.default/csi.volume.pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx: (1) Operation not permitted) Example output: Check the settings for the <cephfs metadata pool name> (here cephfs_metadata ) and <cephfs data pool name> (here cephfs_data ). To run the command, you need jq preinstalled on the Red Hat Ceph Storage client node. Set the application type for the CephFS pool. Run the following commands on the Red Hat Ceph Storage client node: Verify that the settings are applied. Check the CephFS PVC status again. The PVC should now be in the Bound state. Example output:
|
[
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Pending ocs-external-storagecluster-cephfs 28h [...]",
"oc describe pvc ngx-fs-pxknkcix20-pod -n nginx-file",
"Name: ngx-fs-pxknkcix20-pod Namespace: nginx-file StorageClass: ocs-external-storagecluster-cephfs Status: Pending Volume: Labels: <none> Annotations: volume.beta.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted By: ngx-fs-oyoe047v2bn2ka42jfgg-pod-hqhzf Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 107m (x245 over 22h) openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-5f8b66cc96-hvcqp_6b7044af-c904-4795-9ce5-bf0cf63cc4a4 (combined from similar events): failed to provision volume with StorageClass \"ocs-external-storagecluster-cephfs\": rpc error: code = Internal desc = error (an error (exit status 1) occurred while running rados args: [-m 192.168.13.212:6789,192.168.13.211:6789,192.168.13.213:6789 --id csi-cephfs-provisioner --keyfile= stripped -c /etc/ceph/ceph.conf -p cephfs_metadata getomapval csi.volumes.default csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 /tmp/omap-get-186436239 --namespace=csi]) occurred, command output streams is ( error getting omap value cephfs_metadata/csi.volumes.default/csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47: (1) Operation not permitted)",
"ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": {} } \"cephfs_metadata\" { \"cephfs\": {} }",
"ceph osd pool application set <cephfs metadata pool name> cephfs metadata cephfs",
"ceph osd pool application set <cephfs data pool name> cephfs data cephfs",
"ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": { \"data\": \"cephfs\" } } \"cephfs_metadata\" { \"cephfs\": { \"metadata\": \"cephfs\" } }",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Bound pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 1Mi RWO ocs-external-storagecluster-cephfs 29h [...]"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/troubleshooting_openshift_data_foundation/troubleshooting-cephfs-pvc-creation-in-external-mode_rhodf
|
Chapter 6. Summarizing cluster specifications
|
Chapter 6. Summarizing cluster specifications 6.1. Summarizing cluster specifications by using a cluster version object You can obtain a summary of OpenShift Container Platform cluster specifications by querying the clusterversion resource. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Query cluster version, availability, uptime, and general status: $ oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.13.8 True False 8h Cluster version is 4.13.8 Obtain a detailed summary of cluster specifications, update availability, and update history: $ oc describe clusterversion Example output Name: version Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: ClusterVersion # ... Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce URL: https://access.redhat.com/errata/RHSA-2023:4456 Version: 4.13.8 History: Completion Time: 2023-08-17T13:20:21Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce Started Time: 2023-08-17T12:59:45Z State: Completed Verified: false Version: 4.13.8 # ...
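If you only need the version string, for example in a script, you can extract a single field with a JSONPath expression. This is an optional sketch that goes beyond the procedure above; the clusterversion object name version and the .status.desired.version field are the standard ones, but verify them against your cluster:
$ oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}'
The command prints only the currently desired cluster version, for example 4.13.8.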
|
[
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.13.8 True False 8h Cluster version is 4.13.8",
"oc describe clusterversion",
"Name: version Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: ClusterVersion Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce URL: https://access.redhat.com/errata/RHSA-2023:4456 Version: 4.13.8 History: Completion Time: 2023-08-17T13:20:21Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce Started Time: 2023-08-17T12:59:45Z State: Completed Verified: false Version: 4.13.8"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/support/summarizing-cluster-specifications
|
Chapter 2. Getting started
|
Chapter 2. Getting started 2.1. Maintenance and support for monitoring Not all configuration options for the monitoring stack are exposed. The only supported way of configuring OpenShift Container Platform monitoring is by configuring the Cluster Monitoring Operator (CMO) using the options described in the Config map reference for the Cluster Monitoring Operator . Do not use other configurations, as they are unsupported. Configuration paradigms might change across Prometheus releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in the Config map reference for the Cluster Monitoring Operator , your changes will disappear because the CMO automatically reconciles any differences and resets any unsupported changes back to the originally defined state by default and by design. 2.1.1. Support considerations for monitoring Note Backward compatibility for metrics, recording rules, or alerting rules is not guaranteed. The following modifications are explicitly not supported: Creating additional ServiceMonitor , PodMonitor , and PrometheusRule objects in the openshift-* and kube-* projects. Modifying any resources or objects deployed in the openshift-monitoring or openshift-user-workload-monitoring projects. The resources created by the OpenShift Container Platform monitoring stack are not meant to be used by any other resources, as there are no guarantees about their backward compatibility. Note The Alertmanager configuration is deployed as the alertmanager-main secret resource in the openshift-monitoring namespace. If you have enabled a separate Alertmanager instance for user-defined alert routing, an Alertmanager configuration is also deployed as the alertmanager-user-workload secret resource in the openshift-user-workload-monitoring namespace. To configure additional routes for any instance of Alertmanager, you need to decode, modify, and then encode that secret. This procedure is a supported exception to the preceding statement. Modifying resources of the stack. The OpenShift Container Platform monitoring stack ensures its resources are always in the state it expects them to be. If they are modified, the stack will reset them. Deploying user-defined workloads to openshift-* , and kube-* projects. These projects are reserved for Red Hat provided components and they should not be used for user-defined workloads. Enabling symptom based monitoring by using the Probe custom resource definition (CRD) in Prometheus Operator. Manually deploying monitoring resources into namespaces that have the openshift.io/cluster-monitoring: "true" label. Adding the openshift.io/cluster-monitoring: "true" label to namespaces. This label is reserved only for the namespaces with core OpenShift Container Platform components and Red Hat certified components. Installing custom Prometheus instances on OpenShift Container Platform. A custom instance is a Prometheus custom resource (CR) managed by the Prometheus Operator. 2.1.2. Support policy for monitoring Operators Monitoring Operators ensure that OpenShift Container Platform monitoring resources function as designed and tested. If Cluster Version Operator (CVO) control of an Operator is overridden, the Operator does not respond to configuration changes, reconcile the intended state of cluster objects, or receive updates. 
While overriding CVO control for an Operator can be helpful during debugging, this is unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. Overriding the Cluster Version Operator The spec.overrides parameter can be added to the configuration for the CVO to allow administrators to provide a list of overrides to the behavior of the CVO for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state and prevents the monitoring stack from being reconciled to its intended state. This impacts the reliability features built into Operators and prevents updates from being received. Reported issues must be reproduced after removing any overrides for support to proceed. 2.1.3. Support version matrix for monitoring components The following matrix contains information about versions of monitoring components for OpenShift Container Platform 4.12 and later releases: Table 2.1. OpenShift Container Platform and component versions OpenShift Container Platform Prometheus Operator Prometheus Metrics Server Alertmanager kube-state-metrics agent monitoring-plugin node-exporter agent Thanos 4.16 0.73.2 2.52.0 0.7.1 0.26.0 2.12.0 1.0.0 1.8.0 0.35.0 4.15 0.70.0 2.48.0 0.6.4 0.26.0 2.10.1 1.0.0 1.7.0 0.32.5 4.14 0.67.1 2.46.0 N/A 0.25.0 2.9.2 1.0.0 1.6.1 0.30.2 4.13 0.63.0 2.42.0 N/A 0.25.0 2.8.1 N/A 1.5.0 0.30.2 4.12 0.60.1 2.39.1 N/A 0.24.0 2.6.0 N/A 1.4.0 0.28.1 Note The openshift-state-metrics agent and Telemeter Client are OpenShift-specific components. Therefore, their versions correspond with the versions of OpenShift Container Platform. 2.2. Core platform monitoring first steps After OpenShift Container Platform is installed, core platform monitoring components immediately begin collecting metrics, which you can query and view. The default in-cluster monitoring stack includes the core platform Prometheus instance that collects metrics from your cluster and the core Alertmanager instance that routes alerts, among other components. Depending on who will use the monitoring stack and for what purposes, as a cluster administrator, you can further configure these monitoring components to suit the needs of different users in various scenarios. 2.2.1. Configuring core platform monitoring: Postinstallation steps After OpenShift Container Platform is installed, cluster administrators typically configure core platform monitoring to suit their needs. These activities include setting up storage and configuring options for Prometheus, Alertmanager, and other monitoring components. Note By default, in a newly installed OpenShift Container Platform system, users can query and view collected metrics. You need only configure an alert receiver if you want users to receive alert notifications. Any other configuration options listed here are optional. Create the cluster-monitoring-config ConfigMap object if it does not exist. Configure notifications for default platform alerts so that Alertmanager can send alerts to an external notification system such as email, Slack, or PagerDuty. For shorter term data retention, configure persistent storage for Prometheus and Alertmanager to store metrics and alert data. 
Specify the metrics data retention parameters for Prometheus and Thanos Ruler. Important In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability. By default, in a newly installed OpenShift Container Platform system, the monitoring ClusterOperator resource reports a PrometheusDataPersistenceNotConfigured status message to remind you that storage is not configured. For longer term data retention, configure the remote write feature to enable Prometheus to send ingested metrics to remote systems for storage. Important Be sure to add cluster ID labels to metrics for use with your remote write storage configuration. Grant monitoring cluster roles to any non-administrator users that need to access certain monitoring features. Assign tolerations to monitoring stack components so that administrators can move them to tainted nodes. Set the body size limit for metrics collection to help avoid situations in which Prometheus consumes excessive amounts of memory when scraped targets return a response that contains a large amount of data. Modify or create alerting rules for your cluster. These rules specify the conditions that trigger alerts, such as high CPU or memory usage, network latency, and so forth. Specify resource limits and requests for monitoring components to ensure that the containers that run monitoring components have enough CPU and memory resources. With the monitoring stack configured to suit your needs, Prometheus collects metrics from the specified services and stores these metrics according to your settings. You can go to the Observe pages in the OpenShift Container Platform web console to view and query collected metrics, manage alerts, identify performance bottlenecks, and scale resources as needed: View dashboards to visualize collected metrics, troubleshoot alerts, and monitor other information about your cluster. Query collected metrics by creating PromQL queries or using predefined queries. 2.3. User workload monitoring first steps As a cluster administrator, you can optionally enable monitoring for user-defined projects in addition to core platform monitoring. Non-administrator users such as developers can then monitor their own projects outside of core platform monitoring. Cluster administrators typically complete the following activities to configure user-defined projects so that users can view collected metrics, query these metrics, and receive alerts for their own projects: Enable user workload monitoring . Grant non-administrator users permissions to monitor user-defined projects by assigning the monitoring-rules-view , monitoring-rules-edit , or monitoring-edit cluster roles. Assign the user-workload-monitoring-config-edit role to grant non-administrator users permission to configure user-defined projects. Enable alert routing for user-defined projects so that developers and other users can configure custom alerts and alert routing for their projects. If needed, configure alert routing for user-defined projects to use an optional Alertmanager instance dedicated for use only by user-defined projects . Configure notifications for user-defined alerts . If you use the platform Alertmanager instance for user-defined alert routing, configure different alert receivers for default platform alerts and user-defined alerts. 2.4. 
Developer and non-administrator steps After monitoring for user-defined projects is enabled and configured, developers and other non-administrator users can then perform the following activities to set up and use monitoring for their own projects: Deploy and monitor services . Create and manage alerting rules . Receive and manage alerts for your projects. If granted the alert-routing-edit cluster role, configure alert routing . View dashboards by using the OpenShift Container Platform web console. Query the collected metrics by creating PromQL queries or using predefined queries.
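To illustrate the first postinstallation step listed above, a minimal cluster-monitoring-config ConfigMap could look like the following sketch. The retention period, storage class name, and storage size are placeholder assumptions for illustration, not recommended values:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 7d
      volumeClaimTemplate:
        spec:
          storageClassName: <storage_class_name>
          resources:
            requests:
              storage: 40Gi
You can apply it with oc apply -f <file_name>, after which the Cluster Monitoring Operator reconciles Prometheus with the requested retention and persistent storage.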
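The spec.overrides mechanism described in the support policy above is set on the ClusterVersion resource. The following is a sketch only; the component shown (the cluster-monitoring-operator Deployment) is an illustrative assumption, and setting unmanaged: true places the entire cluster in an unsupported state, as noted above:
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  - kind: Deployment
    group: apps
    name: cluster-monitoring-operator
    namespace: openshift-monitoring
    unmanaged: true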
|
[
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing."
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/monitoring/getting-started
|
Chapter 5. Red Hat Decision Manager projects
|
Chapter 5. Red Hat Decision Manager projects Red Hat Decision Manager projects contain the business assets that you develop in Red Hat Decision Manager and are assigned to a space (for example, MyProject within MySpace ). Projects also contain configuration files such as a Maven project object model file ( pom.xml ), which contains build, environment, and other information about the project, and a KIE module descriptor file ( kmodule.xml ), which contains the KIE Base and KIE Session configurations for the assets in the project.
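For illustration, a minimal kmodule.xml that declares one KIE base and one KIE session might look like the following sketch; the base name, session name, and package are placeholder assumptions:
<kmodule xmlns="http://www.drools.org/xsd/kmodule">
  <!-- One KIE base that compiles the assets under the com.example.rules package -->
  <kbase name="rulesKBase" packages="com.example.rules" default="true">
    <!-- One stateful KIE session used to insert facts and fire rules -->
    <ksession name="rulesKSession" type="stateful" default="true"/>
  </kbase>
</kmodule>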
| null |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/projects-con_managing-projects
|
Chapter 4. Refining your view of systems in the advisor service
|
Chapter 4. Refining your view of systems in the advisor service The Systems view shows all of your systems that have the Insights client installed and reporting advisor data. The Systems list can be refined in the following ways. 4.1. Filter by name Search for the host or system name. 4.2. Sorting options Use the sorting arrows above the following columns to order your systems table: Name. Alphabetize by A to Z or Z to A. Number of recommendations. Order by the number of recommendations impacting each system. Last seen. Order by the number of minutes, hours, or days since an archive was last uploaded from the system to the advisor service. 4.3. Filtering systems by tags, SAP workloads, and groups in the advisor service Filter results in the advisor service UI by custom group tags, SAP workloads, and Satellite groups to quickly locate and view the systems you want to focus on. In the advisor service, access tag, workload, and group filters using the Filter results box, located in the upper left corner of the page in the Red Hat Insights for Red Hat Enterprise Linux application. The filter dropdown menu shows all of the tags associated with the account, allowing you to click one or more parameters by which to filter. To filter by tags in the advisor service, complete the following steps: Procedure Navigate to the Operations > Advisor > Systems page and log in if necessary. The Filter results box is in most views in the Red Hat Insights for Red Hat Enterprise Linux application and these procedures work anywhere you access Filter results . Click the arrow on the Filter results box and scroll to see the tags available for systems on this account. Select one or more tags to filter by SAP workloads, Satellite host group, or a custom group. Applied tags are visible next to the Filter results box. View the filtered results throughout the advisor service. To remove the tag, click Clear filters . Additional resources To learn more about system-group tags in Insights for Red Hat Enterprise Linux, see the chapter System tags and groups .
| null |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_rhel_configuration_issues_using_the_red_hat_insights_advisor_service_with_fedramp/assembly-adv-assess-refining-system-list
|
Chapter 2. Creating a custom Java runtime environment for non-modular applications
|
Chapter 2. Creating a custom Java runtime environment for non-modular applications You can create a custom Java runtime environment from a non-modular application by using the jlink tool. Prerequisites Install Red Hat build of OpenJDK on RHEL by using an archive. See Installing Red Hat build of OpenJDK on RHEL using an archive . Note For best results, use portable Red Hat binaries as a basis for a jlink runtime, because these binaries contain bundled libraries. Procedure Create a simple Hello World application by using the Logger class. Check that the base Red Hat build of OpenJDK 17 binary exists in the jdk-17 folder: Create a directory for your application: Create the hello-example/sample/HelloWorld.java file with the following content: package sample; import java.util.logging.Logger; public class HelloWorld { private static final Logger LOG = Logger.getLogger(HelloWorld.class.getName()); public static void main(String[] args) { LOG.info("Hello World!"); } } Compile your application: Run your application without a custom JRE: The example shows that the base Red Hat build of OpenJDK requires 313 MB to run a single class. (Optional) You can inspect the Red Hat build of OpenJDK and see many modules that your application does not require: This sample Hello World application has very few dependencies. You can use jlink to create custom runtime images for your application. With these images you can run your application with only the required Red Hat build of OpenJDK dependencies. Determine the module dependencies of your application by using the jdeps command: Build a custom Java runtime image for your application: Note jlink reduces the size of the runtime image from the 313 MB base Red Hat build of OpenJDK to a 50 MB custom runtime image. You can verify that your application runs on the reduced runtime image: The generated JRE with your sample application does not have any other dependencies. You can distribute your application together with your custom runtime for deployment. Note You must rebuild the custom Java runtime images for your application with every security update of your base Red Hat build of OpenJDK.
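If you want to shrink the custom runtime further, jlink accepts additional options that strip debug symbols, header files, and man pages and compress the image. This is an optional sketch that goes beyond the documented procedure; the output directory name is arbitrary:
./jdk-17/bin/jlink --add-modules java.base,java.logging \
    --strip-debug --no-header-files --no-man-pages --compress=2 \
    --output custom-runtime-small
du -sh custom-runtime-small
The resulting image is typically smaller than the 50 MB image created with the default options, at the cost of removing debugging metadata from the runtime.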
|
[
"ls jdk-17 bin conf demo include jmods legal lib man NEWS release ./jdk-17/bin/java -version openjdk version \"17.0.10\" 2021-01-19 LTS OpenJDK Runtime Environment 18.9 (build 17.0.10+9-LTS) OpenJDK 64-Bit Server VM 18.9 (build 17.0.10+9-LTS, mixed mode)",
"mkdir -p hello-example/sample",
"package sample; import java.util.logging.Logger; public class HelloWorld { private static final Logger LOG = Logger.getLogger(HelloWorld.class.getName()); public static void main(String[] args) { LOG.info(\"Hello World!\"); } }",
"./jdk-17/bin/javac -d . USD(find hello-example -name \\*.java)",
"./jdk-17/bin/java sample.HelloWorld Mar 09, 2021 10:48:59 AM sample.HelloWorld main INFO: Hello World!",
"du -sh jdk-17/ 313M jdk-17/",
"./jdk-17/bin/java --list-modules [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]",
"./jdk-17/bin/jdeps -s ./sample/HelloWorld.class HelloWorld.class -> java.base HelloWorld.class -> java.logging",
"./jdk-17/bin/jlink --add-modules java.base,java.logging --output custom-runtime du -sh custom-runtime 50M custom-runtime/ ./custom-runtime/bin/java --list-modules [email protected] [email protected]",
"./custom-runtime/bin/java sample.HelloWorld Jan 14, 2021 12:13:26 PM HelloWorld main INFO: Hello World!"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_jlink_to_customize_java_runtime_environment/creating-custom-jre
|
Chapter 1. Preparing the installation
|
Chapter 1. Preparing the installation To prepare an OpenShift Dev Spaces installation, learn about the OpenShift Dev Spaces ecosystem and deployment constraints: Section 1.1, "Supported platforms" Section 1.2, "Installing the dsc management tool" Section 1.3, "Architecture" Section 1.4, "Calculating Dev Spaces resource requirements" Section 3.1, "Understanding the CheCluster Custom Resource" 1.1. Supported platforms OpenShift Dev Spaces runs on OpenShift 4.12-4.16 on the following CPU architectures: AMD64 and Intel 64 ( x86_64 ) IBM Power ( ppc64le ) and IBM Z ( s390x ) Additional resources OpenShift Documentation 1.2. Installing the dsc management tool You can install dsc , the Red Hat OpenShift Dev Spaces command-line management tool, on Microsoft Windows, Apple MacOS, and Linux. With dsc , you can perform operations on the OpenShift Dev Spaces server, such as starting, stopping, updating, and deleting the server. Prerequisites Linux or macOS. Note For installing dsc on Windows, see the following pages: https://developers.redhat.com/products/openshift-dev-spaces/download https://github.com/redhat-developer/devspaces-chectl Procedure Download the archive from https://developers.redhat.com/products/openshift-dev-spaces/download to a directory such as $HOME . Run tar xvzf on the archive to extract the /dsc directory. Add the extracted /dsc/bin subdirectory to $PATH . Verification Run dsc to view information about it. Additional resources " dsc reference documentation " 1.3. Architecture Figure 1.1. High-level OpenShift Dev Spaces architecture with the Dev Workspace operator OpenShift Dev Spaces runs on three groups of components: OpenShift Dev Spaces server components Manage User project and workspaces. The main component is the User dashboard, from which users control their workspaces. Dev Workspace operator Creates and controls the necessary OpenShift objects to run User workspaces, including Pods , Services , and PersistentVolumes . User workspaces Container-based development environments, the IDE included. The role of these OpenShift features is central: Dev Workspace Custom Resources Valid OpenShift objects representing the User workspaces and manipulated by OpenShift Dev Spaces. It is the communication channel for the three groups of components. OpenShift role-based access control (RBAC) Controls access to all resources. Additional resources Section 1.3.1, "Server components" Section 1.3.1.2, "Dev Workspace operator" Section 1.3.2, "User workspaces" Dev Workspace Operator repository Kubernetes documentation - Custom Resources 1.3.1. Server components The OpenShift Dev Spaces server components ensure multi-tenancy and workspace management. Figure 1.2. OpenShift Dev Spaces server components interacting with the Dev Workspace operator Additional resources Section 1.3.1.1, "Dev Spaces operator" Section 1.3.1.2, "Dev Workspace operator" Section 1.3.1.3, "Gateway" Section 1.3.1.4, "User dashboard" Section 1.3.1.5, "Devfile registries" Section 1.3.1.6, "Dev Spaces server" Section 1.3.1.7, "Plug-in registry" 1.3.1.1. Dev Spaces operator The OpenShift Dev Spaces operator ensures full lifecycle management of the OpenShift Dev Spaces server components. It introduces: CheCluster custom resource definition (CRD) Defines the CheCluster OpenShift object. OpenShift Dev Spaces controller Creates and controls the necessary OpenShift objects to run an OpenShift Dev Spaces instance, such as pods, services, and persistent volumes.
CheCluster custom resource (CR) On a cluster with the OpenShift Dev Spaces operator, it is possible to create a CheCluster custom resource (CR). The OpenShift Dev Spaces operator ensures the full lifecycle management of the OpenShift Dev Spaces server components on this OpenShift Dev Spaces instance: Section 1.3.1.2, "Dev Workspace operator" Section 1.3.1.3, "Gateway" Section 1.3.1.4, "User dashboard" Section 1.3.1.5, "Devfile registries" Section 1.3.1.6, "Dev Spaces server" Section 1.3.1.7, "Plug-in registry" Additional resources Section 3.1, "Understanding the CheCluster Custom Resource" Chapter 2, Installing Dev Spaces 1.3.1.2. Dev Workspace operator The Dev Workspace operator extends OpenShift to provide Dev Workspace support. It introduces: Dev Workspace custom resource definition Defines the Dev Workspace OpenShift object from the Devfile v2 specification. Dev Workspace controller Creates and controls the necessary OpenShift objects to run a Dev Workspace, such as pods, services, and persistent volumes. Dev Workspace custom resource On a cluster with the Dev Workspace operator, it is possible to create Dev Workspace custom resources (CR). A Dev Workspace CR is an OpenShift representation of a Devfile. It defines a User workspace in an OpenShift cluster. Additional resources Devfile API repository 1.3.1.3. Gateway The OpenShift Dev Spaces gateway has the following roles: Routing requests. It uses Traefik . Authenticating users with OpenID Connect (OIDC). It uses OpenShift OAuth2 proxy . Applying OpenShift role-based access control (RBAC) policies to control access to any OpenShift Dev Spaces resource. It uses kube-rbac-proxy . The OpenShift Dev Spaces operator manages it as the che-gateway Deployment. It controls access to: Section 1.3.1.4, "User dashboard" Section 1.3.1.5, "Devfile registries" Section 1.3.1.6, "Dev Spaces server" Section 1.3.1.7, "Plug-in registry" Section 1.3.2, "User workspaces" Figure 1.3. OpenShift Dev Spaces gateway interactions with other components Additional resources Section 3.10, "Managing identities and authorizations" 1.3.1.4. User dashboard The user dashboard is the landing page of Red Hat OpenShift Dev Spaces. OpenShift Dev Spaces users browse the user dashboard to access and manage their workspaces. It is a React application. The OpenShift Dev Spaces deployment starts it in the devspaces-dashboard Deployment. It needs access to: Section 1.3.1.5, "Devfile registries" Section 1.3.1.6, "Dev Spaces server" Section 1.3.1.7, "Plug-in registry" OpenShift API Figure 1.4. User dashboard interactions with other components When the user requests the user dashboard to start a workspace, the user dashboard executes this sequence of actions: Collects the devfile from the Section 1.3.1.5, "Devfile registries" , when the user is creating a workspace from a code sample. Sends the repository URL to Section 1.3.1.6, "Dev Spaces server" and expects a devfile in return, when the user is creating a workspace from a remote devfile. Reads the devfile describing the workspace. Collects the additional metadata from the Section 1.3.1.7, "Plug-in registry" . Converts the information into a Dev Workspace Custom Resource. Creates the Dev Workspace Custom Resource in the user project using the OpenShift API. Watches the Dev Workspace Custom Resource status. Redirects the user to the running workspace IDE. 1.3.1.5. Devfile registries The OpenShift Dev Spaces devfile registries are services providing a list of sample devfiles to create ready-to-use workspaces. 
The Section 1.3.1.4, "User dashboard" displays the samples list on the Dashboard Create Workspace page. Each sample includes a Devfile v2. The OpenShift Dev Spaces deployment starts one devfile registry instance in the devfile-registry deployment. Figure 1.5. Devfile registries interactions with other components Additional resources Devfile v2 documentation devfile registry latest community version online instance OpenShift Dev Spaces devfile registry repository 1.3.1.6. Dev Spaces server The OpenShift Dev Spaces server main functions are: Creating user namespaces. Provisioning user namespaces with required secrets and config maps. Integrating with Git services providers, to fetch and validate devfiles and authentication. The OpenShift Dev Spaces server is a Java web service exposing an HTTP REST API and needs access to: Git service providers OpenShift API Figure 1.6. OpenShift Dev Spaces server interactions with other components Additional resources Section 3.3.2, "Advanced configuration options for Dev Spaces server" 1.3.1.7. Plug-in registry Each OpenShift Dev Spaces workspace starts with a specific editor and set of associated extensions. The OpenShift Dev Spaces plugin registry provides the list of available editors and editor extensions. A Devfile v2 describes each editor or extension. The Section 1.3.1.4, "User dashboard" is reading the content of the registry. Figure 1.7. Plugin registries interactions with other components Additional resources Editor definitions in the OpenShift Dev Spaces plugin registry repository Plugin registry latest community version online instance 1.3.2. User workspaces Figure 1.8. User workspaces interactions with other components User workspaces are web IDEs running in containers. A User workspace is a web application. It consists of microservices running in containers providing all the services of a modern IDE running in your browser: Editor Language auto-completion Language server Debugging tools Plug-ins Application runtimes A workspace is one OpenShift Deployment containing the workspace containers and enabled plugins, plus related OpenShift components: Containers ConfigMaps Services Endpoints Ingresses or Routes Secrets Persistent Volumes (PV) A OpenShift Dev Spaces workspace contains the source code of the projects, persisted in a OpenShift Persistent Volume (PV). Microservices have read/write access to this shared directory. Use the devfile v2 format to specify the tools and runtime applications of a OpenShift Dev Spaces workspace. The following diagram shows one running OpenShift Dev Spaces workspace and its components. Figure 1.9. OpenShift Dev Spaces workspace components In the diagram, there is one running workspaces. 1.4. Calculating Dev Spaces resource requirements The OpenShift Dev Spaces Operator, Dev Workspace Controller, and user workspaces consist of a set of pods. The pods contribute to the resource consumption in CPU and memory limits and requests. Note The following link to an example devfile is a pointer to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat's QE department, and they have not yet been proven by a wide user group. Please, use this information cautiously. It is best used for educational and 'developmental' purposes rather than 'production' purposes. Procedure Identify the workspace resource requirements which depend on the devfile that is used for defining the development environment. 
This includes identifying the workspace components explicitly specified in the components section of the devfile. Here is an example devfile with the following components: Example 1.1. tools The tools component of the devfile defines the following requests and limits: memoryLimit: 6G memoryRequest: 512M cpuRequest: 1000m cpuLimit: 4000m Example 1.2. postgresql The postgresql component does not define any requests and limits and therefore falls back on the defaults for the dedicated container: memoryLimit: 128M memoryRequest: 64M cpuRequest: 10m cpuLimit: 1000m During the workspace startup, an internal che-gateway container is implicitly provisioned with the following requests and limits: memoryLimit: 256M memoryRequest: 64M cpuRequest: 50m cpuLimit: 500m Calculate the sums of the resources required for each workspace. If you intend to use multiple devfiles, repeat this calculation for every expected devfile. Example 1.3. Workspace requirements for the example devfile in the step Purpose Pod Container name Memory limit Memory request CPU limit CPU request Developer tools workspace tools 6 GiB 512 MiB 4000 m 1000 m Database workspace postgresql 128 MiB 64 MiB 1000 m 10 m OpenShift Dev Spaces gateway workspace che-gateway 256 MiB 64 MiB 500 m 50 m Total 6.4 GiB 640 MiB 5500 m 1060 m Multiply the resources calculated per workspace by the number of workspaces that you expect all of your users to run simultaneously. Calculate the sums of the requirements for the OpenShift Dev Spaces Operator, Operands, and Dev Workspace Controller. Table 1.1. Default requirements for the OpenShift Dev Spaces Operator, Operands, and Dev Workspace Controller Purpose Pod name Container names Memory limit Memory request CPU limit CPU request OpenShift Dev Spaces operator devspaces-operator devspaces-operator 256 MiB 64 MiB 500 m 100 m OpenShift Dev Spaces Server devspaces devspaces-server 1 GiB 512 MiB 1000 m 100 m OpenShift Dev Spaces Dashboard devspaces-dashboard devspaces-dashboard 256 MiB 32 MiB 500 m 100 m OpenShift Dev Spaces Gateway devspaces-gateway traefik 4 GiB 128 MiB 1000 m 100 m OpenShift Dev Spaces Gateway devspaces-gateway configbump 256 MiB 64 MiB 500 m 50 m OpenShift Dev Spaces Gateway devspaces-gateway oauth-proxy 512 MiB 64 MiB 500 m 100 m OpenShift Dev Spaces Gateway devspaces-gateway kube-rbac-proxy 512 MiB 64 MiB 500 m 100 m Devfile registry devfile-registry devfile-registry 256 MiB 32 MiB 500 m 100 m Plugin registry plugin-registry plugin-registry 256 MiB 32 MiB 500 m 100 m Dev Workspace Controller Manager devworkspace-controller-manager devworkspace-controller 1 GiB 100 MiB 1000 m 250 m Dev Workspace Controller Manager devworkspace-controller-manager kube-rbac-proxy N/A N/A N/A N/A Dev Workspace webhook server devworkspace-webhook-server webhook-server 300 MiB 20 MiB 200 m 100 m Dev Workspace Operator Catalog devworkspace-operator-catalog registry-server N/A 50 MiB N/A 10 m Dev Workspace Webhook Server devworkspace-webhook-server webhook-server 300 MiB 20 MiB 200 m 100 m Dev Workspace Webhook Server devworkspace-webhook-server kube-rbac-proxy N/A N/A N/A N/A Total 9 GiB 1.2 GiB 6.9 1.3 Additional resources What is a devfile Benefits of devfile Devfile customization overview
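For reference, the tools requests and limits listed in Example 1.1 come from a container component in the devfile. A component definition of that shape looks roughly like the following sketch; the container image is a placeholder assumption and the resource values mirror the example above:
components:
  - name: tools
    container:
      image: quay.io/devfile/universal-developer-image:ubi8-latest
      memoryLimit: 6G
      memoryRequest: 512M
      cpuLimit: 4000m
      cpuRequest: 1000m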
|
[
"dsc",
"memoryLimit: 6G memoryRequest: 512M cpuRequest: 1000m cpuLimit: 4000m",
"memoryLimit: 128M memoryRequest: 64M cpuRequest: 10m cpuLimit: 1000m",
"memoryLimit: 256M memoryRequest: 64M cpuRequest: 50m cpuLimit: 500m"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.14/html/administration_guide/preparing-the-installation
|
Chapter 4. Red Hat OpenShift Cluster Manager
|
Chapter 4. Red Hat OpenShift Cluster Manager Red Hat OpenShift Cluster Manager is a managed service where you can install, modify, operate, and upgrade your Red Hat OpenShift clusters. This service allows you to work with all of your organization's clusters from a single dashboard. OpenShift Cluster Manager guides you to install OpenShift Container Platform, Red Hat OpenShift Service on AWS (ROSA), and OpenShift Dedicated clusters. It is also responsible for managing both OpenShift Container Platform clusters after self-installation as well as your ROSA and OpenShift Dedicated clusters. You can use OpenShift Cluster Manager to do the following actions: Create new clusters View cluster details and metrics Manage your clusters with tasks such as scaling, changing node labels, networking, authentication Manage access control Monitor clusters Schedule upgrades 4.1. Accessing Red Hat OpenShift Cluster Manager You can access OpenShift Cluster Manager with your configured OpenShift account. Prerequisites You have an account that is part of an OpenShift organization. If you are creating a cluster, your organization has specified quota. Procedure Log in to OpenShift Cluster Manager Hybrid Cloud Console using your login credentials. 4.2. General actions On the top right of the cluster page, there are some actions that a user can perform on the entire cluster: Open console launches a web console so that the cluster owner can issue commands to the cluster. Actions drop-down menu allows the cluster owner to rename the display name of the cluster, change the amount of load balancers and persistent storage on the cluster, if applicable, manually set the node count, and delete the cluster. Refresh icon forces a refresh of the cluster. 4.3. Cluster tabs Selecting an active, installed cluster shows tabs associated with that cluster. The following tabs display after the cluster's installation completes: Overview Access control Add-ons Networking Insights Advisor Machine pools Support Settings 4.3.1. Overview tab The Overview tab provides information about how your cluster was configured: Cluster ID is the unique identification for the created cluster. This ID can be used when issuing commands to the cluster from the command line. Type shows the OpenShift version that the cluster is using. Region is the server region. Provider shows which cloud provider that the cluster was built upon. Availability shows which type of availability zone that the cluster uses, either single or multizone. Version is the OpenShift version that is installed on the cluster. If there is an update available, you can update from this field. Created at shows the date and time that the cluster was created. Owner identifies who created the cluster and has owner rights. Subscription type shows the subscription model that was selected on creation. Infrastructure type is the type of account that the cluster uses. Status displays the current status of the cluster. Total vCPU shows the total available virtual CPU for this cluster. Total memory shows the total available memory for this cluster. Load balancers Persistent storage displays the amount of storage that is available on this cluster. Nodes shows the actual and desired nodes on the cluster. These numbers might not match due to cluster scaling. Network field shows the address and prefixes for network connectivity. Resource usage section of the tab displays the resources in use with a graph. 
Advisor recommendations section gives insight into security, performance, availability, and stability. This section requires the use of remote health functionality. See Using Insights to identify issues with your cluster . Cluster history section shows everything that has been done with the cluster, including creation and when a new version is identified. 4.3.2. Access control tab The Access control tab allows the cluster owner to set up an identity provider, grant elevated permissions, and grant roles to other users. Prerequisites You must be the cluster owner or have the correct permissions to grant roles on the cluster. Procedure Select the Grant role button. Enter the Red Hat account login for the user that you wish to grant a role on the cluster. Select the Grant role button on the dialog box. The dialog box closes, and the selected user is shown with "Cluster Editor" access. 4.3.3. Add-ons tab The Add-ons tab displays all of the optional add-ons that can be added to the cluster. Select the desired add-on, and then select Install below the description for the add-on that displays. 4.3.4. Insights Advisor tab The Insights Advisor tab uses the Remote Health functionality of the OpenShift Container Platform to identify and mitigate risks to security, performance, availability, and stability. See Using Insights to identify issues with your cluster in the OpenShift Container Platform documentation. 4.3.5. Machine pools tab The Machine pools tab allows the cluster owner to create new machine pools, if there is enough available quota, or edit an existing machine pool. Selecting More options > Scale opens the "Edit node count" dialog. In this dialog, you can change the node count per availability zone. If autoscaling is enabled, you can also set the range for autoscaling. 4.3.6. Support tab In the Support tab, you can add notification contacts for individuals that should receive cluster notifications. The username or email address that you provide must relate to a user account in the Red Hat organization where the cluster is deployed. Also from this tab, you can open a support case to request technical support for your cluster. 4.3.7. Settings tab The Settings tab provides a few options for the cluster owner: Monitoring , which is enabled by default, allows for reporting done on user-defined actions. See Understanding the monitoring stack . Update strategy allows you to determine if the cluster automatically updates on a certain day of the week at a specified time or if all updates are scheduled manually. Node draining sets the duration that protected workloads are respected during updates. When this duration has passed, the node is forcibly removed. Update status shows the current version and whether any updates are available. 4.4. Additional resources For the complete documentation for OpenShift Cluster Manager, see OpenShift Cluster Manager documentation .
Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 9.1. 4.1. Installer and image creation Automatic FCP SCSI LUN scanning support in installer The installer can now use the automatic LUN scanning when attaching FCP SCSI LUNs on IBM Z systems. Automatic LUN scanning is available for FCP devices operating in NPIV mode, if it is not disabled through the zfcp.allow_lun_scan kernel module parameter. It is enabled by default. It provides access to all SCSI devices found in the storage area network attached to the FCP device with the specified device bus ID. It is not necessary to specify WWPN and FCP LUNs anymore and it is sufficient to provide just the FCP device bus ID. (BZ#1937031) Image builder on-premise now supports the /boot partition customization Image builder on-premise version now supports building images with custom /boot mount point partition size. You can specify the size of the /boot mount point partition in the blueprint customization, to increase the size of the /boot partition in case the default boot partition size is too small. For example: (JIRA:RHELPLAN-130379) Added the --allow-ssh kickstart option to enable password-based SSH root logins During the graphical installation, you have an option to enable password-based SSH root logins. This functionality was not available in kickstart installations. With this update, an option --allow-ssh has been added to the rootpw kickstart command. This option enables the root user to login to the system using SSH with a password. ( BZ#2083269 ) Boot loader menu hidden by default The GRUB boot loader is now configured to hide the boot menu by default. This results in a smoother boot experience. The boot menu is hidden in all of the following cases: When you restart the system from the desktop environment or the login screen. During the first system boot after the installation. When the greenboot package is installed and enabled. If the system boot failed, GRUB always displays the boot menu during the boot. To access the boot menu manually, use either of the following options: Repeatedly press Esc during boot. Repeatedly press F8 during boot. Hold Shift during boot. To disable this feature and configure the boot loader menu to display by default, use the following command: (BZ#2059414) Minimal RHEL installation now installs only the s390utils-core package In RHEL 8.4 and later, the s390utils-base package is split into an s390utils-core package and an auxiliary s390utils-base package. As a result, setting the RHEL installation to minimal-environment installs only the necessary s390utils-core package and not the auxiliary s390utils-base package. If you want to use the s390utils-base package with a minimal RHEL installation, you must manually install the package after completing the RHEL installation or explicitly install s390utils-base using a kickstart file. (BZ#1932480) Image builder on-premise now supports uploading images to GCP With this enhancement, you can use image builder CLI to build a gce image, providing credentials for the user or service account that you want to use to upload the images. As a result, image builder creates the image and then uploads the gce image directly to the GCP environment that you specified. 
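A minimal sketch of the workflow, assuming a blueprint named base-image, an upload name my-gce-image, and a GCP credentials file gcp-config.toml (all three names are illustrative, not part of the release note):
# composer-cli compose start base-image gce my-gce-image gcp-config.toml
# composer-cli compose status
The first command builds the gce image and uploads it to the GCP project and bucket defined in the TOML file; the second command lets you follow the progress of the build.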
( BZ#2049492 ) Image builder on-premise CLI supports pushing a container image directly to a registry With this enhancement, you can push RHEL for Edge container images directly to a container registry after they have been built, using the image builder CLI. To build the container image: Set up an upload provider and, optionally, add credentials. Build the container image, passing the container registry and the repository to composer-cli as arguments. After the image is ready, it is available in the container registry you set up. (JIRA:RHELPLAN-130376) Image builder on-premise users now customize their blueprints during the image creation process With this update, the Edit Blueprint page was removed to unify the user experience in the image builder service and in the image builder app in cockpit-composer . Users can now create their blueprints and add their customizations, such as adding packages and creating users, during the image creation process. The versioning of blueprints has also been removed so that blueprints only have one version: the current one. Users have access to older blueprint versions through their already created images. (JIRA:RHELPLAN-122735) 4.2. RHEL for Edge RHEL for Edge now supports the fdo-admin cli utility With this update, you can configure the FDO services directly across all deployment scenarios by using the CLI. Run the following commands to generate the certificates and keys for the services: Note This example takes into consideration that you already installed the fdo-admin-cli RPM package. If you used the source code and compiled it, the correct path is ./target/debug/fdo-admin-tool or ./target/release/fdo-admin-tool , depending on your build options. As a result, after you install and start the service, it runs with the default settings. (JIRA:RHELPLAN-122776) 4.3. Subscription management The subscription-manager utility displays the current status of actions The subscription-manager utility now displays progress information while it is processing the current operation. This is helpful when subscription-manager takes longer than usual to complete its operations related to server communication, for example, registration. To revert to the previous behavior, enter: ( BZ#2092014 ) 4.4. Software management The modulesync command is now available to replace certain workflows in RHEL 9 In RHEL 9, modular packages cannot be installed without modular metadata. Previously, you could use the dnf command to download packages, and then use the createrepo_c command to redistribute those packages. This enhancement introduces the modulesync command to ensure the presence of modular metadata, which ensures package installability. This command downloads RPM packages from modules and creates a repository with modular metadata in a working directory. (BZ#2066646) 4.5. Shells and command-line tools Cronie adds support for a randomized time within a selected range The Cronie utility now supports the ~ (random within range) operator for cronjob execution. As a result, you can start a cronjob at a randomized time within the selected range. ( BZ#2090691 ) ReaR adds new variables for executing commands before and after recovery With this enhancement, ReaR introduces two new variables for easier automation of commands to be executed before and after recovery: PRE_RECOVERY_COMMANDS accepts an array of commands. These commands will be executed before recovery starts. POST_RECOVERY_COMMANDS accepts an array of commands. These commands will be executed after recovery finishes.
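For illustration only, a sketch of how the two arrays might be set in /etc/rear/local.conf (the commands inside the arrays are placeholders, not recommendations):
PRE_RECOVERY_COMMANDS=( 'echo "recovery is about to start"' 'mkdir -p /var/log/rear-prep' )
POST_RECOVERY_COMMANDS=( 'echo "recovery finished"' )
Each array element is executed as a separate command, in order.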
These variables are an alternative to PRE_RECOVERY_SCRIPT and POST_RECOVERY_SCRIPT with the following differences: The earlier PRE_RECOVERY_SCRIPT and POST_RECOVERY_SCRIPT variables accept a single shell command. To pass multiple commands to these variables, you must separate the commands by semicolons. The new PRE_RECOVERY_COMMANDS and POST_RECOVERY_COMMANDS variables accept arrays of commands, and each element of the array is executed as a separate command. As a result, providing multiple commands to be executed in the rescue system before and after recovery is now easier and less error-prone. For more information, see the default.conf file. ( BZ#2111059 ) A new package: xmlstarlet XMLStarlet is a set of command-line utilities for parsing, transforming, querying, validating, and editing XML files. The new xmlstarlet package provides a simple set of shell commands that you can use in a similar way as you use UNIX commands for plain text files such as grep , sed , awk , diff , patch , join , and other. (BZ#2069689) opencryptoki rebased to version 3.18.0 The opencryptoki package, which is an implementation of the Public-Key Cryptography Standard (PKCS) #11, has been updated to version 3.18.0. Notable improvements include: Default to Federal Information Processing Standards (FIPS) compliant token data format (tokversion = 3.12). Added support for restricting usage of mechanisms and keys with a global policy. Added support for statistics counting of mechanism usage. The ICA/EP11 tokens now support libica library version 4. The p11sak tool enables setting different attributes for public and private keys. The C_GetMechanismList does not return CKR_BUFFER_TOO_SMALL in the EP11 token. openCryptoki supports two different token data formats: the earlier data format, which uses non-FIPS-approved algorithms (such as DES and SHA1) the new data format, which uses FIPS-approved algorithms only. The earlier data format no longer works because the FIPS provider allows the use of only FIPS-approved algorithms. Important To make openCryptoki work on RHEL 9, migrate the tokens to use the new data format before enabling FIPS mode on the system. This is necessary because the earlier data format is still the default in openCryptoki 3.17 . Existing openCryptoki installations that use the earlier token data format will no longer function when the system is changed to FIPS-enabled. You can migrate the tokens to the new data format by using the pkcstok_migrate utility, which is provided with openCryptoki . Note that pkcstok_migrate uses non-FIPS-approved algorithms during the migration. Therefore, use this tool before enabling FIPS mode on the system. For additional information, see Migrating to FIPS compliance - pkcstok_migrate utility . (BZ#2044179) powerpc-utils rebased to version 1.3.10 The powerpc-utils package, which provides various utilities for a PowerPC platform, has been updated to version 1.3.10. Notable improvements include: Added the capability to parsing the Power architecture platform reference (PAPR) information for energy and frequency in the ppc64_cpu tool. Improved the lparstat utility to display enhanced error messages, when the lparstat -E command fails on max config systems. The lparstat command reports logical partition-related information. Fixed reported online memory in legacy format in the lparstat command. Added support for the acc command for changing the quality of service credits (QoS) dynamically for the NX GZIP accelerator. 
Added improvements to format specifiers in printf() and sprintf() calls. The hcnmgr utility, which provides the HMC tools to hybrid virtual network, includes following enhancements: Added the wicked feature to the Hybrid Network Virtualization HNV FEATURE list. The hcnmgr utility supports wicked hybrid network virtualization (HNV) to use the wicked functions for bonding. hcnmgr maintains an hcnid state for later cleanup. hcnmgr excludes NetworkManager (NM) nmcli code. The NM HNV primary slave setting was fixed. hcnmgr supports the virtual Network Interface Controller (vNIC) as a backup device. Fixed the invalid hexadecimal numbering system message in bootlist . The -l flag included in kpartx utility as -p delimiter value in the bootlist command. Fixes added to sslot utility to prevent memory leak when listing IO slots. Added the DRC type description strings for the latest peripheral component interconnect express (PCIe) slot types in the lsslot utility. Fixed the invalid config address to RTAS in errinjct tool. Added support for non-volatile memory over fabrics (NVMf) devices in the ofpathname utility. The utility provides a mechanism for converting a logical device name to an open firmware device path and the other way round. Added fixes to the non-volatile memory (NVMe) support in asymmetric namespace access (ANA) mode in the ofpathname utility. Installed smt.state file as a configuration file. (BZ#1920964) The Redfish modules are now part of the redhat.rhel_mgmt Ansible collection The redhat.rhel_mgmt Ansible collection now includes the following modules: redfish_info redfish_command redfish_config With that, users can benefit from the management automation, by using the Redfish modules to retrieve server health status, get information about hardware and firmware inventory, perform power management, change BIOS settings, configure Out-Of-Band (OOB) controllers, configure hardware RAID, and perform firmware updates. ( BZ#2112434 ) libvpd rebased to version 2.2.9 The libvpd package, which contains classes for accessing the Vital Product Data (VPD), has been updated to version 2.2.9. Notable improvements include: Fixed database locking Updated libtool utility version information (BZ#2051288) lsvpd rebased to version 1.7.14 The lsvpd package, which provides commands for constituting a hardware inventory system, has been updated to version 1.7.14. With this update, the lsvpd utility prevents corruption of the database file when you run the vpdupdate command. (BZ#2051289) ppc64-diag rebased to version 2.7.8 The ppc64-diag package for platform diagnostics has been updated to version 2.7.8. Notable improvements include: Updated build dependency to use libvpd utility version 2.2.9 or higher Fixed extract_opal_dump error message on unsupported platform Fixed build warning with GCC-8.5 and GCC-11 compilers (BZ#2051286) sysctl introduces identic syntax for arguments as systemd-sysctl The sysctl utility from the procps-ng package, which you can use to modify kernel parameters at runtime, now uses the same syntax for arguments as the systemd-sysctl utility. With this update, sysctl now parses configuration files that contain hyphens ( - ) or globs ( * ) on configuration lines. For more information about the systemd-sysctl syntax, see the sysctl.d(5) man page. ( BZ#2052536 ) Updated systemd-udevd assigns consistent network device names to InfiniBand interfaces Introduced in RHEL 9, the new version of the systemd package contains the updated systemd-udevd device manager. 
The device manager changes the default names of InfiniBand interfaces to consistent names selected by systemd-udevd . You can define custom naming rules for naming InfiniBand interfaces by following the Renaming IPoIB devices procedure. For more details of the naming scheme, see the systemd.net-naming-scheme(7) man page. ( BZ#2136937 ) 4.6. Infrastructure services chrony now uses DHCPv6 NTP servers The NetworkManager dispatcher script for chrony updates the Network time protocol (NTP) sources passed from Dynamic Host Configuration Protocol (DHCP) options. Since RHEL 9.1, the script uses NTP servers provided by DHCPv6 in addition to DHCPv4. The DHCP option 56 specifies the usage of DHCPv6, the DHCP option 42 is DHCPv4-specific. ( BZ#2047415 ) chrony rebased to version 4.2 The chrony suite has been updated to version 4.2. Notable enhancements over version 4.1 include: The server interleaved mode has been improved to be more reliable and supports multiple clients behind a single address translator (Network Address Translation - NAT). Experimental support for the Network Time Protocol Version 4 (NTPv4) extension field has been added to improve time synchronization stability and precision of estimated errors. You can enable this field, which extends the capabilities of the protocol NTPv4, by using the extfield F323 option. Experimental support for NTP forwarding over the Precision Time Protocol (PTP) has been added to enable full hardware timestamping on Network Interface Cards (NIC) that have timestamping limited to PTP packets. You can enable NTP over PTP by using the ptpport 319 directive. ( BZ#2051441 ) unbound rebased to version 1.16.2 The unbound component has been updated to version 1.16.2. unbound is a validating, recursive, and caching DNS resolver. Notable improvements include: With the ZONEMD Zone Verification with RFC 8976 support, recipients can now verify the zone contents for data integrity and origin authenticity. With unbound , you can now configure persistent TCP connections. The SVCB and HTTPS types and handling according to the Service binding and parameter specification through the DNS draft-ietf-dnsop-svcb-https document were added. unbound takes the default TLS ciphers from crypto policies. You can use a Special-Use Domain home.arpa. according to the RFC8375 . This domain is designated for non-unique use in residential home networks. unbound now supports selective enabling of tcp-upstream queries for stub or forward zones. The default of aggressive-nsec option is now yes . The ratelimit logic was updated. You can use a new rpz-signal-nxdomain-ra option for unsetting the RA flag when a query is blocked by an Unbound response policy zone (RPZ) nxdomain reply. With the basic support for Extended DNS Errors (EDE) according to the RFC8914 , you can benefit from additional error information. ( BZ#2087120 ) The password encryption function is now available in whois The whois package now provides the /usr/bin/mkpasswd binary, which you can use to encrypt a password with the crypt C library interface. ( BZ#2054043 ) frr rebased to version 8.2.2 The frr package for managing dynamic routing stack has been updated to version 8.2.2. Notable changes and enhancements over version 8.0 include: Added Ethernet VPN (EVPN) route type-5 gateway IP Overlay Index. Added Autonomous system border router (ASBR) summarization in the Open-shortest-path-first (OSPFv3) protocol. Improved usage of stub and not-so-stubby-areas (NSSA) in OSPFv3. Added the graceful restart capability in OSPFv2 and OSPFv3. 
The link bandwidth in the border gateway protocol (BGP) is now encoded according to the IEEE 754 standard. To use the encoding method, run the neighbor PEER disable-link-bw-encoding-ieee command in the existing configuration. Added the long-lived graceful restart capability in BGP. Implemented the extended administrative shutdown communication rfc9003 , and the extended optional parameters length rfc9072 in BGP. ( BZ#2069563 ) TuneD real-time profiles now auto determine initial CPU isolation setup TuneD is a service for monitoring your system and optimizing the performance profile. You can also isolate central processing units (CPUs) using the tuned-profiles-realtime package to give application threads the most execution time possible. Previously, the real-time profiles for systems running the real-time kernel did not load if you did not specify the list of CPUs to isolate in the isolated_cores parameter. With this enhancement, TuneD introduces the calc_isolated_cores built-in function that automatically calculates housekeeping and isolated cores lists, and applies the calculation to the isolated_cores parameter. With the automatic preset, one core from each socket is reserved for housekeeping, and you can start using the real-time profile without any additional steps. If you want to change the preset, customize the isolated_cores parameter by specifying the list of CPUs to isolate. ( BZ#2093847 ) 4.7. Security New packages: keylime RHEL 9.1 introduces Keylime, a tool for attestation of remote systems, which uses the trusted platform module (TPM) technology. With Keylime, you can verify and continuously monitor the integrity of remote systems. You can also specify encrypted payloads that Keylime delivers to the monitored machines, and define automated actions that trigger whenever a system fails the integrity test. See Ensuring system integrity with Keylime in the RHEL 9 Security hardening document for more information. (JIRA:RHELPLAN-92522) New option in OpenSSH supports setting the minimum RSA key length Accidentally using short RSA keys makes the system more vulnerable to attacks. With this update, you can set minimum RSA key lengths for OpenSSH servers and clients. To define the minimum RSA key length, use the new RequiredRSASize option in the /etc/ssh/sshd_config file for OpenSSH servers, and in the /etc/ssh/ssh_config file for OpenSSH clients. ( BZ#2066882 ) crypto-policies enforce 2048-bit RSA key length minimum for OpenSSH by default Using short RSA keys makes the system more vulnerable to attacks. Because OpenSSH now supports limiting minimum RSA key length, the system-wide cryptographic policies enforce the 2048-bit minimum key length for RSA by default. If you encounter OpenSSH failing connections with an Invalid key length error message, start using longer RSA keys. Alternatively, you can relax the restriction by using a custom subpolicy at the expense of security. For example, if the update-crypto-policies --show command reports that the current policy is DEFAULT : Define a custom subpolicy by inserting the min_rsa_size@openssh = 1024 parameter into the /etc/crypto-policies/policies/modules/RSA-OPENSSH-1024.pmod file. Apply the custom subpolicy using the update-crypto-policies --set DEFAULT:RSA-OPENSSH-1024 command. 
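Written out as shell commands, the two steps described above are:
# echo 'min_rsa_size@openssh = 1024' > /etc/crypto-policies/policies/modules/RSA-OPENSSH-1024.pmod
# update-crypto-policies --set DEFAULT:RSA-OPENSSH-1024
Both commands must be run as root, and the relaxed limit applies only to OpenSSH, at the expense of security.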
( BZ#2102774 ) New option in OpenSSL supports SHA-1 for signatures OpenSSL 3.0.0 in RHEL 9 does not support SHA-1 for signature creation and verification by default (SHA-1 key derivation functions (KDF) and hash-based message authentication codes (HMAC) are still supported). However, to support backwards compatibility with RHEL 8 systems that still use SHA-1 for signatures, a new configuration option rh-allow-sha1-signatures is introduced to RHEL 9. This option, if enabled in the alg_section of openssl.cnf , permits the creation and verification of SHA-1 signatures. This option is automatically enabled if the LEGACY system-wide cryptographic policy (not legacy provider) is set. Note that this also affects the installation of RPM packages with SHA-1 signatures, which may require switching to the LEGACY system-wide cryptographic policy. (BZ#2060510, BZ#2055796 ) crypto-policies now support [email protected] This update of the system-wide cryptographic policies adds support for the [email protected] key exchange (KEX) method. The post-quantum sntrup761 algorithm is already available in the OpenSSH suite, and this method provides better security against attacks from quantum computers. To enable [email protected] , create and apply a subpolicy, for example: For more information, see the Customizing system-wide cryptographic policies with subpolicies section in the RHEL 9 Security hardening document. ( BZ#2070604 ) NSS no longer support RSA keys shorter than 1023 bits The update of the Network Security Services (NSS) libraries changes the minimum key size for all RSA operations from 128 to 1023 bits. This means that NSS no longer perform the following functions: Generate RSA keys shorter than 1023 bits. Sign or verify RSA signatures with RSA keys shorter than 1023 bits. Encrypt or decrypt values with RSA key shorter than 1023 bits. ( BZ#2091905 ) SELinux policy confines additional services The selinux-policy packages have been updated, and therefore the following services are now confined by SELinux: ksm nm-priv-helper rhcd stalld systemd-network-generator targetclid wg-quick (BZ#1965013, BZ#1964862, BZ#2020169, BZ#2021131, BZ#2042614, BZ#2053639 , BZ#2111069 ) SELinux supports the self keyword in type transitions SELinux tooling now supports type transition rules with the self keyword in the policy sources. Support for type transitions with the self keyword prepares the SELinux policy for labeling of anonymous inodes. ( BZ#2069718 ) SELinux user-space packages updated SELinux user-space packages libsepol , libselinux , libsemanage , policycoreutils , checkpolicy , and mcstrans were updated to the latest upstream release 3.4. The most notable changes are: Added support for parallel relabeling through the -T option in the setfiles , restorecon , and fixfiles tools. You can either specify the number of process threads in this option or use -T 0 for using the maximum of available processor cores. This reduces the time required for relabeling significantly. Added the new --checksum option, which prints SHA-256 hashes of modules. Added new policy utilities in the libsepol-utils package. ( BZ#2079276 ) SELinux automatic relabeling is now parallel by default Because the newly introduced parallel relabeling option significantly reduces the time required for the SELinux relabeling process on multi-core systems, the automatic relabeling script now contains the -T 0 option in the fixfiles command line. 
The -T 0 option ensures that the setfiles program uses the maximum of available processor cores for relabeling by default. To use only one process thread for relabeling as in the version of RHEL, override this setting by entering either the fixfiles -T 1 onboot command instead of just fixfiles onboot or the echo "-T 1" > /.autorelabel command instead of touch /.autorelabel . ( BZ#2115242 ) SCAP Security Guide rebased to 0.1.63 The SCAP Security Guide (SSG) packages have been rebased to upstream version 0.1.63. This version provides various enhancements and bug fixes, most notably: New compliance rules for sysctl , grub2 , pam_pwquality , and build time kernel configuration were added. Rules hardening the PAM stack now use authselect as the configuration tool. Note: With this change, the rules hardening the PAM stack are not applied if the PAM stack was edited by other means. ( BZ#2070563 ) Added a maximum size option for Rsyslog error files Using the new action.errorfile.maxsize option, you can specify a maximum number of bytes of the error file for the Rsyslog log processing system. When the error file reaches the specified size, Rsyslog cannot write any additional errors or other data in it. This prevents the error file from filling up the file system and making the host unusable. ( BZ#2064318 ) clevis-luks-askpass is now enabled by default The /lib/systemd/system-preset/90-default.preset file now contains the enable clevis-luks-askpass.path configuration option and the installation of the clevis-systemd sub-package ensures that the clevis-luks-askpass.path unit file is enabled. This enables the Clevis encryption client to unlock also LUKS-encrypted volumes that mount late in the boot process. Before this update, the administrator must use the systemctl enable clevis-luks-askpass.path command to enable Clevis to unlock such volumes. ( BZ#2107078 ) fapolicyd rebased to 1.1.3 The fapolicyd packages have been upgraded to version 1.1.3. Notable improvements and bug fixes include: Rules can now contain the new subject PPID attribute, which matches the parent PID (process ID) of a subject. The OpenSSL library replaced the Libgcrypt library as a cryptographic engine for hash computations. The fagenrules --load command now works correctly. ( BZ#2100041 ) 4.8. Networking The act_ctinfo kernel module has been added This enhancement adds the act_ctinfo kernel module to RHEL. Using the ctinfo action of the tc utility, administrators can copy the conntrack mark or the value of the differentiated services code point (DSCP) of network packets into the socket buffer's mark metadata field. As a result, you can use conditions based on the conntrack mark or the DSCP value to filter traffic. For further details, see the tc-ctinfo(8) man page. (BZ#2027894) cloud-init updates network configuration at every boot on Microsoft Azure Microsoft Azure does not change the instance ID when an administrator updates the network interface configuration while a VM is offline. With this enhancement, the cloud-init service always updates the network configuration when the VM boots to ensure that RHEL on Microsoft Azure uses the latest network settings. As a consequence, if you manually configure settings on interfaces, such as an additional search domain, cloud-init may override them when you reboot the VM. For further details and a workaround, see the cloud-init-22.1-5 updates network config on every boot solution. 
( BZ#2144898 ) The PTP driver now supports virtual clocks and time stamping With this enhancement, the Precision Time Protocol (PTP) driver can create virtual PTP Hardware Clocks (PHCs) on top of a free-running PHC by writing to /sys/class/ptp/ptp*/n_vclocks . As a result, users can run multiple domain synchronization with hardware time stamps on one interface. (BZ#2066451) firewalld was rebased to version 1.1.1 The firewalld packages have been upgraded to version 1.1.1. This version provides multiple bug fixes and enhancements over the version: New features: Rich rules support NetFilter-log (NFLOG) target for user-space logging. Note that there is not any NFLOG capable logging daemon in RHEL. However, you can use the tcpdump -i nflog command to collect the logs you need. Support for port forwarding in policies with ingress-zones=HOST and egress-zones={ANY, source based zone } . Other notable changes include: Support for the afp , http3 , jellyfin , netbios-ns , ws-discovery , and ws-discovery-client services Tab-completion and sub-options in Z Shell for the policy option ( BZ#2040689 ) NetworkManager now supports advmss , rto_min , and quickack route attributes With this enhancement, administrators can configure the ipv4.routes setting with the following attributes: rto_min (TIME) - configure the minimum TCP re-transmission timeout in milliseconds when communicating with the route destination quickack (BOOL) - a per-route setting to enable or disable TCP quick ACKs advmss (NUMBER) - advertise maximum segment size (MSS) to the route destination when establishing TCP connections. If unspecified, Linux uses a default value calculated from the maximum transmission unit (MTU) of the first hop device Benefit of implementing the new functionality of ipv4.routes with the mentioned attributes is that there is no need to run the dispatcher script. Note that once you activate a connection with the mentioned route attributes, such changes are set in the kernel. (BZ#2068525) Support for the 802.ad vlan-protocol option in nmstate The nmstate API now supports creating the linux-bridge interfaces using the 802.ad vlan-protocol option. This feature enables the configuration of Service-Tag VLANs. The following example illustrates usage of this functionality in a yaml configuration file. ( BZ#2084474 ) The firewalld service can forward NAT packets originating from the local host to a different host and port You can forward packets sent from the localhost that runs the firewalld service to a different destination port and IP address. The functionality is useful, for example, to forward ports on the loopback device to a container or a virtual machine. Prior to this change, firewalld could only forward ports when it received a packet that originated from another host. For more details and an illustrative configuration, see Using DNAT to forward HTTPS traffic to a different host . ( BZ#2039542 ) NetworkManager now supports migration from ifcfg-rh to key file Users can migrate their existing connection profile files from the ifcfg-rh format to the key file format. This way, all connection profiles will be in one location and in the preferred format. The key file format has the following advantages: Closely resembles the way how NetworkManager expresses network configuration Guarantees compatibility with future RHEL releases Is easier to read Supports all connection profiles To migrate the connections, run: Note that the ifcfg-rh files will work correctly during the RHEL 9 lifetime. 
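A minimal sketch of the migration step mentioned above, using the nmcli connection migrate subcommand (run without arguments it migrates all profiles; pass specific profile names or UUIDs to limit the scope):
# nmcli connection migrate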
However, migrating the configuration to the key file format guarantees compatibility beyond RHEL 9. For more details, see the nmcli(1) , nm-settings-keyfile(5), and nm-settings-ifcfg-rh(5) manual pages. ( BZ#2059608 ) More DHCP and IPv6 auto-configuration attributes have been added to the nmstate API This enhancement adds support for the following attributes to the nmstate API: dhcp-client-id for DHCPv4 connections as described in RFC 2132 and 4361. dhcp-duid for DHCPv6 connections as described in RFC 8415. addr-gen-mode for IPv6 auto-configuration. You can set this attribute to: eui64 as described in RFC 4862 stable-privacy as described in RFC 7217 ( BZ#2082043 ) NetworkManager now clearly indicates that WEP support is not available in RHEL 9 The wpa_supplicant packages in RHEL 9.0 and later no longer contain the deprecated and insecure Wired Equivalent Privacy (WEP) security algorithm. This enhancement updates NetworkManager to reflect these changes. For example, the nmcli device wifi list command now returns WEP access points at the end of the list in gray color, and connecting to a WEP-protected network returns a meaningful error message. For secure encryption, use only wifi networks with Wi-Fi Protected Access 2 (WPA2) and WPA3 authentication. ( BZ#2030997 ) The MPTCP code has been updated The MultiPath TCP (MPTCP) code in the kernel has been updated from upstream Linux 5.19. This update provides a number of bug fixes and enhancements over the previous version: The FASTCLOSE option has been added to close MPTCP connections without a full three-way handshake. The MP_FAIL option has been added to enable fallback to TCP even after the initial handshake. The monitoring capabilities have been improved by adding additional Management Information Base (MIB) counters. Monitor support for MPTCP listener sockets has been added. Use the ss utility to monitor the sockets. (BZ#2079368) 4.9. Kernel Kernel version in RHEL 9.1 Red Hat Enterprise Linux 9.1 is distributed with the kernel version 5.14.0-162. ( BZ#2125549 ) Memory consumption of the list_lru has been optimized The internal kernel data structure, list_lru , tracks the "Least Recently Used" status of kernel inodes and directory entries for files. Previously, the number of list_lru allocated structures was directly proportional to the number of mount points and the number of present memory cgroups . Both these numbers increased with the number of running containers, leading to memory consumption of O(n^2) where n is the number of running containers. This update optimizes the memory consumption of list_lru in the system to O(n) . As a result, sufficient memory is now available for the user applications, especially on systems with a large number of running containers. (BZ#2013413) BPF rebased to Linux kernel version 5.16 The Berkeley Packet Filter (BPF) facility has been rebased to Linux kernel version 5.16 with multiple bug fixes and enhancements. The most notable changes include: Streamlined internal BPF program sections handling and bpf_program__set_attach_target() API in the libbpf userspace library. The bpf_program__set_attach_target() API sets the BTF based attach targets for BPF based programs. Added support for the BTF_KIND_TAG kind, which allows you to tag declarations. Added support for the bpf_get_branch_snapshot() helper, which enables the tracing program to capture the last branch records (LBR) from the hardware.
Added the legacy kprobe events support in the libbpf userspace library that enables kprobe tracepoint events creation through the legacy interface. Added the capability to access hardware timestamps through BPF specific structures with the __sk_buff helper function. Added support for a batched interface for RX buffer allocation in AF_XDP buffer pool, with driver support for i40e and ice . Added the legacy uprobe support in libbpf userspace library to complement recently merged legacy kprobe . Added the bpf_trace_vprintk() as variadic printk helper. Added the libbpf opt-in for stricter BPF program section name handling as part of libbpf 1.0 effort. Added the libbpf support to locate specialized maps, such as perf RB and internally delete BTF type identifiers while creating them. Added the bloomfilter BPF map type to test if an element exists in a set. Added support for kernel module function calls from BPF. Added support for typeless and weak ksym in light skeleton. Added support for the BTF_KIND_DECL_TAG kind. For more information on the full list of BPF features available in the running kernel, use the bpftool feature command. (BZ#2069045) BTF data is now located in the kernel module BPF Type Format (BTF) is the metadata format that encodes the debug information related to BPF program and map. Previously, the BTF data for kernel modules was stored in the kernel-debuginfo package. As a consequence, it was necessary to install the corresponding kernel-debuginfo package in order to use BTF for kernel modules. With this update, the BTF data is now located directly in the kernel module. As a result, you do not need to install any additional packages for BTF to work. (BZ#2097188) The kernel-rt source tree has been updated to RHEL 9.1 tree The kernel-rt sources have been updated to use the latest Red Hat Enterprise Linux kernel source tree. The real-time patch set has also been updated to the latest upstream version, v5.15-rt . These updates provide a number of bug fixes and enhancements. (BZ#2061574) Dynamic preemptive scheduling enabled on ARM and AMD and Intel 64-bit architectures RHEL 9 provides the dynamic scheduling feature on the ARM and AMD and Intel 64-bit architectures. This enhancement enables changing the preemption mode of the kernel at boot or runtime instead of the compile time. The /sys/kernel/debug/sched/preempt file contains the current setting and allows runtime modification. Using the DYNAMIC_PREEMPT option, you can set the preempt= variable at boot time to either none , voluntary or full with voluntary preemption being the default. Using dynamic preemptive handling, you can override the default preemption model to improve scheduling latency. (BZ#2065226) stalld rebased to version 1.17 The stalld program, which provides the stall daemon, is a mechanism to prevent the starvation state of operating system threads in a Linux system. This version monitors the threads for the starvation state. Starvation occurs when a thread is on a CPU run queue for longer than the starvation threshold. This stalld version includes many improvements and bug fixes over the version. The notable change includes the capability to detect runnable dying tasks. When stalld detects a starving thread, the program changes the scheduling class of the thread to the SCHED_DEADLINE policy, which gives the thread a small slice of time for the specified CPU to run the thread. When the timeslice is used, the thread returns to its original scheduling policy and stalld continues to monitor the thread states. 
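If you want to try the updated daemon, it is typically managed as a systemd service; a minimal sketch, assuming the unit name stalld.service provided by the stalld package:
# systemctl enable --now stalld
# journalctl -u stalld
The second command shows the daemon's log, including messages about threads it has boosted.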
( BZ#2107275 ) The tpm2-tools package has been rebased to version tpm2-tools-5.2-1 The tpm2-tools package has been rebased to version tpm2-tools-5.2-1 . This upgrade provides many significant enhancements and bug fixes. The most notable changes include: Adds support for public-key output at primary object creation using the tpm2_createprimary and tpm2_create tools. Adds support for the tpm2_print tool to print public-key output formats. tpm2_print decodes a Trusted Platform Module (TPM) data structure and prints enclosed elements. Adds support to the tpm2_eventlog tool for reading logs larger than 64 KB. Adds the tpm2_sessionconfig tool to support displaying and configuring session attributes. For more information on notable changes, see the /usr/share/doc/tpm2-tools/Changelog.md file. (BZ#2090748) Intel E800 devices now support iWARP and RoCE protocols With this enhancement, you can now use the enable_iwarp and enable_roce devlink parameters to turn on and off iWARP or RoCE protocol support. With this mandatory feature, you can configure the device with one of the protocols. The Intel E800 devices do not support both protocols simultaneously on the same port. To enable or disable the iWARP protocol for a specific E800 device, first obtain the PCI location of the card: Then enable or disable the protocol. You can use pci/0000:44:00.0 for the first port and pci/0000:44:00.1 for the second port of the card as the argument to the devlink command. To enable or disable the RoCE protocol for a specific E800 device, obtain the PCI location of the card as shown above. Then use one of the following commands: (BZ#2096127) 4.10. Boot loader GRUB is signed by new keys For security reasons, GRUB is now signed by new keys. As a consequence, you need to update the RHEL firmware to version FW1010.30 (or later) or FW1020 to be able to boot the little-endian variant of IBM Power Systems with the Secure Boot feature enabled. (BZ#2074761) Configurable disk access retries when booting a VM on IBM POWER You can now configure how many times the GRUB boot loader retries accessing a remote disk when a logical partition ( lpar ) virtual machine (VM) boots on the IBM POWER architecture. Lowering the number of retries can prevent a slow boot in certain situations. Previously, GRUB retried accessing disks 20 times when disk access failed at boot. This caused problems if you performed a Live Partition Mobility (LPM) migration on an lpar system that connected to slow Storage Area Network (SAN) disks. As a consequence, the boot might have taken a very long time on the system until the 20 retries finished. With this update, you can now configure and decrease the number of disk access retries using the ofdisk_retries GRUB option. For details, see Configure disk access retries when booting a VM on IBM POWER . As a result, the lpar boot is no longer slow after LPM on POWER, and the lpar system boots without the failed disks. ( BZ#2070725 ) 4.11. File systems and storage Stratis now enables setting the file system size upon creation You can now set the required size when creating a file system. Previously, the automatic default size was 1 TiB. With this enhancement, users can set an arbitrary file system size. The lower limit must not go below 512 MiB.
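A minimal sketch of creating a file system with an explicit size on an existing pool (the pool and file system names are illustrative, and the exact option spelling should be checked against stratis filesystem create --help):
# stratis filesystem create mypool myfs --size 10GiB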
( BZ#1990905 ) Improved overprovision management of Stratis pools With the improvements to the management of thin provisioning, you can now have improved warnings, precise allocation of space for the pool metadata, improved predictability, overall safety, and reliability of thin pool management. A new distinct mode disables overprovisioning. With this enhancement, the user can disable overprovisioning to ensure that a pool contains enough space to support all its file systems, even if these are completely full. ( BZ#2040352 ) Stratis now provides improved individual pool management You can now stop and start stopped individual Stratis pools. Previously, stratisd attempted to start all available pools for all devices it detected. This enhancement provides more flexible management of individual pools within Stratis, better debugging and recovery capabilities. The system no longer requires a reboot to perform recovery and maintenance operations for a single pool. ( BZ#2039960 ) Enabled protocol specific configuration of multipath device paths Previously due to different optimal configurations for the different protocols, it was impossible to set the configuration correctly without setting an option for each individual protocol. With this enhancement, users can now configure multipath device paths based on their path transport protocol. Use the protocol subsection of the overrides section in the /etc/multipath.conf file to correctly configure multipath device paths, based on their protocol. ( BZ#2084365 ) New libnvme feature library Previously, the NVMe storage command line interface utility ( nvme-cli ) included all of the helper functions and definitions. This enhancement brings a new libnvme library to RHEL 9.1. The library includes: Type definitions for NVMe specification structures Enumerations and bit fields Helper functions to construct, dispatch, and decode commands and payloads Utilities to connect, scan, and manage NVMe devices With this update, users do not need to duplicate the code and multiple projects and packages, such as nvme-stas , and can rely on this common library. (BZ#2099619) A new library libnvme is now available With this update, nvme-cli is divided in two different projects: * nvme-cli now only contains the code specific to the nvme tool * libnvme library now contains all type definitions for NVMe specification structures, enumerations, bit fields, helper functions to construct, dispatch, decode commands and payloads, and utilities to connect, scan, and manage NVMe devices. ( BZ#2090121 ) 4.12. High availability and clusters Support for High Availability on Red Hat OpenStack platform You can now configure a high availability cluster on the Red Hat OpenStack platform. In support of this feature, Red Hat provides the following new cluster agents: fence_openstack : fencing agent for HA clusters on OpenStack openstack-info : resource agent to configure the openstack-info cloned resource, which is required for an HA cluster on OpenStack openstack-virtual-ip : resource agent to configure a virtual IP address resource openstack-floating-ip : resource agent to configure a floating IP address resource openstack-cinder-volume : resource agent to configure a block storage resource ( BZ#2121838 ) pcs supports updating multipath SCSI devices without requiring a system restart You can now update multipath SCSI devices with the pcs stonith update-scsi-devices command. This command updates SCSI devices without causing a restart of other cluster resources running on the same node. 
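A hypothetical invocation is shown below; the fence device name, the set keyword, and the device paths are assumptions made for illustration, so check the pcs(8) man page for the exact syntax on your system:
# pcs stonith update-scsi-devices fence-scsi set /dev/disk/by-id/wwn-0x5000c50083bcfd47 /dev/disk/by-id/wwn-0x5000c50083bcfa23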
( BZ#2024522 ) Support for cluster UUID During cluster setup, the pcs command now generates a UUID for every cluster. Since a cluster name is not a unique cluster identifier, you can use the cluster UUID to identify clusters with the same name when you administer multiple clusters. You can display the current cluster UUID with the pcs cluster config [show] command. You can add a UUID to an existing cluster or regenerate a UUID if it already exists by using the pcs cluster config uuid generate command. ( BZ#2054671 ) New pcs resource config command option to display the pcs commands that re-create configured resources The pcs resource config command now accepts the --output-format=cmd option. Specifying this option displays the pcs commands you can use to re-create configured resources on a different system. ( BZ#2058251 ) New pcs stonith config command option to display the pcs commands that re-create configured fence devices The pcs stonith config command now accepts the --output-format=cmd option. Specifying this option displays the pcs commands you can use to re-create configured fence devices on a different system. ( BZ#2058252 ) Pacemaker rebased to version 2.1.4 The Pacemaker packages have been upgraded to the upstream version of Pacemaker 2.1.4. Notable changes include: The multiple-active resource parameter now accepts a value of stop_unexpected , The multiple-active resource parameter determines recovery behavior when a resource is active on more than one node when it should not be. By default, this situation requires a full restart of the resource, even if the resource is running successfully where it should be. A value of stop_unexpected for this parameter specifies that only unexpected instances of a multiply-active resource are stopped. It is the user's responsibility to verify that the service and its resource agent can function with extra active instances without requiring a full restart. Pacemaker now supports the allow-unhealthy-node resource meta-attribute. When this meta-attribute is set to true , the resource is not forced off a node due to degraded node health. When health resources have this attribute set, the cluster can automatically detect if the node's health recovers and move resources back to it. Users can now specify Access Control Lists (ACLS) for a system group using the pcs acl group command. Pacemaker previously allowed ACLs to be specified for individual users, but it is sometimes simpler and would conform better with local policies to specify ACLs for a system group, and to have them apply to all users in that group. This command was present in earlier releases but had no effect. ( BZ#2072108 ) Samba no longer automatically installed with cluster packages As of this release, installing the packages for the RHEL High Availability Add-On no longer installs the Samba packages automatically. This also allows you to remove the Samba packages without automatically removing the HA packages as well. If your cluster uses Samba resources you must now manually install them. (BZ#1826455) 4.13. Dynamic programming languages, web and database servers The nodejs:18 module stream is now fully supported The nodejs:18 module stream, previously available as a Technology Preview, is fully supported with the release of the RHSA-2022:8832 advisory. The nodejs:18 module stream now provides Node.js 18.12 , which is a Long Term Support (LTS) version. Node.js 18 included in RHEL 9.1 provides numerous new features together with bug and security fixes over Node.js 16 . 
Notable changes include: The V8 engine has been upgraded to version 10.2. The npm package manager has been upgraded to version 8.19.2. Node.js now provides a new experimental fetch API. Node.js now provides a new experimental node:test module, which facilitates the creation of tests that report results in the Test Anything Protocol (TAP) format. Node.js now prefers IPv6 addresses over IPv4. To install the nodejs:18 module stream, use: (BZ#2083072) A new module stream: php:8.1 RHEL 9.1 adds PHP 8.1 as a new php:8.1 module stream. With PHP 8.1 , you can: Define a custom type that is limited to one of a discrete number of possible values using the Enumerations (Enums) feature Declare a property with the readonly modifier to prevent modification of the property after initialization Use fibers, full-stack, interruptible functions To install the php:8.1 module stream, use: For details regarding PHP usage on RHEL 9, see Using the PHP scripting language . (BZ#2070040) A new module stream: ruby:3.1 RHEL 9.1 introduces Ruby 3.1.2 in a new ruby:3.1 module stream. This version provides a number of performance improvements, bug and security fixes, and new features over Ruby 3.0 distributed with RHEL 9.0. Notable enhancements include: The Interactive Ruby (IRB) utility now provides an autocomplete feature and a documentation dialog A new debug gem, which replaces lib/debug.rb , provides improved performance, and supports remote debugging and multi-process/multi-thread debugging The error_highlight gem now provides a fine-grained error location in the backtrace Values in the hash literal data types and keyword arguments can now be omitted The pin operator ( ^ ) now accepts an expression in pattern matching Parentheses can now be omitted in one-line pattern matching YJIT, a new experimental in-process Just-in-Time (JIT) compiler, is now available on the AMD and Intel 64-bit architectures The TypeProf For IDE utility has been introduced, which is an experimental static type analysis tool for Ruby code in IDEs The following performance improvements have been implemented in Method Based Just-in-Time Compiler (MJIT): For workloads like Rails , the default maximum JIT cache value has increased from 100 to 10000 Code compiled using JIT is no longer canceled when a TracePoint for class events is enabled Other notable changes include: The tracer.rb file has been removed Since version 4.0, the Psych YAML parser uses the safe_load method by default To install the ruby:3.1 module stream, use: (BZ#2063773) httpd rebased to version 2.4.53 The Apache HTTP Server has been updated to version 2.4.53, which provides bug fixes, enhancements, and security fixes over version 2.4.51 distributed with RHEL 9.0. Notable changes in the mod_proxy and mod_proxy_connect modules include: mod_proxy : The length limit of the name of the controller has been increased mod_proxy : You can now selectively configure timeouts for backend and frontend mod_proxy : You can now disable TCP connections redirection by setting the SetEnv proxy-nohalfclose parameter mod_proxy and mod_proxy_connect : It is forbidden to change a status code after sending it to a client In addition, a new ldap function has been added to the expression API, which can help prevent the LDAP injection vulnerability. ( BZ#2079939 ) A new default for the LimitRequestBody directive in httpd configuration To fix CVE-2022-29404 , the default value for the LimitRequestBody directive in the Apache HTTP Server has been changed from 0 (unlimited) to 1 GiB. 
On systems where the value of LimitRequestBody is not explicitly specified in an httpd configuration file, updating the httpd package sets LimitRequestBody to the default value of 1 GiB. As a consequence, if the total size of the HTTP request body exceeds this 1 GiB default limit, httpd returns the 413 Request Entity Too Large error code. If the new default allowed size of an HTTP request message body is insufficient for your use case, update your httpd configuration files within the respective context (server, per-directory, per-file, or per-location) and set your preferred limit in bytes. For example, to set a new 2 GiB limit, use: Systems already configured to use any explicit value for the LimitRequestBody directive are unaffected by this change. (BZ#2128016) New package: httpd-core Starting with RHEL 9.1, the httpd binary file with all essential files has been moved to the new httpd-core package to limit the Apache HTTP Server's dependencies in scenarios where only the basic httpd functionality is needed, for example, in containers. The httpd package now provides systemd -related files, including mod_systemd , mod_brotli , and documentation. With this change, the httpd package no longer provides the httpd Module Magic Number (MMN) value. Instead, the httpd-core package now provides the httpd-mmn value. As a consequence, fetching httpd-mmn from the httpd package is no longer possible. To obtain the httpd-mmn value of the installed httpd binary, you can use the apxs binary, which is a part of the httpd-devel package. To obtain the httpd-mmn value, use the following command: (BZ#2065677) pcre2 rebased to version 10.40 The pcre2 package, which provides the Perl Compatible Regular Expressions library v2, has been updated to version 10.40. With this update, the use of the \K escape sequence in lookaround assertions is forbidden, in accordance with the respective change in Perl 5.32 . If you rely on the behavior, you can use the PCRE2_EXTRA_ALLOW_LOOKAROUND_BSK option. Note that when this option is set, \K is accepted only inside positive assertions but is ignored in negative assertions. ( BZ#2086494 ) 4.14. Compilers and development tools The updated GCC compiler is now available for RHEL 9.1 The system GCC compiler, version 11.2.1, has been updated to include numerous bug fixes and enhancements available in the upstream GCC. The GNU Compiler Collection (GCC) provides tools for developing applications with the C, C++, and Fortran programming languages. For usage information, see Developing C and C++ applications in RHEL 9 . ( BZ#2063255 ) New GCC Toolset 12 GCC Toolset 12 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. The GCC compiler has been updated to version 12.1.1, which provides many bug fixes and enhancements that are available in upstream GCC. The following tools and versions are provided by GCC Toolset 12: Tool Version GCC 12.1.1 GDB 11.2 binutils 2.35 dwz 0.14 annobin 10.76 To install GCC Toolset 12, run the following command as root: To run a tool from GCC Toolset 12: To run a shell session where tool versions from GCC Toolset 12 override system versions of these tools: For more information, see GCC Toolset 12 . (BZ#2077465) GCC Toolset 12: Annobin rebased to version 10.76 In GCC Toolset 12, the Annobin package has been updated to version 10.76. 
Notable bug fixes and enhancements include: A new command line option for annocheck tells it to avoid using the debuginfod service, if it is unable to find debug information in another way. Using debuginfod provides annocheck with more information, but it can also cause significant slow downs in annocheck's performance if the debuginfod server is unavailable. The Annobin sources can now be built using meson and ninja rather than configure and make if desired. Annocheck now supports binaries built by the Rust 1.18 compiler. Additionally, the following known issue has been reported in the GCC Toolset 12 version of Annobin: Under some circumstances it is possible for a compilation to fail with an error message that looks similar to the following: To work around the problem, create a symbolic link in the plugin directory from annobin.so to gcc-annobin.so : Where architecture is replaced with the architecture being used: aarch64 i686 ppc64le s390x x86_64 (BZ#2077438) GCC Toolset 12: binutils rebased to version 2.38 In GCC Toolset 12, the binutils package has been updated to version 2.38. Notable bug fixes and enhancements include: All tools in the binutils package now support options to display or warn about the presence of multibyte characters. The readelf and objdump tools now automatically follow any links to separate debuginfo files by default. This behavior can be disabled by using the --debug-dump=no-follow-links option for readelf or the --dwarf=no-follow-links option for objdump . (BZ#2077445) GCC 12 and later supports _FORTIFY_SOURCE level 3 With this enhancement, users can build applications with -D_FORTIFY_SOURCE=3 in the compiler command line when building with GCC version 12 or later. _FORTIFY_SOURCE level 3 improves coverage of source code fortification, thus improving security for applications built with -D_FORTIFY_SOURCE=3 in the compiler command line. This is supported in GCC versions 12 and later and all Clang in RHEL 9 with the __builtin_dynamic_object_size builtin. ( BZ#2033683 ) DNS stub resolver option now supports no-aaaa option With this enhancement, glibc now recognizes the no-aaaa stub resolver option in /etc/resolv.conf and the RES_OPTIONS environment variable. When this option is active, no AAAA queries will be sent over the network. System administrators can disable AAAA DNS lookups for diagnostic purposes, such as ruling out that the superfluous lookups on IPv4-only networks do not contribute to DNS issues. ( BZ#2096191 ) Added support for IBM Z Series z16 The support is now available for the s390 instruction set with the IBM z16 platform. IBM z16 provides two additional hardware capabilities in glibc that are HWCAP_S390_VXRS_PDE2 and HWCAP_S390_NNPA . As a result, applications can now use these capabilities to deliver optimized libraries and functions. (BZ#2077838) Applications can use the restartable sequence features through the new glibc interfaces To accelerate the sched_getcpu function (especially on aarch64), it is necessary to use the restartable sequences (rseq) kernel feature by default in glibc . To allow applications to continuously use the shared rseq area, glibc now provides the __rseq_offset , __rseq_size and __rseq_flags symbols which were first added in glibc 2.35 upstream version. With this enhancement, the performance of the sched_getcpu function is increased and applications can now use the restartable sequence features through the new glibc interfaces. 
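One quick way to confirm that the installed glibc exports these symbols is to list the dynamic symbol table of the C library (the path below assumes a 64-bit system; adjust it for your architecture):
# nm -D /lib64/libc.so.6 | grep __rseq
The output should list __rseq_offset, __rseq_size, and __rseq_flags.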
( BZ#2085529 ) GCC Toolset 12: GDB rebased to version 11.2 In GCC Toolset 12, the GDB package has been updated to version 11.2. Notable bug fixes and enhancements include: New support for the 64-bit ARM architecture Memory Tagging Extension (MTE). See new commands with the memory-tag prefix. --qualified option for -break-insert and -dprintf-insert . This option looks for an exact match of the user's event location instead of searching in all scopes. For example, break --qualified foo will look for a symbol named foo in the global scope. Without --qualified , GDB will search all scopes for a symbol with that name. --force-condition : Any supplied condition is defined even if it is currently invalid. -break-condition --force : Likewise for the MI command. -file-list-exec-source-files accepts optional REGEXP to limit output. .gdbinit search path includes the config directory. The order is: USDXDG_CONFIG_HOME/gdb/gdbinit USDHOME/.config/gdb/gdbinit USDHOME/.gdbinit Support for ~/.config/gdb/gdbearlyinit or ~/.gdbearlyinit . -eix and -eiex early initialization file options. Terminal user interface (TUI): Support for mouse actions inside terminal user interface (TUI) windows. Key combinations that do not act on the focused window are now passed to GDB. New commands: show print memory-tag-violations set print memory-tag-violations memory-tag show-logical-tag memory-tag with-logical-tag memory-tag show-allocation-tag memory-tag check show startup-quietly and set startup-quietly : A way to specify -q or -quiet in GDB scripts. Only valid in early initialization files. show print type hex and set print type hex : Tells GDB to print sizes or offsets for structure members in hexadecimal instead of decimal. show python ignore-environment and set python ignore-environment : If enabled, GDB's Python interpreter ignores Python environment variables, much like passing -E to the Python executable. Only valid in early initialization files. show python dont-write-bytecode and set python dont-write-bytecode : If off , these commands suppress GDB's Python interpreter from writing bytecode compiled objects of imported modules, much like passing -B to the Python executable. Only valid in early initialization files. Changed commands: break LOCATION if CONDITION : If CONDITION is invalid, GDB refuses to set a breakpoint. The -force-condition option overrides this. CONDITION -force N COND : Same as the command. inferior [ID] : When ID is omitted, this command prints information about the current inferior. Otherwise, unchanged. ptype[ /FLAGS ] TYPE | EXPRESSION : Use the /x flag to use hexadecimal notation when printing sizes and offsets of struct members. Use the /d flag to do the same but using decimal. info sources : Output has been restructured. Python API: Inferior objects contain a read-only connection_num attribute. New gdb.Frame.level() method. New gdb.PendingFrame.level() method. gdb.BreakpoiontEvent emitted instead of gdb.Stop . (BZ#2077494) GDB supports Power 10 PLT instructions GDB now supports Power 10 PLT instructions. With this update, users are able to step into shared library functions and inspect stack backtraces using GDB version 10.2-10 and later. (BZ#1870017) The dyninst packaged rebased to version 12.1 The dyninst package has been rebased to version 12.1. 
Notable bug fixes and enhancements include: Initial support for glibc-2.35 multiple namespaces Concurrency fixes for DWARF parallel parsing Better support for the CUDA and CDNA2 GPU binaries Better support for IBM POWER Systems (little endian) register access Better support for PIE binaries Corrected parsing for catch blocks Corrected access to 64-bit Arm ( aarch64 ) floating point registers ( BZ#2057675 ) A new fileset /etc/profile.d/debuginfod.* Added new fileset for activating organizational debuginfod services. To get a system-wide debuginfod client activation you must add the URL to /etc/debuginfod/FOO.urls file. ( BZ#2088774 ) Rust Toolset rebased to version 1.62.1 Rust Toolset has been updated to version 1.62.1. Notable changes include: Destructuring assignment allows patterns to assign to existing variables in the left-hand side of an assignment. For example, a tuple assignment can swap to variables: (a, b) = (b, a); Inline assembly is now supported on 64-bit x86 and 64-bit ARM using the core::arch::asm! macro. See more details in the "Inline assembly" chapter of the reference, /usr/share/doc/rust/html/reference/inline-assembly.html (online at https://doc.rust-lang.org/reference/inline-assembly.html ). Enums can now derive the Default trait with an explicitly annotated #[default] variant. Mutex , CondVar , and RwLock now use a custom futex -based implementation rather than pthreads, with new optimizations made possible by Rust language guarantees. Rust now supports custom exit codes from main , including user-defined types that implement the newly-stabilized Termination trait. Cargo supports more control over dependency features. The dep: prefix can refer to an optional dependency without exposing that as a feature, and a ? only enables a dependency feature if that dependency is enabled elsewhere, like package-name?/feature-name . Cargo has a new cargo add subcommand for adding dependencies to Cargo.toml . For more details, please see the series of upstream release announcements: Announcing Rust 1.59.0 Announcing Rust 1.60.0 Announcing Rust 1.61.0 Announcing Rust 1.62.0 Announcing Rust 1.62.1 (BZ#2075337) LLVM Toolset rebased to version 14.0.6 LLVM Toolset has been rebased to version 14.0.6. Notable changes include: On 64-bit x86, support for AVX512-FP16 instructions has been added. Support for the Armv9-A, Armv9.1-A and Armv9.2-A architectures has been added. On PowerPC, added the __ibm128 type to represent IBM double-double format, also available as __attribute__((mode(IF))) . clang changes: if consteval for C++2b is now implemented. On 64-bit x86, support for AVX512-FP16 instructions has been added. Completed support of OpenCL C 3.0 and C++ for OpenCL 2021 at experimental state. The -E -P preprocessor output now always omits blank lines, matching GCC behavior. Previously, up to 8 consecutive blank lines could appear in the output. Support -Wdeclaration-after-statement with C99 and later standards, and not just C89, matching GCC's behavior. A notable use case is supporting style guides that forbid mixing declarations and code, but want to move to newer C standards. For more information, see the LLVM Toolset and Clang upstream release notes. (BZ#2061041) Go Toolset rebased to version 1.18.2 Go Toolset has been rebased to version 1.18.2. Notable changes include: The introduction of generics while maintaining backwards compatibility with earlier versions of Go. A new fuzzing library. New debug / buildinfo and net / netip packages. 
The go get tool no longer builds or installs packages. Now, it only handles dependencies in go.mod . If the main module's go.mod file specifies go 1.17 or higher, the go mod download command used without any additional arguments only downloads source code for the explicitly required modules in the main module's go.mod file. To also download source code for transitive dependencies, use the go mod download all command. The go mod vendor subcommand now supports a -o option to set the output directory. The go mod tidy command now retains additional checksums in the go.sum file for modules whose source code is required to verify that only one module in the build list provides each imported package. This change is not conditioned on the Go version in the main module's go.mod file. (BZ#2075169) A new module stream: maven:3.8 RHEL 9.1 introduces Maven 3.8 as a new module stream. To install the maven:3.8 module stream, use: (BZ#2083112) .NET version 7.0 is available Red Hat Enterprise Linux 9.1 is distributed with .NET version 7.0. Notable improvements include: Support for IBM Power ( ppc64le ) For more information, see Release Notes for .NET 7.0 RPM packages and Release Notes for .NET 7.0 containers . (BZ#2112027) 4.15. Identity Management SSSD now supports memory caching for SID requests With this enhancement, SSSD now supports memory caching for SID requests, which are GID and UID lookups by SID and vice versa. Memory caching results in improved performance, for example, when copying large amounts of files to or from a Samba server. (JIRA:RHELPLAN-123369) The ipaservicedelegationtarget and ipaservicedelegationrule Ansible modules are now available You can now use the ipaservicedelegationtarget and ipaservicedelegationrule ansible-freeipa modules to, for example, configure a web console client to allow an Identity Management (IdM) user that has authenticated with a smart card to do the following: Use sudo on the RHEL host on which the web console service is running without being asked to authenticate again. Access a remote host using SSH and access services on the host without being asked to authenticate again. The ipaservicedelegationtarget and ipaservicedelegationrule modules utilize the Kerberos S4U2proxy feature, also known as constrained delegation. IdM traditionally uses this feature to allow the web server framework to obtain an LDAP service ticket on the user's behalf. The IdM-AD trust system uses the feature to obtain a cifs principal. (JIRA:RHELPLAN-117109) SSSD support for anonymous PKINIT for FAST With this enhancement, SSSD now supports anonymous PKINIT for Flexible Authentication via Secure Tunneling (FAST), also called Kerberos armoring in Active Directory. Until now, to use FAST, a Kerberos keytab was needed to request the required credentials. You can now use anonymous PKINIT to create this credential cache to establish the FAST session. To enable anonymous PKINIT, perform the following steps: Set krb5_fast_use_anonymous_pkinit to true in the [domain] section of the sssd.conf file. Restart SSSD. In an IdM environment, you can verify that anonymous PKINIT was used to establish the FAST session by logging in as the IdM user. A cache file with the FAST ticket is created and the Default principal: WELLKNOWN/ANONYMOUS@WELLKNOWN:ANONYMOUS indicates that anonymous PKINIT was used: (JIRA:RHELPLAN-123368) IdM now supports Random Serial Numbers With this update, Identity Management (IdM) now includes dogtagpki 11.2.0 , which allows you to use Random Serial Numbers version 3 (RSNv3). 
You can enable RSNv3 by using the --random-serial-numbers option when running ipa-server-install or ipa-ca-install . With RSNv3 enabled, IdM generates fully random serial numbers for certificates and requests in PKI without range management. Using RSNv3, you can avoid range management in large IdM installations and prevent common collisions when reinstalling IdM. Important RSNv3 is supported only for new IdM installations. If enabled, it is required to use RSNv3 on all PKI services. ( BZ#747959 ) IdM now supports a limit on the number of LDAP binds allowed after a user password has expired With this enhancement, you can set the number of LDAP binds allowed when the password of an Identity Management (IdM) user has expired: -1 IdM grants the user unlimited LDAP binds before the user must reset the password. This is the default value, which matches the behavior. 0 This value disables all LDAP binds once a password is expired. In effect, the users must reset their password immediately. 1-MAXINT The value entered allows exactly that many binds post-expiration. The value can be set in the global password policy and in group policies. Note that the count is stored per server. In order for a user to reset their own password they need to bind with their current, expired password. If the user has exhausted all post-expiration binds, then the password must be administratively reset. ( BZ#2091988 ) New ipasmartcard_server and ipasmartcard_client roles With this update, the ansible-freeipa package provides Ansible roles to configure Identity Management (IdM) servers and clients for smart card authentication. The ipasmartcard_server and ipasmartcard_client roles replace the ipa-advise scripts to automate and simplify the integration. The same inventory and naming scheme are used as in the other ansible-freeipa roles. ( BZ#2076567 ) IdM now supports configuring an AD Trust with Windows Server 2022 With this enhancement, you can establish a cross-forest trust between Identity Management (IdM) domains and Active Directory forests that use Domain Controllers running Windows Server 2022. ( BZ#2122716 ) The ipa-dnskeysyncd and ipa-ods-exporter debug messages are no longer logged to /var/log/messages by default Previously, ipa-dnskeysyncd , the service that is responsible for the LDAP-to-OpenDNSSEC synchronization, and ipa-ods-exporter , the Identity Management (IdM) OpenDNSSEC exporter service, logged all debug messages to /var/log/messages by default. As a consequence, log files grew substantially. With this enhancement, you can configure the log level by setting debug=True in the /etc/ipa/dns.conf file. For more information, refer to default.conf(5) , the man page for the IdM configuration file. ( BZ#2083218 ) samba rebased to version 4.16.1 The samba packages have been upgraded to upstream version 4.16.1, which provides bug fixes and enhancements over the version: By default, the smbd process automatically starts the new samba-dcerpcd process on demand to serve Distributed Computing Environment / Remote Procedure Calls (DCERPC). Note that Samba 4.16 and later always requires samba-dcerpcd to use DCERPC. If you disable the rpc start on demand helpers setting in the [global] section in the /etc/samba/smb.conf file, you must create a systemd service unit to run samba-dcerpcd in standalone mode. The Cluster Trivial Database (CTDB) recovery master role has been renamed to leader . 
As a result, the following ctdb sub-commands have been renamed: recmaster to leader setrecmasterrole to setleaderrole The CTDB recovery lock configuration has been renamed to cluster lock . CTDB now uses leader broadcasts and an associated timeout to determine if an election is required. Note that the server message block version 1 (SMB1) protocol is deprecated since Samba 4.11 and will be removed in a future release. Back up the database files before starting Samba. When the smbd , nmbd , or winbind services start, Samba automatically updates its tdb database files. Note that Red Hat does not support downgrading tdb database files. After updating Samba, verify the /etc/samba/smb.conf file using the testparm utility. For further information about notable changes, read the upstream release notes before updating. ( BZ#2077487 ) SSSD now supports direct integration with Windows Server 2022 With this enhancement, you can use SSSD to directly integrate your RHEL system with Active Directory forests that use Domain Controllers running Windows Server 2022. ( BZ#2070793 ) Improved SSSD multi-threaded performance Previously, SSSD serialized parallel requests from multi-threaded applications, such as Red Hat Directory Server and Identity Management. This update fixes all SSSD client libraries, such as nss and pam , so they do not serialize requests, therefore allowing requests from multiple threads to be executed in parallel for better performance. To enable the behavior of serialization, set the environment variable SSS_LOCKFREE to NO . (BZ#1978119) Directory Server now supports canceling the Auto Membership plug-in task. Previously, the Auto Membership plug-in task could generate high CPU usage on the server if Directory Server has complex configuration (large groups, complex rules and interaction with other plugins). With this enhancement, you can cancel the Auto Membership plug-in task. As a result, performance issues no longer occur. ( BZ#2052527 ) Directory Server now supports recursive delete operations when using ldapdelete With this enhancement, Directory Server now supports the Tree Delete Control [1.2.840.113556.1.4.805] OpenLDAP control. As a result, you can use the ldapdelete utility to recursively delete subentries of a parent entry. ( BZ#2057063 ) You can now set basic replication options during the Directory Server installation With this enhancement, you can configure basic replication options like authentication credentials and changelog trimming during an instance installation using an .inf file. ( BZ#2057066 ) Directory Server now supports instance creation by a non-root user Previously, non-root users were not able to create Directory Server instances. With this enhancement, a non-root user can use the dscreate ds-root subcommand to configure an environment where dscreate , dsctl , dsconf commands are used as usual to create and administer Directory Server instances. ( BZ#1872451 ) pki packages renamed to idm-pki The following pki packages are now renamed to idm-pki to better distinguish between IDM packages and Red Hat Certificate System ones: idm-pki-tools idm-pki-acme idm-pki-base idm-pki-java idm-pki-ca idm-pki-kra idm-pki-server python3-idm-pki ( BZ#2139877 ) 4.16. Graphics infrastructures Wayland is now enabled with Matrox GPUs The desktop session now enables the Wayland back end with Matrox GPUs. In releases, Wayland was disabled with Matrox GPUs due to performance and other limitations. These problems have now been fixed. 
You can still switch the desktop session from Wayland back to Xorg. For more information, see Overview of GNOME environments . ( BZ#2097308 ) 12th generation Intel Core GPUs are now supported This release adds support for several integrated GPUs for the 12th Gen Intel Core CPUs. This includes Intel UHD Graphics and Intel Xe integrated GPUs found with the following CPU models: Intel Core i3 12100T through Intel Core i9 12900KS Intel Pentium Gold G7400 and G7400T Intel Celeron G6900 and G6900T Intel Core i5-12450HX through Intel Core i9-12950HX Intel Core i3-1220P through Intel Core i7-1280P (JIRA:RHELPLAN-135601) Support for new AMD GPUs This release adds support for several AMD Radeon RX 6000 Series GPUs and integrated graphics of the AMD Ryzen 6000 Series CPUs. The following AMD Radeon RX 6000 Series GPU models are now supported: AMD Radeon RX 6400 AMD Radeon RX 6500 XT AMD Radeon RX 6300M AMD Radeon RX 6500M AMD Ryzen 6000 Series includes integrated GPUs found with the following CPU models: AMD Ryzen 5 6600U AMD Ryzen 5 6600H AMD Ryzen 5 6600HS AMD Ryzen 7 6800U AMD Ryzen 7 6800H AMD Ryzen 7 6800HS AMD Ryzen 9 6900HS AMD Ryzen 9 6900HX AMD Ryzen 9 6980HS AMD Ryzen 9 6980HX (JIRA:RHELPLAN-135602) 4.17. The web console Update progress page in the web console now supports an automatic restart option The update progress page now has a Reboot after completion switch. This reboots the system automatically after installing the updates. ( BZ#2056786 ) 4.18. Red Hat Enterprise Linux system roles The network RHEL system role supports network configuration using the nmstate API With this update, the network RHEL system role supports network configuration through the nmstate API. Users can now directly apply the configuration of the required network state to a network interface instead of creating connection profiles. The feature also allows partial configuration of a network. As a result, the following benefits exist: decreased network configuration complexity reliable way to apply the network state changes no need to track the entire network configuration ( BZ#2072385 ) Users can create connections with IPoIB capability using the network RHEL system role The infiniband connection type of the network RHEL system role now supports the Internet Protocol over Infiniband (IPoIB) capability. To enable this feature, define a value to the p_key option of infiniband . Note that if you specify p_key , the interface_name option of the network_connections variable must be left unset. The implementation of the network RHEL system role did not properly validate the p_key value and the interface_name option for the infiniband connection type. Therefore, the IPoIB functionality never worked before. For more information, see a README file in the /usr/share/doc/rhel-system-roles/network/ directory. ( BZ#2086965 ) HA Cluster RHEL system role now supports SBD fencing and configuration of Corosync settings The HA Cluster system role now supports the following features: SBD fencing Fencing is a crucial part of HA cluster configuration. SBD provides a means for nodes to reliably self-terminate when fencing is required. SBD fencing can be particularly useful in environments where traditional fencing mechanisms are not possible. It is now possible to configure SBD fencing with the HA Cluster system role. Corosync settings The HA Cluster system role now supports the configuration of Corosync settings, such as transport, compression, encryption, links, totem, and quorum. 
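The following is a minimal playbook sketch of how SBD fencing and a few Corosync options might be expressed with this role; the variable names and structure ( ha_cluster_sbd_enabled , ha_cluster_transport , ha_cluster_totem ) are assumptions based on the role's upstream documentation, not values taken from this text, and required settings such as the cluster password are omitted for brevity:

- hosts: cluster_nodes
  roles:
    - rhel-system-roles.ha_cluster
  vars:
    ha_cluster_cluster_name: my-cluster        # hypothetical cluster name
    ha_cluster_sbd_enabled: true               # enable SBD fencing
    ha_cluster_transport:                      # Corosync transport settings
      type: knet
      crypto:
        - name: cipher
          value: aes256
        - name: hash
          value: sha256
    ha_cluster_totem:                          # Corosync totem settings
      options:
        - name: token
          value: "5000"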
These settings are required to match cluster configuration with customers' needs and environment when the default settings are not suitable. ( BZ#2065337 , BZ#2070452 , BZ#2079626 , BZ#2098212 , BZ#2120709 , BZ#2120712 ) The network RHEL role now configures network settings for routing rules Previously, you could route the packet based on the destination address field in the packet, but you could not define the source routing and other policy routing rules. With this enhancement, network RHEL role supports routing rules so that the users have control over the packet transmission or route selection. ( BZ#2079622 ) The new :replaced configuration enables firewall system role to reset the firewall settings to default System administrators who manage different sets of machines, where each machine has different pre-existing firewall settings, can now use the : replaced configuration in the firewall role to ensure that all machines have the same firewall configuration settings. The : replaced configuration can erase all the existing firewall settings and replace them with consistent settings. ( BZ#2043010 ) New option in the postfix RHEL system role for overwriting configuration If you manage a group of systems which have inconsistent postfix configurations, you may want to make the configuration consistent on all of them. With this enhancement, you can specify the : replaced option within the postfix_conf dictionary to remove any existing configuration and apply the desired configuration on top of a clean postfix installation. As a result, you can erase any existing postfix configuration and ensure consistency on all the systems being managed. ( BZ#2065383 ) Enhanced microsoft.sql.server RHEL system role The following new variables are now available for the microsoft.sql.server RHEL system role: Variables with the mssql_ha_ prefix to control configuring a high availability cluster. The mssql_tls_remote_src variable to search for mssql_tls_cert and mssql_tls_private_key values on managed nodes. If you keep the default false setting, the role searches for these files on the control node. The mssql_manage_firewall variable to manage firewall ports automatically. If this variable is set to false , you must enable firewall ports manually. The mssql_pre_input_sql_file and mssql_post_input_sql_file variables to control whether you want to run the SQL scripts before the role execution or after it. These new variables replace the former mssql_input_sql_file variable, which did not allow you to influence the time of SQL script execution. ( BZ#2066337 ) The logging RHEL system role supports options startmsg.regex and endmsg.regex in files inputs With this enhancement, you can now filter log messages coming from files by using regular expressions. Options startmsg_regex and endmsg_regex are now included in the files' input. The startmsg_regex represents the regular expression that matches the start part of a message, and the endmsg_regex represents the regular expression that matches the last part of a message. As a result, you can now filter messages based upon properties such as date-time, priority, and severity. ( BZ#2112145 ) The sshd RHEL system role verifies the include directive for the drop-in directory The sshd RHEL system role on RHEL 9 manages only a file in the drop-in directory, but previously did not verify that the directory is included from the main sshd_config file. With this update, the role verifies that sshd_config contains the include directive for the drop-in directory. 
As a result, the role more reliably applies the provided configuration. ( BZ#2052081 ) The sshd RHEL system role can be managed through /etc/ssh/sshd_config The sshd RHEL system role applied to a RHEL 9 managed node places the SSHD configuration in a drop-in directory ( /etc/ssh/sshd_config.d/00-ansible_system_role.conf by default). Previously, any changes to the /etc/ssh/sshd_config file overwrote the default values in 00-ansible_system_role.conf . With this update, you can manage SSHD by using /etc/ssh/sshd_config instead of 00-ansible_system_role.conf while preserving the system default values in 00-ansible_system_role.conf . ( BZ#2052086 ) The metrics role consistently uses "Ansible_managed" comment in its managed configuration files With this update, the metrics role inserts the "Ansible managed" comment to the configuration files, using the Ansible standard ansible_managed variable. The comment indicates that the configuration files should not be directly edited because the metrics role can overwrite the file. As a result, the configuration files contain a declaration stating that the configuration files are managed by Ansible. ( BZ#2065392 ) The storage RHEL system role now supports managing the pool members The storage RHEL system role can now add or remove disks from existing LVM pools without removing the pool first. To increase the pool capacity, the storage RHEL system role can add new disks to the pool and free currently allocated disks in the pool for another use. ( BZ#2072742 ) Support for thinly provisioned volumes is now available in the storage RHEL system role The storage RHEL system role can now create and manage thinly provisioned LVM logical volumes (LVs). Thin provisioned LVs are allocated as they are written, allowing better flexibility when creating volumes as physical storage provided for thin provisioned LVs can be increased later as the need arises. LVM thin provisioning also allows creating more efficient snapshots because the data blocks common to a thin LV and any of its snapshots are shared. ( BZ#2072745 ) Better support for cached volumes is available in the storage RHEL system role The storage RHEL system role can now attach cache to existing LVM logical volumes. LVM cache can be used to improve performance of slower logical volumes by temporarily storing subsets of an LV's data on a smaller, faster device, for example an SSD. This enhances the previously added support for creating cached volumes by allowing adding (attaching) a cache to an existing, previously uncached volume. ( BZ#2072746 ) The logging RHEL system role now supports template , severity and facility options The logging RHEL system role now features new useful severity and facility options to the files inputs as well as a new template option to the files and forwards outputs. Use the template option to specify the traditional time format by using the parameter traditional , the syslog protocol 23 format by using the parameter syslog , and the modern style format by using the parameter modern . As a result, you can now use the logging role to filter by the severity and facility as well as to specify the output format by template. ( BZ#2075119 ) RHEL system roles now available also in playbooks with fact gathering disabled Ansible fact gathering might be disabled in your environment for performance or other reasons. Previously, it was not possible to use RHEL system roles in such configurations. 
With this update, the system detects the ANSIBLE_GATHERING=explicit parameter in your configuration and gather_facts: false parameter in your playbooks, and use the setup: module to gather only the facts required by the given role, if not available from the fact cache. Note If you have disabled Ansible fact gathering due to performance, you can enable Ansible fact caching instead, which does not cause a performance hit of retrieving them from source. ( BZ#2078989 ) The storage role now has less verbosity by default The storage role output is now less verbose by default. With this update, users can increase the verbosity of storage role output to only produce debugging output if they are using Ansible verbosity level 1 or above. ( BZ#2079627 ) The firewall RHEL system role does not require the state parameter when configuring masquerade or icmp_block_inversion When configuring custom firewall zones, variables masquerade and icmp_block_inversion are boolean settings. A value of true implies state: present and a value of false implies state: absent . Therefore, the state parameter is not required when configuring masquerade or icmp_block_inversion . ( BZ#2093423 ) You can now add, update, or remove services using absent and present states in the firewall RHEL system role With this enhancement, you can use the present state to add ports, modules, protocols, services, and destination addresses, or use the absent state to remove them. Note that to use the absent and present states in the firewall RHEL system role, set the permanent option to true . With the permanent option set to true , the state settings apply until changed, and remain unaffected by role reloads. ( BZ#2100292 ) The firewall system role can add or remove an interface to the zone using PCI device ID Using the PCI device ID, the firewall system role can now assign or remove a network interface to or from a zone. Previously, if only the PCI device ID was known instead of the interface name, users had to first identify the corresponding interface name to use the firewall system role. With this update, the firewall system role can now use the PCI device ID to manage a network interface in a zone. ( BZ#2100942 ) The firewall RHEL system role can provide Ansible facts With this enhancement, you can now gather the firewall RHEL system role's Ansible facts from all of your systems by including the firewall: variable in the playbook with no arguments. To gather a more detailed version of the Ansible facts, use the detailed: true argument, for example: ( BZ#2115154 ) Added setting of seuser and selevel to the selinux RHEL system role Sometimes, it is necessary to set seuser and selevel parameters when setting SELinux context file system mappings. With this update, you can use the seuser and selevel optional arguments in selinux_fcontext to specify SELinux user and level in the SELinux context file system mappings. ( BZ#2115157 ) New cockpit system role variable for setting a custom listening port The cockpit system role introduces the cockpit_port variable that allows you to set a custom listening port other than the default 9090 port. Note that if you decide to set a custom listening port, you will also need to adjust your SELinux policy to allow the web console to listen on that port. ( BZ#2115152 ) The metrics role can export postfix performance data You can now use the new metrics_from_postfix boolean variable in the metrics role for recording and detailed performance analysis. 
With this enhancement, setting the variable enables the pmdapostfix metrics agent on the system, making statistics about postfix available. ( BZ#2051737 ) The postfix role consistently uses "Ansible_managed" comment in its managed configuration files The postfix role generates the /etc/postfix/main.cf configuration file. With this update, the postfix role inserts the "Ansible managed" comment to the configuration files, using the Ansible standard ansible_managed variable. The comment indicates that the configuration files should not be directly edited because the postfix role can overwrite the file. As a result, the configuration files contain a declaration stating that the configuration files are managed by Ansible. ( BZ#2065393 ) The nbde-client RHEL system role supports static IP addresses In versions of RHEL, restarting a system with a static IP address and configured with the nbde_client RHEL system role changed the system's IP address. With this update, systems with static IP addresses are supported by the nbde_client role, and their IP addresses do not change after a reboot. Note that by default, the nbde_client role uses DHCP when booting, and switches to the configured static IP after the system is booted. (BZ#2070462) 4.19. Virtualization RHEL web console now features RHEL as an option for the Download an OS VM workflow With this enhancement, the RHEL web console now supports the installation of RHEL virtual machines (VMs) using the default Download an OS workflow. As a result, you can download and install the RHEL OS as a VM directly within the web console. (JIRA:RHELPLAN-121982) Improved KVM architectural compliance With this update, the architectural compliance of the KVM hypervisor has now been enhanced and made stricter. As a result, the hypervisor is now better prepared to address future changes to Linux-based and other operating systems. (JIRA:RHELPLAN-117713) ap-check is now available in RHEL 9 The mdevctl tool now provides a new ap-check support utility. You can use mdevctl to persistently configure cryptographic adapters and domains that are allowed for pass-through usage into virtual machines as well as the matrix and vfio-ap devices. With mdevctl , you do not have to reconfigure these adapters, domains, and devices after every IPL. In addition, mdevctl prevents the distributor from inventing other ways to reconfigure them. When invoking mdevctl commands for vfio-ap devices, the new ap-check support utility is invoked as part of the mdevctl command to perform additional validity checks against vfio-ap device configurations. In addition, the chzdev tool now provides the ability to manage the system-wide Adjunct Processor (AP) mask settings, which determine what AP resources are available for vfio-ap devices. When used, chzdev makes it possible to persist these settings by generating an associated udev rule. Using lszdev , you can can now also query the system-wide AP mask settings. (BZ#1870699) open-vm-tools rebased to 12.0.5 The open-vm-tools packages have been upgraded to version 12.0.5, which introduces a number of bug fixes and new features. Most notably, support has been added for the Salt Minion tool to be managed through guest OS variables. (BZ#2061193) Selected VMs on IBM Z can now boot with kernel command lines longer than 896 bytes Previously, booting a virtual machine (VM) on a RHEL 9 IBM Z host always failed if the kernel command line of the VM was longer than 896 bytes. 
With this update, the QEMU emulator can handle kernel command lines longer than 896 bytes. As a result, you can now use QEMU direct kernel boot for VMs with very long kernel command lines, if the VM kernel supports it. Specifically, to use a command line longer than 896 bytes, the VM must use Linux kernel version 5.16-rc1 or later. (BZ#2044218) The Secure Execution feature on IBM Z now supports remote attestation The Secure Execution feature on the IBM Z architecture now supports remote attestation. The pvattest utility can create a remote attestation request to verify the integrity of a guest that has Secure Execution enabled. Additionally, it is now possible to inject interrupts to guests with Secure Execution through the use of GISA. (BZ#2001936, BZ#2044300) VM memory preallocation using multiple threads You can now define multiple CPU threads for virtual machine (VM) memory allocation in the domain XML configuration, for example as follows: This ensures that more than one thread is used for allocating memory pages when starting a VM. As a result, VMs with multiple allocation threads configured start significantly faster, especially if the VMs has large amounts of RAM assigned and backed by hugepages. (BZ#2064194) RHEL 9 guests now support SEV-SNP On virtual machines (VMs) that use RHEL 9 as a guest operating system, you can now use AMD Secure Encrypted Virtualization (SEV) with the Secure Nested Paging (SNP) feature. Among other benefits, SNP enhances SEV by improving its memory integrity protection, which helps prevent hypervisor-based attacks such as data replay or memory re-mapping. Note that for SEV-SNP to work on a RHEL 9 VM, the host running the VM must support SEV-SNP as well. (BZ#2169738) 4.20. RHEL in cloud environments New SSH module for cloud-init With this update, an SSH module has been added to the cloud-init utility, which automatically generates host keys during instance creation. Note that with this change, the default cloud-init configuration has been updated. Therefore, if you had a local modification, make sure the /etc/cloud/cloud.cfg contains "ssh_genkeytypes: ['rsa', 'ecdsa', 'ed25519']" line. Otherwise, cloud-init creates an image which fails to start the sshd service. If this occurs, do the following to work around the problem: Make sure the /etc/cloud/cloud.cfg file contains the following line: Check whether /etc/ssh/ssh_host_* files exist in the instance. If the /etc/ssh/ssh_host_* files do not exist, use the following command to generate host keys: Restart the sshd service: (BZ#2115791) 4.21. Containers The Container Tools packages have been updated The Container Tools packages which contain the Podman, Buildah, Skopeo, crun, and runc tools are now available. This update provides a list of bug fixes and enhancements over the version. Notable changes include: The podman pod create command now supports setting the CPU and memory limits. You can set a limit for all containers in the pod, while individual containers within the pod can have their own limits. The podman pod clone command creates a copy of an existing pod. The podman play kube command now supports the security context settings using the BlockDevice and CharDevice volumes. Pods created by the podman play kube can now be managed by systemd unit files using a podman-kube@<service>.service (for example systemctl --user start podman-play-kube@USD(systemd-escape my.yaml).service ). The podman push and podman push manifest commands now support the sigstore signatures. 
The Podman networks can now be isolated by using the podman network --opt isolate command. Podman has been upgraded to version 4.2, for further information about notable changes, see the upstream release notes . (JIRA:RHELPLAN-118462) GitLab Runner is now available on RHEL using Podman Beginning with GitLab Runner 15.1, you can use Podman as the container runtime in the GitLab Runner Docker Executor. For more details, see GitLab's Release Note . (JIRA:RHELPLAN-101140) Podman now supports the --health-on-failure option The podman run and podman create commands now support the --health-on-failure option to determine the actions to be performed when the status of a container becomes unhealthy. The --health-on-failure option supports four actions: none : Take no action, this is the default action. kill : Kill the container. restart : Restart the container. stop : Stop the container. Note Do not combine the restart action with the --restart option. When running inside of a systemd unit, consider using the kill or stop action instead to make use of systemd's restart policy. ( BZ#2097708 ) Netavark network stack is now available The Netavark stack is a network configuration tool for containers. In RHEL 9, the Netavark stack is fully supported and enabled by default. This network stack has the following capabilities: Configuration of container networks using the JSON configuration file Creating, managing, and removing network interfaces, including bridge and MACVLAN interfaces Configuring firewall settings, such as network address translation (NAT) and port mapping rules IPv4 and IPv6 Improved capability for containers in multiple networks Container DNS resolution using the aardvark-dns project Note You have to use the same version of Netavark stack and the aardvark-dns authoritative DNS server. (JIRA:RHELPLAN-132023) New package: catatonit in the CRB repository A new catatonit package is now available in the CodeReady Linux Builder (CRB) repository. The catatonit package is used as a minimal init program for containers and can be included within the application container image. Note that packages included in the CodeReady Linux Builder repository are unsupported. Note that since RHEL 9.0, the podman-catonit package is available in the AppStream repository. The podman-catatonit package is used only by the Podman tool. (BZ#2074193)
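As a concrete illustration of the --health-on-failure option described above, the following hedged example runs a container with a health check and tells Podman to kill it when the check turns unhealthy; the image, container name, and health command are illustrative placeholders, not values taken from this text:

podman run -d --name web \
  --health-cmd 'curl -fs http://localhost:8080/ || exit 1' \
  --health-interval 30s \
  --health-retries 3 \
  --health-on-failure kill \
  registry.access.redhat.com/ubi9/httpd-24

If the health command fails three consecutive times, the container is marked unhealthy and Podman kills it; when the container runs under a systemd unit, the kill or stop action can then be combined with systemd's restart policy, as noted above.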
|
[
"[[customizations.filesystem]] mountpoint = \"/boot\" size = \"20 GiB\"",
"grub2-editenv - unset menu_auto_hide",
"mkdir keys for i in \"diun\" \"manufacturer\" \"device_ca\" \"owner\"; do fdo-admin-tool generate-key-and-cert USDi; done ls keys device_ca_cert.pem device_ca_key.der diun_cert.pem diun_key.der manufacturer_cert.pem manufacturer_key.der owner_cert.pem owner_key.der",
"subscription-manager config --rhsm.progress_messages=0",
"echo 'key_exchange = +SNTRUP' > /etc/crypto-policies/policies/modules/SNTRUP.pmod update-crypto-policies --set DEFAULT:SNTRUP",
"--- interfaces: - name: br0 type: linux-bridge state: up bridge: options: vlan-protocol: 802.1ad port: - name: eth1 vlan: mode: trunk trunk-tags: - id: 500",
"nmcli connection migrate",
"lspci | awk '/E810/ {print USD1}' 44:00.0 44:00.1 USD",
"devlink dev param set pci/0000:44:00.0 name enable_iwarp value true cmode runtime devlink dev param set pci/0000:44:00.0 name enable_iwarp value false cmode runtime",
"devlink dev param set pci/0000:44:00.0 name enable_roce value true cmode runtime devlink dev param set pci/0000:44:00.0 name enable_roce value false cmode runtime",
"dnf module install nodejs:18",
"dnf module install php:8.1",
"dnf module install ruby:3.1",
"LimitRequestBody 2147483648",
"apxs -q HTTPD_MMN 20120211",
"dnf install gcc-toolset-12",
"scl enable gcc-toolset-12 tool",
"scl enable gcc-toolset-12 bash",
"cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory",
"cd /opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin ln -s annobin.so gcc-annobin.so",
"dnf module install maven:3.8",
"klist /var/lib/sss/db/fast_ccache_IPA.VM Ticket cache: FILE:/var/lib/sss/db/fast_ccache_IPA.VM Default principal: WELLKNOWN/ANONYMOUS@WELLKNOWN:ANONYMOUS Valid starting Expires Service principal 03/10/2022 10:33:45 03/10/2022 10:43:45 krbtgt/[email protected]",
"vars: firewall: detailed: true",
"<memoryBacking> <allocation threads='8'/> </memoryBacking>",
"ssh_genkeytypes: ['rsa', 'ecdsa', 'ed25519']",
"cloud-init single --name cc_ssh",
"systemctl restart sshd"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.1_release_notes/new-features
|
Installing the automation services catalog worker
|
Installing the automation services catalog worker Red Hat Ansible Automation Platform 2.3 Extend your Red Hat Ansible Automation Platform to connect with automation services catalog on cloud.redhat.com using the Ansible Automation Platform 2.0 Setup or Setup Bundle Installers Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/installing_the_automation_services_catalog_worker/index
|
Chapter 1. About Red Hat Insights
|
Chapter 1. About Red Hat Insights Red Hat Insights is a Software-as-a-Service (SaaS) application included with almost every subscription to Red Hat Enterprise Linux, Red Hat OpenShift, and Red Hat Ansible Automation Platform. Powered by predictive analytics, Red Hat Insights gets smarter with every additional piece of intelligence and data. It can automatically discover relevant insights, recommend tailored, proactive actions, and even automate tasks. Using Red Hat Insights, customers can benefit from the experience and technical knowledge of Red Hat Certified Engineers, making it easier to identify, prioritize, and resolve issues before business operations are affected. As a SaaS offering, located at Red Hat Hybrid Cloud Console , Red Hat Insights is regularly updated. Regular updates expand the Insights knowledge archive in real time to reflect new IT challenges that can impact the stability of mission-critical systems.
| null |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/getting_started_with_red_hat_insights/release-notes-insights
|
Chapter 5. Customizing the Tech Radar page in Red Hat Developer Hub
|
Chapter 5. Customizing the Tech Radar page in Red Hat Developer Hub In Red Hat Developer Hub, the Tech Radar page is provided by the tech-radar dynamic plugin, which is disabled by default. For information about enabling dynamic plugins in Red Hat Developer Hub, see Configuring plugins in Red Hat Developer Hub . In Red Hat Developer Hub, you can configure the Tech Radar page by passing the data into the app-config.yaml file as a proxy. The base Tech Radar URL must include the /developer-hub/tech-radar proxy. Note Due to the use of overlapping pathRewrites for both the tech-radar and homepage quick access proxies, you must create the tech-radar configuration ( ^/api/proxy/developer-hub/tech-radar ) before you create the homepage configuration ( ^/api/proxy/developer-hub ). For more information about customizing the Home page in Red Hat Developer Hub, see Customizing the Home page in Red Hat Developer Hub . You can provide data to the Tech Radar page from the following sources: JSON files hosted on GitHub or GitLab. A dedicated service that provides the Tech Radar data in JSON format using an API. 5.1. Using hosted JSON files to provide data to the Tech Radar page Prerequisites You have installed Red Hat Developer Hub by using either the Operator or Helm chart. For more information, see Installing Red Hat Developer Hub on OpenShift Container Platform . Procedure To access the data from the JSON files, complete the following step: Add the following code to the app-config.yaml file: proxy: endpoints: # Other Proxies # customize developer hub instance '/developer-hub': target: <DOMAIN_URL> # i.e https://raw.githubusercontent.com/ pathRewrite: '^/api/proxy/developer-hub/tech-radar': <path to json file> # i.e /redhat-developer/rhdh/main/packages/app/public/tech-radar/data-default.json '^/api/proxy/developer-hub': <path to json file> # i.e /redhat-developer/rhdh/main/packages/app/public/homepage/data.json changeOrigin: true secure: true # Change to "false" in case of using self hosted cluster with a self-signed certificate headers: <HEADER_KEY>: <HEADER_VALUE> # optional and can be passed as needed i.e Authorization can be passed for private GitHub repo and PRIVATE-TOKEN can be passed for private GitLab repo 5.2. Using a dedicated service to provide data to the Tech Radar page When using a dedicated service, you can do the following: Use the same service to provide the data to all configurable Developer Hub pages or use a different service for each page. Use the red-hat-developer-hub-customization-provider as an example service, which provides data for both the Home and Tech Radar pages. The red-hat-developer-hub-customization-provider service provides the same data as default Developer Hub data. You can fork the red-hat-developer-hub-customization-provider service repository from GitHub and modify it with your own data, if required. Deploy the red-hat-developer-hub-customization-provider service and the Developer Hub Helm chart on the same cluster. Prerequisites You have installed the Red Hat Developer Hub using Helm Chart. For more information, see Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart .
Procedure To use a separate service to provide the Tech Radar data, complete the following steps: Add the following code to the app-config-rhdh.yaml file: proxy: endpoints: # Other Proxies '/developer-hub/tech-radar': target: ${TECHRADAR_DATA_URL} changeOrigin: true # Change to "false" in case of using self hosted cluster with a self-signed certificate secure: true where the TECHRADAR_DATA_URL is defined as http://<SERVICE_NAME>/tech-radar , for example, http://rhdh-customization-provider/tech-radar . Note You can define the TECHRADAR_DATA_URL by adding it to rhdh-secrets or by directly replacing it with its value in your custom ConfigMap. Delete the Developer Hub pod to ensure that the new configurations are loaded correctly.
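As a hedged example of that final step on OpenShift, assuming the oc client and a Helm-based installation, you can delete the running pod so that the deployment recreates it with the new configuration; the pod name, deployment name, and namespace below are placeholders:

oc delete pod <developer-hub-pod-name> -n <your-namespace>

Alternatively, oc rollout restart deployment/<your-rhdh-deployment> -n <your-namespace> achieves the same effect by restarting the whole deployment.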
|
[
"proxy: endpoints: # Other Proxies # customize developer hub instance '/developer-hub': target: <DOMAIN_URL> # i.e https://raw.githubusercontent.com/ pathRewrite: '^/api/proxy/developer-hub/tech-radar': <path to json file> # i.e /redhat-developer/rhdh/main/packages/app/public/tech-radar/data-default.json '^/api/proxy/developer-hub': <path to json file> # i.e /redhat-developer/rhdh/main/packages/app/public/homepage/data.json changeOrigin: true secure: true # Change to \"false\" in case of using self hosted cluster with a self-signed certificate headers: <HEADER_KEY>: <HEADER_VALUE> # optional and can be passed as needed i.e Authorization can be passed for private GitHub repo and PRIVATE-TOKEN can be passed for private GitLab repo",
"proxy: endpoints: # Other Proxies '/developer-hub/tech-radar': target: USD{TECHRADAR_DATA_URL} changeOrigin: true # Change to \"false\" in case of using self hosted cluster with a self-signed certificate secure: true"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/getting_started_with_red_hat_developer_hub/proc-customize-rhdh-tech-radar-page_rhdh-getting-started
|
Chapter 4. Installing director on the undercloud
|
Chapter 4. Installing director on the undercloud To configure and install director, set the appropriate parameters in the undercloud.conf file and run the undercloud installation command. After you have installed director, import the overcloud images that director will use to write to bare metal nodes during node provisioning. 4.1. Configuring director The director installation process requires certain settings in the undercloud.conf configuration file, which director reads from the home directory of the stack user. Complete the following steps to copy default template as a foundation for your configuration. Procedure Copy the default template to the home directory of the stack user's: Edit the undercloud.conf file. This file contains settings to configure your undercloud. If you omit or comment out a parameter, the undercloud installation uses the default value. 4.2. Director configuration parameters The following list contains information about parameters for configuring the undercloud.conf file. Keep all parameters within their relevant sections to avoid errors. Important At minimum, you must set the container_images_file parameter to the environment file that contains your container image configuration. Without this parameter properly set to the appropriate file, director cannot obtain your container image rule set from the ContainerImagePrepare parameter nor your container registry authentication details from the ContainerImageRegistryCredentials parameter. Defaults The following parameters are defined in the [DEFAULT] section of the undercloud.conf file: additional_architectures A list of additional (kernel) architectures that an overcloud supports. Currently the overcloud supports ppc64le architecture in addition to the default x86_64 architecture. Note When you enable support for ppc64le, you must also set ipxe_enabled to False . For more information on configuring your undercloud with multiple CPU architectures, see Configuring a multiple CPU architecture overcloud . certificate_generation_ca The certmonger nickname of the CA that signs the requested certificate. Use this option only if you have set the generate_service_certificate parameter. If you select the local CA, certmonger extracts the local CA certificate to /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and adds the certificate to the trust chain. clean_nodes Defines whether to wipe the hard drive between deployments and after introspection. cleanup Delete temporary files. Set this to False to retain the temporary files used during deployment. The temporary files can help you debug the deployment if errors occur. container_cli The CLI tool for container management. Leave this parameter set to podman . Red Hat Enterprise Linux 8.4 only supports podman . container_healthcheck_disabled Disables containerized service health checks. Red Hat recommends that you enable health checks and leave this option set to false . container_images_file Heat environment file with container image information. This file can contain the following entries: Parameters for all required container images The ContainerImagePrepare parameter to drive the required image preparation. Usually the file that contains this parameter is named containers-prepare-parameter.yaml . container_insecure_registries A list of insecure registries for podman to use. Use this parameter if you want to pull images from another source, such as a private container registry. 
In most cases, podman has the certificates to pull container images from either the Red Hat Container Catalog or from your Satellite Server if the undercloud is registered to Satellite. container_registry_mirror An optional registry-mirror configured that podman uses. custom_env_files Additional environment files that you want to add to the undercloud installation. deployment_user The user who installs the undercloud. Leave this parameter unset to use the current default user stack . discovery_default_driver Sets the default driver for automatically enrolled nodes. Requires the enable_node_discovery parameter to be enabled and you must include the driver in the enabled_hardware_types list. enable_ironic; enable_ironic_inspector; enable_mistral; enable_nova; enable_tempest; enable_validations; enable_zaqar Defines the core services that you want to enable for director. Leave these parameters set to true . enable_node_discovery Automatically enroll any unknown node that PXE-boots the introspection ramdisk. New nodes use the fake driver as a default but you can set discovery_default_driver to override. You can also use introspection rules to specify driver information for newly enrolled nodes. enable_novajoin Defines whether to install the novajoin metadata service in the undercloud. enable_routed_networks Defines whether to enable support for routed control plane networks. enable_swift_encryption Defines whether to enable Swift encryption at-rest. enable_telemetry Defines whether to install OpenStack Telemetry services (gnocchi, aodh, panko) in the undercloud. Set the enable_telemetry parameter to true if you want to install and configure telemetry services automatically. The default value is false , which disables telemetry on the undercloud. This parameter is required if you use other products that consume metrics data, such as Red Hat CloudForms. Warning RBAC is not supported by every component. The Alarming service (aodh) and Gnocchi do not take secure RBAC rules into account. enabled_hardware_types A list of hardware types that you want to enable for the undercloud. generate_service_certificate Defines whether to generate an SSL/TLS certificate during the undercloud installation, which is used for the undercloud_service_certificate parameter. The undercloud installation saves the resulting certificate /etc/pki/tls/certs/undercloud-[undercloud_public_vip].pem . The CA defined in the certificate_generation_ca parameter signs this certificate. heat_container_image URL for the heat container image to use. Leave unset. heat_native Run host-based undercloud configuration using heat-all . Leave as true . hieradata_override Path to hieradata override file that configures Puppet hieradata on the director, providing custom configuration to services beyond the undercloud.conf parameters. If set, the undercloud installation copies this file to the /etc/puppet/hieradata directory and sets it as the first file in the hierarchy. For more information about using this feature, see Configuring hieradata on the undercloud . inspection_extras Defines whether to enable extra hardware collection during the inspection process. This parameter requires the python-hardware or python-hardware-detect packages on the introspection image. inspection_interface The bridge that director uses for node introspection. This is a custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane . 
inspection_runbench Runs a set of benchmarks during node introspection. Set this parameter to true to enable the benchmarks. This option is necessary if you intend to perform benchmark analysis when inspecting the hardware of registered nodes. ipa_otp Defines the one-time password to register the undercloud node to an IPA server. This is required when enable_novajoin is enabled. ipv6_address_mode IPv6 address configuration mode for the undercloud provisioning network. The following list contains the possible values for this parameter: dhcpv6-stateless - Address configuration using router advertisement (RA) and optional information using DHCPv6. dhcpv6-stateful - Address configuration and optional information using DHCPv6. ipxe_enabled Defines whether to use iPXE or standard PXE. The default is true , which enables iPXE. Set this parameter to false to use standard PXE. For PowerPC deployments, or for hybrid PowerPC and x86 deployments, set this value to false . local_interface The chosen interface for the director Provisioning NIC. This is also the device that director uses for DHCP and PXE boot services. Change this value to your chosen device. To see which device is connected, use the ip addr command. For example, this is the result of an ip addr command: In this example, the External NIC uses em0 and the Provisioning NIC uses em1 , which is currently not configured. In this case, set the local_interface to em1 . The configuration script attaches this interface to a custom bridge defined with the inspection_interface parameter. local_ip The IP address defined for the director Provisioning NIC. This is also the IP address that director uses for DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24 unless you use a different subnet for the Provisioning network, for example, if this IP address conflicts with an existing IP address or subnet in your environment. For IPv6, the local IP address prefix length must be /64 to support both stateful and stateless connections. local_mtu The maximum transmission unit (MTU) that you want to use for the local_interface . Do not exceed 1500 for the undercloud. local_subnet The local subnet that you want to use for PXE boot and DHCP interfaces. The local_ip address should reside in this subnet. The default is ctlplane-subnet . net_config_override Path to network configuration override template. If you set this parameter, the undercloud uses a JSON or YAML format template to configure the networking with os-net-config and ignores the network parameters set in undercloud.conf . Use this parameter when you want to configure bonding or add an option to the interface. For more information about customizing undercloud network interfaces, see Configuring undercloud network interfaces . networks_file Networks file to override for heat . output_dir Directory to output state, processed heat templates, and Ansible deployment files. overcloud_domain_name The DNS domain name that you want to use when you deploy the overcloud. Note When you configure the overcloud, you must set the CloudDomain parameter to a matching value. Set this parameter in an environment file when you configure your overcloud. roles_file The roles file that you want to use to override the default roles file for undercloud installation. It is highly recommended to leave this parameter unset so that the director installation uses the default roles file. scheduler_max_attempts The maximum number of times that the scheduler attempts to deploy an instance. 
This value must be greater or equal to the number of bare metal nodes that you expect to deploy at once to avoid potential race conditions when scheduling. service_principal The Kerberos principal for the service using the certificate. Use this parameter only if your CA requires a Kerberos principal, such as in FreeIPA. subnets List of routed network subnets for provisioning and introspection. The default value includes only the ctlplane-subnet subnet. For more information, see Subnets . templates Heat templates file to override. undercloud_admin_host The IP address or hostname defined for director Admin API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask. If the undercloud_admin_host is not in the same IP network as the local_ip , you must set the ControlVirtualInterface parameter to the interface on which you want the admin APIs on the undercloud to listen. By default, the admin APIs listen on the br-ctlplane interface. Set the ControlVirtualInterface parameter in a custom environment file, and include the custom environment file in the undercloud.conf file by configuring the custom_env_files parameter. For information about customizing undercloud network interfaces, see Configuring undercloud network interfaces . undercloud_debug Sets the log level of undercloud services to DEBUG . Set this value to true to enable DEBUG log level. undercloud_enable_selinux Enable or disable SELinux during the deployment. It is highly recommended to leave this value set to true unless you are debugging an issue. undercloud_hostname Defines the fully qualified host name for the undercloud. If set, the undercloud installation configures all system host name settings. If left unset, the undercloud uses the current host name, but you must configure all system host name settings appropriately. undercloud_log_file The path to a log file to store the undercloud install and upgrade logs. By default, the log file is install-undercloud.log in the home directory. For example, /home/stack/install-undercloud.log . undercloud_nameservers A list of DNS nameservers to use for the undercloud hostname resolution. undercloud_ntp_servers A list of network time protocol servers to help synchronize the undercloud date and time. undercloud_public_host The IP address or hostname defined for director Public API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask. If the undercloud_public_host is not in the same IP network as the local_ip , you must set the PublicVirtualInterface parameter to the public-facing interface on which you want the public APIs on the undercloud to listen. By default, the public APIs listen on the br-ctlplane interface. Set the PublicVirtualInterface parameter in a custom environment file, and include the custom environment file in the undercloud.conf file by configuring the custom_env_files parameter. For information about customizing undercloud network interfaces, see Configuring undercloud network interfaces . undercloud_service_certificate The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you obtain this certificate from a trusted certificate authority. Otherwise, generate your own self-signed certificate. undercloud_timezone Host timezone for the undercloud. If you do not specify a timezone, director uses the existing timezone configuration. 
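As a sketch of how the networking and host-level parameters above might be combined for a typical IPv4 provisioning network, consider the following fragment; the interface name, addresses, domain, and server lists are placeholder assumptions:
[DEFAULT]
local_interface = em1
local_ip = 192.168.24.1/24
local_mtu = 1500
undercloud_admin_host = 192.168.24.3
undercloud_public_host = 192.168.24.2
overcloud_domain_name = example.com
undercloud_nameservers = 10.0.0.1,10.0.0.2
undercloud_ntp_servers = clock.example.com
undercloud_timezone = UTC
subnets = ctlplane-subnet
local_subnet = ctlplane-subnet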
undercloud_update_packages Defines whether to update packages during the undercloud installation. Subnets Each provisioning subnet is a named section in the undercloud.conf file. For example, to create a subnet called ctlplane-subnet , use the following sample in your undercloud.conf file: You can specify as many provisioning networks as necessary to suit your environment. Important Director cannot change the IP addresses for a subnet after director creates the subnet. cidr The network that director uses to manage overcloud instances. This is the Provisioning network, which the undercloud neutron service manages. Leave this as the default 192.168.24.0/24 unless you use a different subnet for the Provisioning network. masquerade Defines whether to masquerade the network defined in the cidr for external access. This provides the Provisioning network with network address translation (NAT) so that the Provisioning network has external access through director. Note The director configuration also enables IP forwarding automatically using the relevant sysctl kernel parameter. dhcp_start; dhcp_end The start and end of the DHCP allocation range for overcloud nodes. Ensure that this range contains enough IP addresses to allocate to your nodes. If not specified for the subnet, director determines the allocation pools by removing the values set for the local_ip , gateway , undercloud_admin_host , undercloud_public_host , and inspection_iprange parameters from the subnets full IP range. You can configure non-contiguous allocation pools for undercloud control plane subnets by specifying a list of start and end address pairs. Alternatively, you can use the dhcp_exclude option to exclude IP addresses within an IP address range. For example, the following configurations both create allocation pools 172.20.0.100-172.20.0.150 and 172.20.0.200-172.20.0.250 : Option 1 Option 2 dhcp_exclude IP addresses to exclude in the DHCP allocation range. For example, the following configuration excludes the IP address 172.20.0.105 and the IP address range 172.20.0.210-172.20.0.219 : dns_nameservers DNS nameservers specific to the subnet. If no nameservers are defined for the subnet, the subnet uses nameservers defined in the undercloud_nameservers parameter. gateway The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the External network. Leave this as the default 192.168.24.1 unless you use a different IP address for director or want to use an external gateway directly. host_routes Host routes for the Neutron-managed subnet for the overcloud instances on this network. This also configures the host routes for the local_subnet on the undercloud. inspection_iprange Temporary IP range for nodes on this network to use during the inspection process. This range must not overlap with the range defined by dhcp_start and dhcp_end but must be in the same IP subnet. Modify the values of these parameters to suit your configuration. When complete, save the file. 4.3. Configuring the undercloud with environment files You configure the main parameters for the undercloud through the undercloud.conf file. You can also perform additional undercloud configuration with an environment file that contains heat parameters. Procedure Create an environment file named /home/stack/templates/custom-undercloud-params.yaml . Edit this file and include your heat parameters. 
For example, to enable debugging for certain OpenStack Platform services include the following snippet in the custom-undercloud-params.yaml file: Save this file when you have finished. Edit your undercloud.conf file and scroll to the custom_env_files parameter. Edit the parameter to point to your custom-undercloud-params.yaml environment file: Note You can specify multiple environment files using a comma-separated list. The director installation includes this environment file during the undercloud installation or upgrade operation. 4.4. Common heat parameters for undercloud configuration The following table contains some common heat parameters that you might set in a custom environment file for your undercloud. Parameter Description AdminPassword Sets the undercloud admin user password. AdminEmail Sets the undercloud admin user email address. Debug Enables debug mode. Set these parameters in your custom environment file under the parameter_defaults section: 4.5. Configuring hieradata on the undercloud You can provide custom configuration for services beyond the available undercloud.conf parameters by configuring Puppet hieradata on the director. Procedure Create a hieradata override file, for example, /home/stack/hieradata.yaml . Add the customized hieradata to the file. For example, add the following snippet to modify the Compute (nova) service parameter force_raw_images from the default value of True to False : If there is no Puppet implementation for the parameter you want to set, then use the following method to configure the parameter: For example: Set the hieradata_override parameter in the undercloud.conf file to the path of the new /home/stack/hieradata.yaml file: 4.6. Configuring the undercloud for bare metal provisioning over IPv6 If you have IPv6 nodes and infrastructure, you can configure the undercloud and the provisioning network to use IPv6 instead of IPv4 so that director can provision and deploy Red Hat OpenStack Platform onto IPv6 nodes. However, there are some considerations: Dual stack IPv4/6 is not available. Tempest validations might not perform correctly. IPv4 to IPv6 migration is not available during upgrades. Modify the undercloud.conf file to enable IPv6 provisioning in Red Hat OpenStack Platform. Prerequisites An IPv6 address on the undercloud. For more information, see Configuring an IPv6 address on the undercloud in the IPv6 Networking for the Overcloud guide. Procedure Open your undercloud.conf file. Specify the IPv6 address mode as either stateless or stateful: Replace <address_mode> with dhcpv6-stateless or dhcpv6-stateful , based on the mode that your NIC supports. Note When you use the stateful address mode, the firmware, chain loaders, and operating systems might use different algorithms to generate an ID that the DHCP server tracks. DHCPv6 does not track addresses by MAC, and does not provide the same address back if the identifier value from the requester changes but the MAC address remains the same. Therefore, when you use stateful DHCPv6 you must also complete the step to configure the network interface. If you configured your undercloud to use stateful DHCPv6, specify the network interface to use for bare metal nodes: Set the default network interface for bare metal nodes: Specify whether or not the undercloud should create a router on the provisioning network: Replace <true/false> with true to enable routed networks and prevent the undercloud creating a router on the provisioning network. 
When true , the data center router must provide router advertisements. Replace <true/false> with false to disable routed networks and create a router on the provisioning network. Configure the local IP address, and the IP address for the director Admin API and Public API endpoints over SSL/TLS: Replace <ipv6_address> with the IPv6 address of the undercloud. Optional: Configure the provisioning network that director uses to manage instances: Replace <ipv6_address> with the IPv6 address of the network to use for managing instances when not using the default provisioning network. Replace <ipv6_prefix> with the IP address prefix of the network to use for managing instances when not using the default provisioning network. Configure the DHCP allocation range for provisioning nodes: Replace <ipv6_address_dhcp_start> with the IPv6 address of the start of the network range to use for the overcloud nodes. Replace <ipv6_address_dhcp_end> with the IPv6 address of the end of the network range to use for the overcloud nodes. Optional: Configure the gateway for forwarding traffic to the external network: Replace <ipv6_gateway_address> with the IPv6 address of the gateway when not using the default gateway. Configure the DHCP range to use during the inspection process: Replace <ipv6_address_inspection_start> with the IPv6 address of the start of the network range to use during the inspection process. Replace <ipv6_address_inspection_end> with the IPv6 address of the end of the network range to use during the inspection process. Note This range must not overlap with the range defined by dhcp_start and dhcp_end , but must be in the same IP subnet. Configure an IPv6 nameserver for the subnet: Replace <ipv6_dns> with the DNS nameservers specific to the subnet. 4.7. Configuring undercloud network interfaces Include custom network configuration in the undercloud.conf file to install the undercloud with specific networking functionality. For example, some interfaces might not have DHCP. In this case, you must disable DHCP for these interfaces in the undercloud.conf file so that os-net-config can apply the configuration during the undercloud installation process. Procedure Log in to the undercloud host. Create a new file undercloud-os-net-config.yaml and include the network configuration that you require. For more information, see Network interface reference in the Advanced Overcloud Customization guide. Here is an example: To create a network bond for a specific interface, use the following sample: Include the path to the undercloud-os-net-config.yaml file in the net_config_override parameter in the undercloud.conf file: Note Director uses the file that you include in the net_config_override parameter as the template to generate the /etc/os-net-config/config.yaml file. os-net-config manages the interfaces that you define in the template, so you must perform all undercloud network interface customization in this file. Install the undercloud. Verification After the undercloud installation completes successfully, verify that the /etc/os-net-config/config.yaml file contains the relevant configuration: 4.8. Installing director Complete the following steps to install director and perform some basic post-installation tasks. Procedure Run the following command to install director on the undercloud: This command launches the director configuration script. Director installs additional packages, configures its services according to the configuration in the undercloud.conf , and starts all the RHOSP service containers. 
This script takes several minutes to complete. The script generates two files: undercloud-passwords.conf - A list of all passwords for the director services. stackrc - A set of initialization variables to help you access the director command line tools. Confirm that the RHOSP service containers are running: The following command output indicates that the RHOSP service containers are running ( Up ): To initialize the stack user to use the command line tools, run the following command: The prompt now indicates that OpenStack commands authenticate and execute against the undercloud; The director installation is complete. You can now use the director command line tools. 4.9. Configuring the CPU architecture for the overcloud Red Hat OpenStack Platform (RHOSP) configures the CPU architecture of an overcloud as x86_64 by default. You can also deploy overcloud Compute nodes on POWER (ppc64le) hardware. For the Compute node cluster, you can use the same architecture, or use a combination of x86_64 and ppc64le systems. Note The undercloud, Controller nodes, Ceph Storage nodes, and all other systems are supported only on x86_64 hardware. 4.9.1. Configuring POWER (ppc64le) as the single CPU architecture for the overcloud The default CPU architecture of the Compute nodes on an overcloud is x86_64. To deploy overcloud Compute nodes on POWER (ppc64le) hardware, you can change the architecture to ppc64le. Procedure Disable iPXE in the undercloud.conf file: Note For RHOSP 16.2.1 and earlier, this configuration causes any x86_64 nodes in your deployment to also boot in PXE/legacy mode. To configure a multiple CPU architecture for your overcloud, see Configuring a multiple CPU architecture overcloud . Install the undercloud: For more information, see Installing director on the undercloud . Wait until the installation script completes. Obtain and upload the images for the overcloud nodes. For more information, see Obtaining images for overcloud nodes . 4.9.2. Configuring a multiple CPU architecture overcloud For RHOSP 16.2.2 and later, you can configure your undercloud to support both PXE and iPXE boot modes when your architecture includes both POWER (ppc64le) and x86_64 UEFI nodes. Note When your architecture includes POWER (ppc64le) nodes, RHOSP 16.2.1 and earlier supports only PXE boot. Procedure Enable iPXE in the undercloud.conf file: Create a custom environment file for the undercloud, undercloud_noIronicIPXEEnabled.yaml . To change the default Bare Metal Provisioning service (ironic) iPXE setting to PXE , add the following configuration to undercloud_noIronicIPXEEnabled.yaml : If your architecture includes ppc64le nodes, add the following configuration to undercloud_noIronicIPXEEnabled.yaml to disable the boot timeout: Include the custom environment file in the undercloud.conf file: Install the undercloud: For more information, see Installing director on the undercloud . Wait until the installation script completes. Register your overcloud nodes: For more information on registering overcloud nodes, see Registering nodes for the overcloud . Wait for the node registration and configuration to complete. Confirm that director has successfully registered the nodes: Check the existing capabilities of each registered node: Set the boot mode to uefi for each registered node by adding boot_mode:uefi to the existing capabilities of the node: Replace <node> with the ID of the bare metal node. Replace <capability_1> , and all capabilities up to <capability_n> , with each capability that you retrieved in step 6. 
Obtain and upload the images for the overcloud nodes. For more information, see Multiple CPU architecture overcloud images . Set the boot mode for each node: For legacy/PXE: For iPXE: 4.9.3. Using Ceph Storage in a multi-architecture overcloud When you configure access to external Ceph in a multi-architecture cloud, set the CephAnsiblePlaybook parameter to /usr/share/ceph-ansible/site.yml.sample and include your client key and other Ceph-specific parameters. For example: 4.9.4. Using composable services in a multi-architecture overcloud The following services typically form part of the Controller node and are available for use in custom roles as Technology Preview: Block Storage service (cinder) Image service (glance) Identity service (keystone) Networking service (neutron) Object Storage service (swift) Note Red Hat does not support features in Technology Preview. For more information about composable services, see composable services and custom roles in the Advanced Overcloud Customization guide. Use the following example to understand how to move the listed services from the Controller node to a dedicated ppc64le node: 4.10. Obtaining images for overcloud nodes Director requires several disk images to provision overcloud nodes: An introspection kernel and ramdisk for bare metal system introspection over PXE boot. A deployment kernel and ramdisk for system provisioning and deployment. An overcloud kernel, ramdisk, and full image, which form a base overcloud system that director writes to the hard disk of the node. You can obtain and install the images you need based on your CPU architecture. You can also obtain and install a basic image to provision a bare OS when you do not want to run any other Red Hat OpenStack Platform (RHOSP) services or consume one of your subscription entitlements. 4.10.1. Single CPU architecture overcloud images Your Red Hat OpenStack Platform (RHOSP) installation includes packages that provide you with the following overcloud images for director: overcloud-full overcloud-full-initrd overcloud-full-vmlinuz These images are necessary for deployment of the overcloud with the default CPU architecture, x86-64. Importing these images into director also installs introspection images on the director PXE server. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Install the rhosp-director-images and rhosp-director-images-ipa-x86_64 packages: Create the images directory in the home directory of the stack user ( /home/stack/images ). Extract the images archives to the images directory: Import the images into director: Verify that the images are uploaded: Verify that director has copied the introspection PXE images to /var/lib/ironic/httpboot : 4.10.2. Multiple CPU architecture overcloud images Your Red Hat OpenStack Platform (RHOSP) installation includes packages that provide you with the following images that are necessary for deployment of the overcloud with the default CPU architecture, x86-64: overcloud-full overcloud-full-initrd overcloud-full-vmlinuz Your RHOSP installation also includes packages that provide you with the following images that are necessary for deployment of the overcloud with the POWER (ppc64le) CPU architecture: ppc64le-overcloud-full Importing these images into director also installs introspection images on the director PXE server. Procedure Log in to the undercloud as the stack user. 
Source the stackrc file: Install the rhosp-director-images-all package: Extract the archives to an architecture-specific directory in the images directory in the home directory of the stack user ( /home/stack/images ): Import the images into director: Verify that the images are uploaded: Verify that director has copied the introspection PXE images to /var/lib/ironic/tftpboot : 4.10.3. Enabling multiple CPU architectures on container images If your Red Hat OpenStack Platform (RHOSP) deployment includes multiple CPU architectures and uses container images, you must update the container images to enable multiple architectures. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Add the additional architectures to the containers-prepare-parameter.yaml file to enable multiple architectures: Replace <list_of_architectures> with a comma-separated list of the architectures supported by your overcloud environment, for example, [ppc64le] . Prepare and upload the containers: 4.10.4. Minimal overcloud image You can use the overcloud-minimal image to provision a bare OS where you do not want to run any other Red Hat OpenStack Platform (RHOSP) services or consume one of your subscription entitlements. Your RHOSP installation includes the overcloud-minimal package that provides you with the following overcloud images for director: overcloud-minimal overcloud-minimal-initrd overcloud-minimal-vmlinuz Note The default overcloud-full.qcow2 image is a flat partition image. However, you can also import and use whole disk images. For more information, see Chapter 24, Creating whole-disk images . Procedure Log in to the undercloud as the stack user. Source the stackrc file: Install the overcloud-minimal package: Extract the image archives to the images directory in the home directory of the stack user ( /home/stack/images ): Import the images into director: Verify that the images are uploaded: 4.11. Setting a nameserver for the control plane If you intend for the overcloud to resolve external hostnames, such as cdn.redhat.com , set a nameserver on the overcloud nodes. For a standard overcloud without network isolation, the nameserver is defined using the undercloud control plane subnet. Complete the following procedure to define nameservers for the environment. Procedure Source the stackrc file to enable the director command line tools: Set the nameservers for the ctlplane-subnet subnet: Use the --dns-nameserver option for each nameserver. View the subnet to verify the nameserver: Important If you aim to isolate service traffic onto separate networks, the overcloud nodes must use the DnsServers parameter in your network environment files. You must also set the control plane nameserver and the DnsServers parameter to the same DNS server. 4.12. Updating the undercloud configuration If you need to change the undercloud configuration to suit new requirements after installation, edit the relevant configuration files and re-run the openstack undercloud install command. Procedure Modify the undercloud configuration files. For example, edit the undercloud.conf file and add the idrac hardware type to the list of enabled hardware types: Run the openstack undercloud install command to refresh your undercloud with the new changes: Wait until the command runs to completion.
Initialize the stack user to use the command line tools: The prompt now indicates that OpenStack commands authenticate and execute against the undercloud: Verify that director has applied the new configuration. For this example, check the list of enabled hardware types: The undercloud re-configuration is complete. 4.13. Undercloud container registry Red Hat Enterprise Linux 8.4 no longer includes the docker-distribution package, which installed a Docker Registry v2. To maintain compatibility and the same level of functionality, the director installation creates an Apache web server with a vhost called image-serve to provide a registry. This registry also uses port 8787/TCP with SSL disabled. The Apache-based registry is not containerized, which means that you must run the following command to restart the registry: You can find the container registry logs in the following locations: /var/log/httpd/image_serve_access.log /var/log/httpd/image_serve_error.log. The image content is served from /var/lib/image-serve . This location uses a specific directory layout and Apache configuration to implement the pull function of the registry REST API. The Apache-based registry does not support the podman push or buildah push commands, which means that you cannot push container images using traditional methods. To modify images during deployment, use the container preparation workflow, such as the ContainerImagePrepare parameter. To manage container images, use the container management commands: openstack tripleo container image list Lists all images stored on the registry. openstack tripleo container image show Shows metadata for a specific image on the registry. openstack tripleo container image push Pushes an image from a remote registry to the undercloud registry. openstack tripleo container image delete Deletes an image from the registry.
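A short, hypothetical session with the registry and these management commands might look as follows; the image references are placeholders, and you can check the exact arguments each subcommand accepts with --help:
# Restart the Apache vhost that serves the registry
sudo systemctl restart httpd
# List the images currently stored in the undercloud registry
openstack tripleo container image list
# Inspect, copy in, and remove an image (references are placeholders)
openstack tripleo container image show <image_reference>
openstack tripleo container image push <remote_image_reference>
openstack tripleo container image delete <image_reference>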
|
[
"[stack@director ~]USD cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf",
"2: em0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:75:24:09 brd ff:ff:ff:ff:ff:ff inet 192.168.122.178/24 brd 192.168.122.255 scope global dynamic em0 valid_lft 3462sec preferred_lft 3462sec inet6 fe80::5054:ff:fe75:2409/64 scope link valid_lft forever preferred_lft forever 3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noop state DOWN link/ether 42:0b:c2:a5:c1:26 brd ff:ff:ff:ff:ff:ff",
"[ctlplane-subnet] cidr = 192.168.24.0/24 dhcp_start = 192.168.24.5 dhcp_end = 192.168.24.24 inspection_iprange = 192.168.24.100,192.168.24.120 gateway = 192.168.24.1 masquerade = true",
"dhcp_start = 172.20.0.100,172.20.0.200 dhcp_end = 172.20.0.150,172.20.0.250",
"dhcp_start = 172.20.0.100 dhcp_end = 172.20.0.250 dhcp_exclude = 172.20.0.151-172.20.0.199",
"dhcp_exclude = 172.20.0.105,172.20.0.210-172.20.0.219",
"parameter_defaults: Debug: True",
"custom_env_files = /home/stack/templates/custom-undercloud-params.yaml",
"parameter_defaults: Debug: True AdminPassword: \"myp@ssw0rd!\" AdminEmail: \"[email protected]\"",
"nova::compute::force_raw_images: False",
"nova::config::nova_config: DEFAULT/<parameter_name>: value: <parameter_value>",
"nova::config::nova_config: DEFAULT/network_allocate_retries: value: 20 ironic/serial_console_state_timeout: value: 15",
"hieradata_override = /home/stack/hieradata.yaml",
"[DEFAULT] ipv6_address_mode = <address_mode>",
"[DEFAULT] ipv6_address_mode = dhcpv6-stateful ironic_enabled_network_interfaces = neutron,flat",
"[DEFAULT] ironic_default_network_interface = neutron",
"[DEFAULT] enable_routed_networks: <true/false>",
"[DEFAULT] local_ip = <ipv6_address> undercloud_admin_host = <ipv6_address> undercloud_public_host = <ipv6_address>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix> dhcp_start = <ipv6_address_dhcp_start> dhcp_end = <ipv6_address_dhcp_end>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix> dhcp_start = <ipv6_address_dhcp_start> dhcp_end = <ipv6_address_dhcp_end> gateway = <ipv6_gateway_address>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix> dhcp_start = <ipv6_address_dhcp_start> dhcp_end = <ipv6_address_dhcp_end> gateway = <ipv6_gateway_address> inspection_iprange = <ipv6_address_inspection_start>,<ipv6_address_inspection_end>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix> dhcp_start = <ipv6_address_dhcp_start> dhcp_end = <ipv6_address_dhcp_end> gateway = <ipv6_gateway_address> inspection_iprange = <ipv6_address_inspection_start>,<ipv6_address_inspection_end> dns_nameservers = <ipv6_dns>",
"network_config: - name: br-ctlplane type: ovs_bridge use_dhcp: false dns_servers: - 192.168.122.1 domain: lab.example.com ovs_extra: - \"br-set-external-id br-ctlplane bridge-id br-ctlplane\" addresses: - ip_netmask: 172.20.0.1/26 members: - type: interface name: nic2",
"network_config: - name: br-ctlplane type: ovs_bridge use_dhcp: false dns_servers: - 192.168.122.1 domain: lab.example.com ovs_extra: - \"br-set-external-id br-ctlplane bridge-id br-ctlplane\" addresses: - ip_netmask: 172.20.0.1/26 members: - name: bond-ctlplane type: linux_bond use_dhcp: false bonding_options: \"mode=active-backup\" mtu: 1500 members: - type: interface name: nic2 - type: interface name: nic3",
"[DEFAULT] net_config_override=undercloud-os-net-config.yaml",
"network_config: - name: br-ctlplane type: ovs_bridge use_dhcp: false dns_servers: - 192.168.122.1 domain: lab.example.com ovs_extra: - \"br-set-external-id br-ctlplane bridge-id br-ctlplane\" addresses: - ip_netmask: 172.20.0.1/26 members: - type: interface name: nic2",
"[stack@director ~]USD openstack undercloud install",
"[stack@director ~]USD sudo podman ps -a --format \"{{.Names}} {{.Status}}\"",
"memcached Up 3 hours (healthy) haproxy Up 3 hours rabbitmq Up 3 hours (healthy) mysql Up 3 hours (healthy) iscsid Up 3 hours (healthy) keystone Up 3 hours (healthy) keystone_cron Up 3 hours (healthy) neutron_api Up 3 hours (healthy) logrotate_crond Up 3 hours (healthy) neutron_dhcp Up 3 hours (healthy) neutron_l3_agent Up 3 hours (healthy) neutron_ovs_agent Up 3 hours (healthy) ironic_api Up 3 hours (healthy) ironic_conductor Up 3 hours (healthy) ironic_neutron_agent Up 3 hours (healthy) ironic_pxe_tftp Up 3 hours (healthy) ironic_pxe_http Up 3 hours (unhealthy) ironic_inspector Up 3 hours (healthy) ironic_inspector_dnsmasq Up 3 hours (healthy) neutron-dnsmasq-qdhcp-30d628e6-45e6-499d-8003-28c0bc066487 Up 3 hours",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD",
"[DEFAULT] ipxe_enabled = False",
"[stack@director ~]USD openstack undercloud install",
"[DEFAULT] ipxe_enabled = True",
"parameter_defaults: IronicIPXEEnabled: false IronicInspectorIPXEEnabled: true",
"parameter_defaults: ExtraConfig: ironic::config::ironic_config: ipmi/disable_boot_timeout: value: 'false'",
"[DEFAULT] custom_env_files = undercloud_noIronicIPXEEnabled.yaml",
"[stack@director ~]USD openstack undercloud install",
"(undercloud)USD openstack overcloud node import ~/nodes.json",
"(undercloud)USD openstack baremetal node list",
"openstack baremetal node show <node> -f json -c properties | jq -r .properties.capabilities",
"openstack baremetal node set --property capabilities=\"boot_mode:uefi,<capability_1>,...,<capability_n>\" <node>",
"openstack baremetal node set --boot-interface pxe <node_name>",
"openstack baremetal node set --boot-interface ipxe <node_name>",
"parameter_defaults: CephAnsiblePlaybook: /usr/share/ceph-ansible/site.yml.sample CephClientKey: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ== CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19 CephExternalMonHost: 172.16.1.7, 172.16.1.8",
"(undercloud) [stack@director ~]USD rsync -a /usr/share/openstack-tripleo-heat-templates/. ~/templates (undercloud) [stack@director ~]USD cd ~/templates/roles (undercloud) [stack@director roles]USD cat <<EO_TEMPLATE >ControllerPPC64LE.yaml ############################################################################### Role: ControllerPPC64LE # ############################################################################### - name: ControllerPPC64LE description: | Controller role that has all the controller services loaded and handles Database, Messaging and Network functions. CountDefault: 1 tags: - primary - controller networks: - External - InternalApi - Storage - StorageMgmt - Tenant # For systems with both IPv4 and IPv6, you may specify a gateway network for # each, such as ['ControlPlane', 'External'] default_route_networks: ['External'] HostnameFormatDefault: '%stackname%-controllerppc64le-%index%' ImageDefault: ppc64le-overcloud-full ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::CinderApi - OS::TripleO::Services::CinderBackendDellPs - OS::TripleO::Services::CinderBackendDellSc - OS::TripleO::Services::CinderBackendDellEMCUnity - OS::TripleO::Services::CinderBackendDellEMCVMAXISCSI - OS::TripleO::Services::CinderBackendDellEMCVNX - OS::TripleO::Services::CinderBackendDellEMCXTREMIOISCSI - OS::TripleO::Services::CinderBackendNetApp - OS::TripleO::Services::CinderBackendScaleIO - OS::TripleO::Services::CinderBackendVRTSHyperScale - OS::TripleO::Services::CinderBackup - OS::TripleO::Services::CinderHPELeftHandISCSI - OS::TripleO::Services::CinderScheduler - OS::TripleO::Services::CinderVolume - OS::TripleO::Services::Collectd - OS::TripleO::Services::Docker - OS::TripleO::Services::Fluentd - OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GlanceRegistry - OS::TripleO::Services::Ipsec - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Kernel - OS::TripleO::Services::Keystone - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronApi - OS::TripleO::Services::NeutronBgpVpnApi - OS::TripleO::Services::NeutronSfcApi - OS::TripleO::Services::NeutronCorePlugin - OS::TripleO::Services::NeutronDhcpAgent - OS::TripleO::Services::NeutronL2gwAgent - OS::TripleO::Services::NeutronL2gwApi - OS::TripleO::Services::NeutronL3Agent - OS::TripleO::Services::NeutronLbaasv2Agent - OS::TripleO::Services::NeutronLbaasv2Api - OS::TripleO::Services::NeutronLinuxbridgeAgent - OS::TripleO::Services::NeutronMetadataAgent - OS::TripleO::Services::NeutronML2FujitsuCfab - OS::TripleO::Services::NeutronML2FujitsuFossw - OS::TripleO::Services::NeutronOvsAgent - OS::TripleO::Services::NeutronVppAgent - OS::TripleO::Services::Ntp - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::OpenDaylightOvs - OS::TripleO::Services::Rhsm - OS::TripleO::Services::RsyslogSidecar - OS::TripleO::Services::Securetty - OS::TripleO::Services::SensuClient - OS::TripleO::Services::SkydiveAgent - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::SwiftProxy - OS::TripleO::Services::SwiftDispersion - OS::TripleO::Services::SwiftRingBuilder - OS::TripleO::Services::SwiftStorage - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::Tuned - 
OS::TripleO::Services::Vpp - OS::TripleO::Services::OVNController - OS::TripleO::Services::OVNMetadataAgent - OS::TripleO::Services::Ptp EO_TEMPLATE (undercloud) [stack@director roles]USD sed -i~ -e '/OS::TripleO::Services::\\(Cinder\\|Glance\\|Swift\\|Keystone\\|Neutron\\)/d' Controller.yaml (undercloud) [stack@director roles]USD cd ../ (undercloud) [stack@director templates]USD openstack overcloud roles generate --roles-path roles -o roles_data.yaml Controller Compute ComputePPC64LE ControllerPPC64LE BlockStorage ObjectStorage CephStorage",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD sudo dnf install rhosp-director-images rhosp-director-images-ipa-x86_64",
"(undercloud) [stack@director ~]USD mkdir /home/stack/images",
"(undercloud) [stack@director ~]USD cd ~/images (undercloud) [stack@director images]USD for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.2.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.2.tar; do tar -xvf USDi; done",
"(undercloud) [stack@director images]USD openstack overcloud image upload --image-path /home/stack/images/",
"(undercloud) [stack@director images]USD openstack image list +--------------------------------------+------------------------+ | ID | Name | +--------------------------------------+------------------------+ | ef793cd0-e65c-456a-a675-63cd57610bd5 | overcloud-full | | 9a51a6cb-4670-40de-b64b-b70f4dd44152 | overcloud-full-initrd | | 4f7e33f4-d617-47c1-b36f-cbe90f132e5d | overcloud-full-vmlinuz | +--------------------------------------+------------------------+",
"(undercloud) [stack@director images]USD ls -l /var/lib/ironic/httpboot total 417296 -rwxr-xr-x. 1 root root 6639920 Jan 29 14:48 agent.kernel -rw-r--r--. 1 root root 420656424 Jan 29 14:48 agent.ramdisk -rw-r--r--. 1 42422 42422 758 Jan 29 14:29 boot.ipxe -rw-r--r--. 1 42422 42422 488 Jan 29 14:16 inspector.ipxe",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD sudo dnf install rhosp-director-images-all",
"(undercloud) [stack@director ~]USD cd ~/images (undercloud) [stack@director images]USD for arch in x86_64 ppc64le ; do mkdir USDarch ; done (undercloud) [stack@director images]USD for arch in x86_64 ppc64le ; do for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.1-USD{arch}.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.1-USD{arch}.tar ; do tar -C USDarch -xf USDi ; done ; done",
"(undercloud) [stack@director ~]USD cd ~/images (undercloud) [stack@director images]USD openstack overcloud image upload --image-path ~/images/ppc64le --architecture ppc64le --whole-disk --http-boot /var/lib/ironic/tftpboot/ppc64le (undercloud) [stack@director images]USD openstack overcloud image upload --image-path ~/images/ppc64le --architecture ppc64le --whole-disk --image-type ironic-python-agent --http-boot /var/lib/ironic/httpboot/ppc64le (undercloud) [stack@director images]USD openstack overcloud image upload --image-path ~/images/x86_64/ --architecture x86_64 --http-boot /var/lib/ironic/tftpboot (undercloud) [stack@director images]USD openstack overcloud image upload --image-path ~/images/x86_64 --architecture x86_64 --image-type ironic-python-agent --http-boot /var/lib/ironic/httpboot",
"(undercloud) [stack@director images]USD openstack image list +--------------------------------------+---------------------------+--------+ | ID | Name | Status | +--------------------------------------+---------------------------+--------+ | 6a6096ba-8f79-4343-b77c-4349f7b94960 | overcloud-full | active | | de2a1bde-9351-40d2-bbd7-7ce9d6eb50d8 | overcloud-full-initrd | active | | 67073533-dd2a-4a95-8e8b-0f108f031092 | overcloud-full-vmlinuz | active | | f0fedcd0-3f28-4b44-9c88-619419007a03 | ppc64le-overcloud-full | active | +--------------------------------------+---------------------------+--------+",
"(undercloud) [stack@director images]USD ls -l /var/lib/ironic/tftpboot /var/lib/ironic/tftpboot/ppc64le/ /var/lib/ironic/tftpboot: total 422624 -rwxr-xr-x. 1 root root 6385968 Aug 8 19:35 agent.kernel -rw-r--r--. 1 root root 425530268 Aug 8 19:35 agent.ramdisk -rwxr--r--. 1 42422 42422 20832 Aug 8 02:08 chain.c32 -rwxr--r--. 1 42422 42422 715584 Aug 8 02:06 ipxe.efi -rw-r--r--. 1 root root 22 Aug 8 02:06 map-file drwxr-xr-x. 2 42422 42422 62 Aug 8 19:34 ppc64le -rwxr--r--. 1 42422 42422 26826 Aug 8 02:08 pxelinux.0 drwxr-xr-x. 2 42422 42422 21 Aug 8 02:06 pxelinux.cfg -rwxr--r--. 1 42422 42422 69631 Aug 8 02:06 undionly.kpxe /var/lib/ironic/tftpboot/ppc64le/: total 457204 -rwxr-xr-x. 1 root root 19858896 Aug 8 19:34 agent.kernel -rw-r--r--. 1 root root 448311235 Aug 8 19:34 agent.ramdisk -rw-r--r--. 1 42422 42422 336 Aug 8 02:06 default",
"[stack@director ~]USD source ~/stackrc",
"parameter_defaults: ContainerImageRegistryLogin: true AdditionalArchitectures: [<list_of_architectures>] ContainerImagePrepare: - push_destination: true",
"openstack tripleo container image prepare -e ~/containers-prepare-parameter.yaml",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD sudo dnf install rhosp-director-images-minimal",
"(undercloud) [stack@director ~]USD cd ~/images (undercloud) [stack@director images]USD tar xf /usr/share/rhosp-director-images/overcloud-minimal-latest-16.2.tar",
"(undercloud) [stack@director images]USD openstack overcloud image upload --image-path /home/stack/images/ --image-type os --os-image-name overcloud-minimal.qcow2",
"(undercloud) [stack@director images]USD openstack image list +--------------------------------------+---------------------------+ | ID | Name | +--------------------------------------+---------------------------+ | ef793cd0-e65c-456a-a675-63cd57610bd5 | overcloud-full | | 9a51a6cb-4670-40de-b64b-b70f4dd44152 | overcloud-full-initrd | | 4f7e33f4-d617-47c1-b36f-cbe90f132e5d | overcloud-full-vmlinuz | | 32cf6771-b5df-4498-8f02-c3bd8bb93fdd | overcloud-minimal | | 600035af-dbbb-4985-8b24-a4e9da149ae5 | overcloud-minimal-initrd | | d45b0071-8006-472b-bbcc-458899e0d801 | overcloud-minimal-vmlinuz | +--------------------------------------+---------------------------+",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director images]USD openstack subnet set --dns-nameserver [nameserver1-ip] --dns-nameserver [nameserver2-ip] ctlplane-subnet",
"(undercloud) [stack@director images]USD openstack subnet show ctlplane-subnet +-------------------+-----------------------------------------------+ | Field | Value | +-------------------+-----------------------------------------------+ | ... | | | dns_nameservers | 8.8.8.8 | | ... | | +-------------------+-----------------------------------------------+",
"enabled_hardware_types = ipmi,redfish,idrac",
"[stack@director ~]USD openstack undercloud install",
"[stack@director ~]USD source ~/stackrc",
"(undercloud) [stack@director ~]USD",
"(undercloud) [stack@director ~]USD openstack baremetal driver list +---------------------+----------------------+ | Supported driver(s) | Active host(s) | +---------------------+----------------------+ | idrac | director.example.com | | ipmi | director.example.com | | redfish | director.example.com | +---------------------+----------------------+",
"sudo systemctl restart httpd"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_installing-director-on-the-undercloud
|
function::proc_mem_data_pid
|
function::proc_mem_data_pid Name function::proc_mem_data_pid - Program data size (data + stack) in pages Synopsis Arguments pid The pid of process to examine Description Returns the given process data size (data + stack) in pages, or zero when the process doesn't exist or the number of pages couldn't be retrieved.
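As a minimal usage sketch, the function can be called from a command-line probe; the PID 1234 below is a placeholder for a real process ID:
# Print the data + stack size, in pages, of process 1234 every 5 seconds
stap -e 'probe timer.s(5) { printf("data pages: %d\n", proc_mem_data_pid(1234)) }'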
|
[
"proc_mem_data_pid:long(pid:long)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-proc-mem-data-pid
|
3.3. Deploy a VDB via File Deployment
|
3.3. Deploy a VDB via File Deployment Prerequisites Red Hat JBoss Data Virtualization must be installed. Procedure 3.1. Deploy a VDB via File Deployment Copy your VDB into the deploy directory Copy your VDB file into the EAP_HOME /standalone/deployments directory. Create a marker file Create an empty marker file of the same name with extension .dodeploy in the same directory. For example, if your VDB name is enterprise.vdb , then the marker file name must be enterprise.vdb.dodeploy . Note This only works in standalone mode. For domain mode, you must use one of the other available methods.
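For example, assuming the enterprise.vdb file mentioned above and a standalone-mode server, the copy and marker-file steps might look like the following, where EAP_HOME stands for your installation directory:
# Copy the VDB into the deployments directory
cp enterprise.vdb EAP_HOME/standalone/deployments/
# Create the empty marker file that triggers deployment
touch EAP_HOME/standalone/deployments/enterprise.vdb.dodeploy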
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/deploy_a_vdb_via_file_deployment1
|
Chapter 55. Using the KDC Proxy in IdM
|
Chapter 55. Using the KDC Proxy in IdM Some administrators might choose to make the default Kerberos ports inaccessible in their deployment. To allow users, hosts, and services to obtain Kerberos credentials, you can use the HTTPS service as a proxy that communicates with Kerberos via the HTTPS port 443. In Identity Management (IdM), the Kerberos Key Distribution Center Proxy (KKDCP) provides this functionality. On an IdM server, KKDCP is enabled by default and available at https:// server.idm.example.com /KdcProxy . On an IdM client, you must change its Kerberos configuration to access the KKDCP. 55.1. Configuring an IdM client to use KKDCP As an Identity Management (IdM) system administrator, you can configure an IdM client to use the Kerberos Key Distribution Center Proxy (KKDCP) on an IdM server. This is useful if the default Kerberos ports are not accessible on the IdM server and the HTTPS port 443 is the only way of accessing the Kerberos service. Prerequisites You have root access to the IdM client. Procedure Open the /etc/krb5.conf file for editing. In the [realms] section, enter the URL of the KKDCP for the kdc , admin_server , and kpasswd_server options: For redundancy, you can add the parameters kdc , admin_server , and kpasswd_server multiple times to indicate different KKDCP servers. Restart the sssd service to make the changes take effect: 55.2. Verifying that KKDCP is enabled on an IdM server On an Identity Management (IdM) server, the Kerberos Key Distribution Center Proxy (KKDCP) is automatically enabled each time the Apache web server starts if the attribute and value pair ipaConfigString=kdcProxyEnabled exists in the directory. In this situation, the symbolic link /etc/httpd/conf.d/ipa-kdc-proxy.conf is created. You can verify if the KKDCP is enabled on the IdM server, even as an unprivileged user. Procedure Check that the symbolic link exists: The output confirms that KKDCP is enabled. 55.3. Disabling KKDCP on an IdM server As an Identity Management (IdM) system administrator, you can disable the Kerberos Key Distribution Center Proxy (KKDCP) on an IdM server. Prerequisites You have root access to the IdM server. Procedure Remove the ipaConfigString=kdcProxyEnabled attribute and value pair from the directory: Restart the httpd service: KKDCP is now disabled on the current IdM server. Verification Verify that the symbolic link does not exist: 55.4. Re-enabling KKDCP on an IdM server On an IdM server, the Kerberos Key Distribution Center Proxy (KKDCP) is enabled by default and available at https:// server.idm.example.com /KdcProxy . If KKDCP has been disabled on a server, you can re-enable it. Prerequisites You have root access to the IdM server. Procedure Add the ipaConfigString=kdcProxyEnabled attribute and value pair to the directory: Restart the httpd service: KKDCP is now enabled on the current IdM server. Verification Verify that the symbolic link exists: 55.5. Configuring the KKDCP server I With the following configuration, you can enable TCP to be used as the transport protocol between the IdM KKDCP and the Active Directory (AD) realm, where multiple Kerberos servers are used. Prerequisites You have root access. Procedure Set the use_dns parameter in the [global] section of the /etc/ipa/kdcproxy/kdcproxy.conf file to false . Put the proxied realm information into the /etc/ipa/kdcproxy/kdcproxy.conf file. For example, for the [AD. 
EXAMPLE.COM ] realm with proxy, list the realm configuration parameters as follows: Important The realm configuration parameters must list multiple servers separated by a space, as opposed to /etc/krb5.conf and kdc.conf , in which certain options may be specified multiple times. Restart Identity Management (IdM) services: Additional resources Configure IPA server as a KDC Proxy for AD Kerberos communication (Red Hat Knowledgebase) 55.6. Configuring the KKDCP server II The following server configuration relies on the DNS service records to find Active Directory (AD) servers to communicate with. Prerequisites You have root access. Procedure In the [global] section of the /etc/ipa/kdcproxy/kdcproxy.conf file, set the use_dns parameter to true . The configs parameter allows you to load other configuration modules. In this case, the configuration is read from the MIT libkrb5 library. Optional: If you do not want to use DNS service records, add explicit AD servers to the [realms] section of the /etc/krb5.conf file. If the realm with proxy is, for example, AD. EXAMPLE.COM , you add: Restart Identity Management (IdM) services: Additional resources Configure IPA server as a KDC Proxy for AD Kerberos communication (Red Hat Knowledgebase)
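After either server configuration is in place, a quick client-side check is to request and list a Kerberos ticket from an IdM client that is configured to use the KKDCP URLs in /etc/krb5.conf; this is only a sketch, and the principal is a placeholder:
# Request a ticket through the KKDCP and confirm it was issued
kinit [email protected]
klist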
|
[
"[realms] EXAMPLE.COM = { kdc = https://kdc.example.com/KdcProxy admin_server = https://kdc.example.com/KdcProxy kpasswd_server = https://kdc.example.com/KdcProxy default_domain = example.com }",
"~]# systemctl restart sssd",
"ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf lrwxrwxrwx. 1 root root 36 Jun 21 2020 /etc/httpd/conf.d/ipa-kdc-proxy.conf -> /etc/ipa/kdcproxy/ipa-kdc-proxy.conf",
"ipa-ldap-updater /usr/share/ipa/kdcproxy-disable.uldif Update complete The ipa-ldap-updater command was successful",
"systemctl restart httpd.service",
"ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf ls: cannot access '/etc/httpd/conf.d/ipa-kdc-proxy.conf': No such file or directory",
"ipa-ldap-updater /usr/share/ipa/kdcproxy-enable.uldif Update complete The ipa-ldap-updater command was successful",
"systemctl restart httpd.service",
"ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf lrwxrwxrwx. 1 root root 36 Jun 21 2020 /etc/httpd/conf.d/ipa-kdc-proxy.conf -> /etc/ipa/kdcproxy/ipa-kdc-proxy.conf",
"[global] use_dns = false",
"[AD. EXAMPLE.COM ] kerberos = kerberos+tcp://1.2.3.4:88 kerberos+tcp://5.6.7.8:88 kpasswd = kpasswd+tcp://1.2.3.4:464 kpasswd+tcp://5.6.7.8:464",
"ipactl restart",
"[global] configs = mit use_dns = true",
"[realms] AD. EXAMPLE.COM = { kdc = ad-server.ad.example.com kpasswd_server = ad-server.ad.example.com }",
"ipactl restart"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/using-the-kdc-proxy-in-idm_configuring-and-managing-idm
|
6.4. SMB
|
6.4. SMB You can access Red Hat Gluster Storage volumes using the Server Message Block (SMB) protocol by exporting directories in Red Hat Gluster Storage volumes as SMB shares on the server. This section describes how to enable SMB shares, how to mount SMB shares manually and automatically on Microsoft Windows and macOS based clients, and how to verify that the share has been mounted successfully. Warning When performance translators are enabled, data inconsistency is observed when multiple clients access the same data. To avoid data inconsistency, you can either disable the performance translators or avoid such workloads. Follow the process outlined below. The details of this overview are provided in the rest of this section. Overview of configuring SMB shares Verify that your system fulfils the requirements outlined in Section 6.4.1, "Requirements for using SMB with Red Hat Gluster Storage" . If you want to share volumes that use replication, set up CTDB: Section 6.4.2, "Setting up CTDB for Samba" . Configure your volumes to be shared using SMB: Section 6.4.3, "Sharing Volumes over SMB" . If you want to mount volumes on macOS clients: Section 6.4.4.1, "Configuring the Apple Create Context for macOS users" . Set up permissions for user access: Section 6.4.4.2, "Configuring read/write access for a non-privileged user" . Mount the shared volume on a client: Section 6.4.5.1, "Manually mounting volumes exported with SMB on Red Hat Enterprise Linux" Section 6.4.5.4, "Configuring automatic mounting for volumes exported with SMB on Red Hat Enterprise Linux" Section 6.4.5.2.1, "Using Microsoft Windows Explorer to manually mount a volume" Section 6.4.5.2.2, "Using Microsoft Windows command line interface to manually mount a volume" Section 6.4.5.5, "Configuring automatic mounting for volumes exported with SMB on Microsoft Windows" Section 6.4.5.3, "Manually mounting volumes exported with SMB on macOS" Section 6.4.5.6, "Configuring automatic mounting for volumes exported with SMB on macOS" Verify that your shared volume is working properly: Section 6.4.6, "Starting and Verifying your Configuration" 6.4.1. Requirements for using SMB with Red Hat Gluster Storage Samba is required to provide support and interoperability for the SMB protocol on Red Hat Gluster Storage. Additionally, CTDB is required when you want to share replicated volumes using SMB. See Subscribing to the Red Hat Gluster Storage server channels in the Red Hat Gluster Storage 3.5 Installation Guide for information on subscribing to the correct channels for SMB support. Enable the Samba firewall service in the active zones for runtime and permanent mode. The following commands are for systems based on Red Hat Enterprise Linux 7. To get a list of active zones, run the following command: To allow the firewall services in the active zones, run the following commands 6.4.2. Setting up CTDB for Samba If you want to share volumes that use replication using SMB, you need to configure CTDB (Cluster Trivial Database) to provide high availability and lock synchronization. CTDB provides high availability by adding virtual IP addresses (VIPs) and a heartbeat service. When a node in the trusted storage pool fails, CTDB enables a different node to take over the virtual IP addresses that the failed node was hosting. This ensures the IP addresses for the services provided are always available. Important Amazon Elastic Compute Cloud (EC2) does not support VIPs and is hence not compatible with this solution. 
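The firewall steps described in Section 6.4.1 might look like the following on a Red Hat Enterprise Linux 7 based server; the zone name public is an assumption and should be replaced with each active zone reported by the first command:
# List the active zones
firewall-cmd --get-active-zones
# Allow the Samba service in the active zone, for runtime and permanent mode
firewall-cmd --zone=public --add-service=samba
firewall-cmd --zone=public --add-service=samba --permanent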
Prerequisites If you already have an older version of CTDB (version <= ctdb1.x), then remove CTDB by executing the following command: After removing the older version, proceed with installing the latest CTDB. Note Ensure that the system is subscribed to the samba channel to get the latest CTDB packages. Install the latest version of CTDB on all the nodes that are used as Samba servers using the following command: In a CTDB-based Samba high availability environment, locks are not migrated on failover. Enable the CTDB firewall service in the active zones for runtime and permanent mode. The following commands are for systems based on Red Hat Enterprise Linux 7. To get a list of active zones, run the following command: To add ports to the active zones, run the following commands: Best Practices CTDB requires a different broadcast domain from the Gluster internal network. The network used by the Windows clients to access the Gluster volumes exported by Samba must be different from the internal Gluster network. Failing to do so can lead to excessive failover times when CTDB fails over between the nodes, and degraded performance when accessing the shares from Windows. For example, the following is an incorrect setup where CTDB is running in the 192.168.10.X network: Note The host names, node1, node2, and node3 are used to set up the bricks and resolve the IPs in the same network 192.168.10.X. The Windows clients are accessing the shares using the internal Gluster network and this should not be the case. Additionally, the CTDB network and the Gluster internal network must run on separate physical interfaces. Red Hat recommends 10GbE interfaces for better performance. It is recommended to use the same network bandwidth for Gluster and CTDB networks. Using different network speeds can lead to performance bottlenecks. The same amount of network traffic is expected in both internal and external networks. Configuring CTDB on Red Hat Gluster Storage Server Create a new replicated volume to house the CTDB lock file. The lock file has a size of zero bytes, so use small bricks. To create a replicated volume, run the following command, replacing N with the number of nodes to replicate across: For example: In the following files, replace all in the statement META="all" with the newly created volume name, for example, META="ctdb" . In the /etc/samba/smb.conf file, add the following line in the global section on all the nodes: Start the volume. The S29CTDBsetup.sh script runs on all Red Hat Gluster Storage servers, adds an entry in /etc/fstab for the mount, and mounts the volume at /gluster/lock on all the nodes that run a Samba server. It also enables automatic start of the CTDB service on reboot. Note When you stop the special CTDB volume, the S29CTDB-teardown.sh script runs on all Red Hat Gluster Storage servers and removes an entry in /etc/fstab for the mount and unmounts the volume at /gluster/lock . Verify that the /etc/ctdb directory exists on all nodes that are used as a Samba server. This directory contains CTDB configuration details recommended for Red Hat Gluster Storage. Create the /etc/ctdb/nodes file on all the nodes that are used as Samba servers and add the IP addresses of these nodes to the file. The IP addresses listed here are the private IP addresses of Samba servers. On nodes that are used as Samba servers and require IP failover, create the /etc/ctdb/public_addresses file. Add any virtual IP addresses that CTDB should create to the file in the following format: For example: Start the CTDB service on all the nodes.
On RHEL 7 and RHEL 8, run On RHEL 6, run 6.4.3. Sharing Volumes over SMB After you follow this process, any gluster volumes configured on servers that run Samba are exported automatically on volume start. See the following example of a default volume share section added to /etc/samba/smb.conf : The configuration options are described in the following table: Table 6.8. Configuration Options Configuration Options Required? Default Value Description Path Yes n/a It represents the path that is relative to the root of the gluster volume that is being shared. Hence / represents the root of the gluster volume. Exporting a subdirectory of a volume is supported and /subdir in path exports only that subdirectory of the volume. glusterfs:volume Yes n/a The volume name that is shared. glusterfs:logfile No NULL Path to the log file that will be used by the gluster modules that are loaded by the vfs plugin. Standard Samba variable substitutions as mentioned in smb.conf are supported. glusterfs:loglevel No 7 This option is equivalent to the client-log-level option of gluster. 7 is the default value and corresponds to the INFO level. glusterfs:volfile_server No localhost The gluster server to be contacted to fetch the volfile for the volume. It takes a value that is a list of white-space-separated elements, where each element is unix+/path/to/socket/file or [tcp+]IP|hostname|\[IPv6\][:port] The procedure to share volumes over Samba differs depending on the Samba version you use. If you are using an older version of Samba: Enable SMB-specific caching: You can also enable generic metadata caching to improve performance. See Section 19.7, "Directory Operations" for details. Restart the glusterd service on each Red Hat Gluster Storage node. Verify proper lock and I/O coherence: Note For RHEL-based Red Hat Gluster Storage upgrading to 3.5 batch update 4 with Samba, the write-behind translator has to be manually disabled for all existing Samba volumes. If you are using Samba-4.8.5-104 or later: To export a gluster volume as an SMB share via Samba, one of the following volume options, user.cifs or user.smb , is required. To enable the user.cifs volume option, run: And to enable user.smb, run: Red Hat Gluster Storage 3.4 introduces a group command samba for configuring the necessary volume options for a Samba-CTDB setup. Execute the following command to configure the volume options for the Samba-CTDB setup: This command enables the following options for the Samba-CTDB setup: performance.readdir-ahead: on performance.parallel-readdir: on performance.nl-cache-timeout: 600 performance.nl-cache: on performance.cache-samba-metadata: on network.inode-lru-limit: 200000 performance.md-cache-timeout: 600 performance.cache-invalidation: on features.cache-invalidation-timeout: 600 features.cache-invalidation: on performance.stat-prefetch: on If you are using Samba-4.9.8-109 or later: The following steps are strictly optional and should be followed in environments where a large number of clients connect to volumes and/or many volumes are in use. Red Hat Gluster Storage 3.5 introduces an optional method for configuring volume shares from corresponding FUSE-mounted paths. The following steps need to be performed on every node in the cluster. Create a local mount using the native Gluster FUSE protocol on every Gluster node that shares the Gluster volume via Samba. 
Mount the GlusterFS volume via FUSE and record the FUSE mountpoint for further steps: Add an entry in /etc/fstab : For example: where the gluster volume myvol will be mounted on /mylocal . Section 6.2.3.3, "Mounting Volumes Automatically" Section 6.2.3.2, "Mounting Volumes Manually" Edit the Samba share configuration file located at /etc/samba/smb.conf : Edit the vfs objects parameter value to glusterfs_fuse . Edit the path parameter value to the FUSE mountpoint recorded previously. For example: With SELinux in Enforcing mode, turn on the SELinux boolean samba_share_fusefs : Note Newly created volumes are automatically configured with the default vfs objects parameter. Modifications to the Samba share configuration file are retained across volume restarts until these volumes are deleted using the Gluster CLI. The Samba hook scripts invoked as part of Gluster CLI operations on a volume VOLNAME will only operate on a Samba share named [gluster-VOLNAME] . In other words, hook scripts will never delete or change the Samba share configuration file for a Samba share called [VOLNAME] . Then, for all Samba versions: Verify that the volume can be accessed from the SMB/CIFS share: For example: To verify that the SMB/CIFS share can be accessed by the user, run the following command: For example: 6.4.4. Configuring User Access to Shared Volumes 6.4.4.1. Configuring the Apple Create Context for macOS users Add the following lines to the [global] section of the smb.conf file. Note that the indentation level shown is required. Load the vfs_fruit module and its dependencies by adding the following line to your volume's export configuration block in the smb.conf file. For example: 6.4.4.2. Configuring read/write access for a non-privileged user Add the user on all the Samba servers based on your configuration: Add the user to the list of Samba users on all Samba servers and assign a password by executing the following command: From any other Samba server, mount the volume using the FUSE protocol. For example: Use the setfacl command to provide the required permissions for directory access to the user. For example: 6.4.5. Mounting Volumes using SMB 6.4.5.1. Manually mounting volumes exported with SMB on Red Hat Enterprise Linux Install the cifs-utils package on the client. Run mount -t cifs to mount the exported SMB share, using the syntax example as guidance. The sec=ntlmssp parameter is also required when mounting a volume on Red Hat Enterprise Linux 6. For example: Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See the Version Details table in the section Red Hat Gluster Storage Software Components and Versions of the Installation Guide . Run # smbstatus -S on the server to display the status of the volume: 6.4.5.2. Manually mounting volumes exported with SMB on Microsoft Windows 6.4.5.2.1. Using Microsoft Windows Explorer to manually mount a volume In Windows Explorer, click Tools > Map Network Drive... to open the Map Network Drive screen. Choose the drive letter using the Drive drop-down list. In the Folder text box, specify the path of the server and the shared resource in the following format: \\ SERVER_NAME \ VOLNAME . Click Finish to complete the process, and display the network drive in Windows Explorer. Navigate to the network drive to verify it has mounted correctly. 6.4.5.2.2. Using Microsoft Windows command line interface to manually mount a volume Click Start > Run , and then type cmd . 
Enter net use z : \\ SERVER_NAME \ VOLNAME , where z: is the drive letter to assign to the shared volume. For example, net use y: \\server1\test-volume Navigate to the network drive to verify it has mounted correctly. 6.4.5.3. Manually mounting volumes exported with SMB on macOS Prerequisites Ensure that your Samba configuration allows the use of the SMB Apple Create Context. Ensure that the username you're using is on the list of allowed users for the volume. Manual mounting process In the Finder , click Go > Connect to Server . In the Server Address field, type the IP address or hostname of a Red Hat Gluster Storage server that hosts the volume you want to mount. Click Connect . When prompted, select Registered User to connect to the volume using a valid username and password. If required, enter your user name and password, then select the server volumes or shared folders that you want to mount. To make it easier to connect to the computer in the future, select Remember this password in my keychain to add your user name and password for the computer to your keychain. For further information about mounting volumes on macOS, see the Apple Support documentation: https://support.apple.com/en-in/guide/mac-help/mchlp1140/mac . 6.4.5.4. Configuring automatic mounting for volumes exported with SMB on Red Hat Enterprise Linux Open the /etc/fstab file in a text editor and add a line containing the following details: In the OPTIONS column, ensure that you specify the credentials option, with a value of the path to the file that contains the username and/or password. Using the example server names, the entry contains the following replaced values. The sec=ntlmssp parameter is also required when mounting a volume on Red Hat Enterprise Linux 6, for example: See the mount.cifs man page for more information about these options. Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See the Version Details table in the section Red Hat Gluster Storage Software Components and Versions of the Installation Guide . Run # smbstatus -S on the client to display the status of the volume: 6.4.5.5. Configuring automatic mounting for volumes exported with SMB on Microsoft Windows In Windows Explorer, click Tools > Map Network Drive... to open the Map Network Drive screen. Choose the drive letter using the Drive drop-down list. In the Folder text box, specify the path of the server and the shared resource in the following format: \\ SERVER_NAME \ VOLNAME . Click the Reconnect at logon check box. Click Finish to complete the process, and display the network drive in Windows Explorer. If the Windows Security screen pops up, enter the username and password and click OK . Navigate to the network drive to verify it has mounted correctly. 6.4.5.6. Configuring automatic mounting for volumes exported with SMB on macOS Manually mount the volume using the process outlined in Section 6.4.5.3, "Manually mounting volumes exported with SMB on macOS" . In the Finder , click System Preferences > Users & Groups > Username > Login Items . Drag and drop the mounted volume into the login items list. Check Hide if you want to prevent the drive's window from opening every time you boot or log in. For further information about mounting volumes on macOS, see the Apple Support documentation: https://support.apple.com/en-in/guide/mac-help/mchlp1140/mac . 6.4.6. 
Starting and Verifying your Configuration Perform the following to start and verify your configuration: Verify the Configuration Verify that the virtual IP (VIP) addresses of a shut-down server are carried over to another server in the replicated volume. Verify that CTDB is running using the following commands: Mount a Red Hat Gluster Storage volume using any one of the VIPs. Run # ctdb ip to locate the physical server serving the VIP. Shut down the CTDB VIP server to verify successful configuration. When the Red Hat Gluster Storage server serving the VIP is shut down, there will be a pause for a few seconds, then I/O will resume. 6.4.7. Disabling SMB Shares To stop automatic sharing on all nodes for all volumes, execute the following steps: On all Red Hat Gluster Storage Servers, with elevated privileges, navigate to /var/lib/glusterd/hooks/1/start/post . Rename S30samba-start.sh to K30samba-start.sh. For more information about these scripts, see Section 13.2, "Prepackaged Scripts". To stop automatic sharing on all nodes for one particular volume: Run the following command to disable automatic SMB sharing per volume: 6.4.8. Accessing Snapshots in Windows A snapshot is a read-only point-in-time copy of the volume. Windows has an inbuilt mechanism to browse snapshots via the Volume Shadow-copy Service (also known as VSS). Using this feature, users can access previous versions of any file or folder with minimal steps. Note Shadow Copy (also known as Volume Shadow-copy Service, or VSS) is a technology included in Microsoft Windows that allows taking snapshots of computer files or volumes in addition to viewing snapshots. Currently, only viewing of snapshots is supported. Creation of snapshots with this interface is NOT supported. 6.4.8.1. Configuring Shadow Copy To configure shadow copy, the following configurations must be modified/edited in the smb.conf file. The smb.conf file is located at /etc/samba/smb.conf. Note Ensure that the shadow_copy2 module is enabled in smb.conf. To enable it, add the following parameter to the vfs objects option. For example: Table 6.9. Configuration Options Configuration Options Required? Default Value Description shadow:snapdir Yes n/a Path to the directory where snapshots are kept. The snapdir name should be .snaps. shadow:basedir Yes n/a Path to the base directory that snapshots are from. The basedir value should be /. shadow:sort Optional unsorted The supported values are asc/desc. With this parameter, you can specify that the shadow copy directories should be sorted before they are sent to the client. This can be beneficial because Unix file systems are usually not listed in alphabetical order. If enabled, it is specified in descending order. shadow:localtime Optional UTC This is an optional parameter that indicates whether the snapshot names are in UTC/GMT or in local time. shadow:format Yes n/a This parameter specifies the format specification for the naming of snapshots. The format must be compatible with the conversion specifications recognized by str[fp]time. The default value is _GMT-%Y.%m.%d-%H.%M.%S. shadow:fixinodes Optional No If you enable shadow:fixinodes, then this module will modify the apparent inode number of files in the snapshot directories using a hash of the file's path. This is needed for snapshot systems where the snapshots have the same device:inode number as the original files (such as happens with GPFS snapshots). If you don't set this option, then the 'restore' button in the shadow copy UI will fail with a sharing violation. 
shadow:snapprefix Optional n/a Regular expression to match the prefix of the snapshot name. Red Hat Gluster Storage only supports Basic Regular Expressions (BRE). shadow:delimiter Optional _GMT The delimiter is used to separate shadow:snapprefix and shadow:format. The following is an example of the smb.conf file: In the above example, the listed parameters have to be added to the smb.conf file to enable shadow copy. Not all of the options shown are mandatory. Note When configuring Shadow Copy with glusterfs_fuse, modify the smb.conf file configuration. For example: In the above example, `MOUNTDIR` is a local FUSE mountpoint. Shadow copy will filter all the snapshots based on the smb.conf entries. It will only show those snapshots that match the criteria. In the example mentioned earlier, the snapshot name should start with an 'S' and end with a 'p', and any alphanumeric characters in between are considered for the search. For example, in the following list of snapshots, the first two snapshots will be shown by Windows and the last one will be ignored. Hence, these options help you filter which snapshots to show and which to hide. After editing the smb.conf file, execute the following steps to enable snapshot access: Start or restart the smb service. On RHEL 7 and RHEL 8, run systemctl [re]start smb On RHEL 6, run service smb [re]start Enable User Serviceable Snapshot (USS) for Samba. For more information, see Section 8.13, "User Serviceable Snapshots" 6.4.8.2. Accessing Snapshot To access snapshots on the Windows system, execute the following steps: Right-click the file or directory for which the version is required. Click Restore versions . In the dialog box, select the Date/Time of the version of the file, and select either Open , Restore , or Copy . where, Open: Lets you open the required version of the file in read-only mode. Restore: Restores the file back to the selected version. Copy: Lets you copy the file to a different location. Figure 6.1. Accessing Snapshot 6.4.9. Tuning Performance This section provides details regarding improving the system performance in an SMB environment. The various enhancement tasks can be classified into: Enabling Metadata Caching to improve the performance of SMB access of Red Hat Gluster Storage volumes. Enhancing Directory Listing Performance Enhancing File/Directory Create Performance More detailed information on each of these is provided in the sections ahead. 6.4.9.1. Enabling Metadata Caching Enable metadata caching to improve the performance of directory operations. Execute the following commands from any one of the nodes in the trusted storage pool in the order mentioned below. Note If the majority of the workload is modifying the same set of files and directories simultaneously from multiple clients, then enabling metadata caching might not provide the desired performance improvement. Execute the following command to enable metadata caching and cache invalidation: This is a group set option that sets multiple volume options in a single command. To increase the number of files that can be cached, execute the following command: n is set to 50000. It can be increased if the number of active files in the volume is very high. Increasing this number increases the memory footprint of the brick processes. 6.4.9.2. Enhancing Directory Listing Performance The directory listing gets slower as the number of bricks/nodes in a volume increases, even though the number of files/directories remains unchanged. 
By enabling the parallel readdir volume option, the performance of directory listing is made independent of the number of nodes/bricks in the volume. Thus, the increase in the scale of the volume does not reduce the directory listing performance. Note You can expect an increase in performance only if the distribute count of the volume is 2 or greater and the size of the directory is small (< 3000 entries). The larger the volume (distribute count), the greater the performance benefit. To enable parallel readdir, execute the following commands: Verify if the performance.readdir-ahead option is enabled by executing the following command: If performance.readdir-ahead is not enabled, then execute the following command: Execute the following command to enable the parallel-readdir option: Note If there are more than 50 bricks in the volume, it is recommended to increase the cache size beyond the default value of 10Mb: 6.4.9.3. Enhancing File/Directory Create Performance Before creating or renaming any file, lookups (5-6 in SMB) are sent to verify if the file already exists. Serving these lookups from the cache when possible increases the create/rename performance many times over in SMB access. Execute the following command to enable the negative-lookup cache: Note The above command also enables cache-invalidation and increases the timeout to 10 minutes.
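As a consolidated illustration of the tuning discussed in this section, the commands might be applied to a hypothetical volume named myvol as follows; the volume name is an example only, and the option names and values are the ones documented above:
# Enable metadata caching and cache invalidation, and raise the inode cache limit
gluster volume set myvol group metadata-cache
gluster volume set myvol network.inode-lru-limit 200000
# Check whether readdir-ahead is enabled, enable it if needed, then enable parallel-readdir
gluster volume get myvol performance.readdir-ahead
gluster volume set myvol performance.readdir-ahead on
gluster volume set myvol performance.parallel-readdir on
# Enable the negative-lookup cache group option
gluster volume set myvol group nl-cache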
|
[
"firewall-cmd --get-active-zones",
"firewall-cmd --zone= zone_name --add-service=samba firewall-cmd --zone= zone_name --add-service=samba --permanent",
"yum remove ctdb",
"yum install ctdb",
"firewall-cmd --get-active-zones",
"firewall-cmd --zone= zone_name --add-port=4379/tcp firewall-cmd --zone= zone_name --add-port=4379/tcp --permanent",
"Status of volume: ctdb Gluster process TCP Port RDMA Port Online Pid Brick node1:/rhgs/ctdb/b1 49157 0 Y 30439 Brick node2:/rhgs/ctdb/b1 49157 0 Y 3827 Brick node3:/rhgs/ctdb/b1 49157 0 Y 89421 Self-heal Daemon on localhost N/A N/A Y 183026 Self-heal Daemon on sesdel0207 N/A N/A Y 44245 Self-heal Daemon on segotl4158 N/A N/A Y 110627 cat ctdb_listnodes 192.168.10.1 192.168.10.2 cat ctdb_ip Public IPs on node 0 192.168.10.3 0",
"gluster volume create volname replica N ip_address_1 : brick_path ... ip_address_N : brick_path",
"gluster volume create ctdb replica 3 10.16.157.75:/rhgs/brick1/ctdb/b1 10.16.157.78:/rhgs/brick1/ctdb/b2 10.16.157.81:/rhgs/brick1/ctdb/b3",
"/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh",
"clustering=yes",
"gluster volume start ctdb",
"10.16.157.0 10.16.157.3 10.16.157.6",
"VIP / routing_prefix network_interface",
"192.168.1.20/24 eth0 192.168.1.21/24 eth0",
"systemctl start ctdb",
"service ctdb start",
"[gluster- VOLNAME ] comment = For samba share of volume VOLNAME vfs objects = glusterfs glusterfs:volume = VOLNAME glusterfs:logfile = /var/log/samba/ VOLNAME .log glusterfs:loglevel = 7 path = / read only = no guest ok = yes",
"gluster volume set VOLNAME performance.cache-samba-metadata on",
"gluster volume set VOLNAME storage.batch-fsync-delay-usec 0",
"gluster volume set <volname> performance.write-behind off",
"gluster volume set VOLNAME user.cifs enable",
"gluster volume set VOLNAME user.smb enable",
"gluster volume set VOLNAME group samba",
"localhost:/myvol /mylocal glusterfs defaults,_netdev,acl 0 0",
"localhost:/myvol 4117504 1818292 2299212 45% /mylocal",
"[gluster- VOLNAME ] comment = For samba share of volume VOLNAME vfs objects = glusterfs glusterfs:volume = VOLNAME glusterfs:logfile = /var/log/samba/ VOLNAME .log glusterfs:loglevel = 7 path = / read only = no guest ok = yes",
"vfs objects = glusterfs_fuse",
"path = /MOUNTDIR",
"setsebool -P samba_share_fusefs on",
"smbclient -L <hostname> -U%",
"smbclient -L rhs-vm1 -U% Domain=[MYGROUP] OS=[Unix] Server=[Samba 4.1.17] Sharename Type Comment --------- ---- ------- IPCUSD IPC IPC Service (Samba Server Version 4.1.17) gluster-vol1 Disk For samba share of volume vol1 Domain=[MYGROUP] OS=[Unix] Server=[Samba 4.1.17] Server Comment --------- ------- Workgroup Master --------- -------",
"smbclient //<hostname>/gluster-<volname> -U <username>%<password>",
"smbclient //10.0.0.1/gluster-vol1 -U root%redhat Domain=[MYGROUP] OS=[Unix] Server=[Samba 4.1.17] smb: \\> mkdir test smb: \\> cd test smb: \\test\\> pwd Current directory is \\\\10.0.0.1\\gluster-vol1\\test smb: \\test\\>",
"fruit:aapl = yes ea support = yes",
"vfs objects = fruit streams_xattr glusterfs",
"[gluster-volname] comment = For samba share of volume smbshare vfs objects = fruit streams_xattr glusterfs glusterfs:volume = volname glusterfs:logfile = /var/log/samba/glusterfs-volname-fruit.%M.log glusterfs:loglevel = 7 path = / read only = no guest ok = yes fruit:encoding = native",
"adduser username",
"smbpasswd -a username",
"mount -t glusterfs -o acl ip-address :/volname / mountpoint",
"mount -t glusterfs -o acl rhs-a:/repvol /mnt",
"setfacl -m user: username :rwx mountpoint",
"setfacl -m user:cifsuser:rwx /mnt",
"yum install cifs-utils",
"mount -t cifs -o user= username ,pass= password // hostname /gluster- volname / mountpoint",
"mount -t cifs -o user= username ,pass= password ,sec=ntlmssp // hostname /gluster- volname / mountpoint",
"mount -t cifs -o user=cifsuser,pass=redhat,sec=ntlmssp //server1/gluster-repvol /cifs",
"Service pid machine Connected at ------------------------------------------------------------------- gluster- VOLNAME 11967 __ffff_192.168.1.60 Mon Aug 6 02:23:25 2012",
"\\\\ HOSTNAME|IPADDRESS \\ SHARE_NAME MOUNTDIR cifs OPTIONS DUMP FSCK",
"\\\\server1\\test-volume /mnt/glusterfs cifs credentials=/etc/samba/passwd,_netdev 0 0",
"\\\\server1\\test-volume /mnt/glusterfs cifs credentials=/etc/samba/passwd,_netdev,sec=ntlmssp 0 0",
"Service pid machine Connected at ------------------------------------------------------------------- gluster- VOLNAME 11967 __ffff_192.168.1.60 Mon Aug 6 02:23:25 2012",
"ctdb status ctdb ip ctdb ping -n all",
"gluster volume set <VOLNAME> user.smb disable",
"vfs objects = shadow_copy2 glusterfs",
"[gluster-vol0] comment = For samba share of volume vol0 vfs objects = shadow_copy2 glusterfs glusterfs:volume = vol0 glusterfs:logfile = /var/log/samba/glusterfs-vol0.%M.log glusterfs:loglevel = 3 path = / read only = no guest ok = yes shadow:snapdir = /.snaps shadow:basedir = / shadow:sort = desc shadow:snapprefix= ^S[A-Za-z0-9]*pUSD shadow:format = _GMT-%Y.%m.%d-%H.%M.%S",
"vfs objects = shadow_copy2 glusterfs_fuse",
"[gluster-vol0] comment = For samba share of volume vol0 vfs objects = shadow_copy2 glusterfs_fuse path = /MOUNTDIR read only = no guest ok = yes shadow:snapdir = /MOUNTDIR/.snaps shadow:basedir = /MOUNTDIR shadow:sort = desc shadow:snapprefix= ^S[A-Za-z0-9]*pUSD shadow:format = _GMT-%Y.%m.%d-%H.%M.%S",
"Snap_GMT-2016.06.06-06.06.06 Sl123p_GMT-2016.07.07-07.07.07 xyz_GMT-2016.08.08-08.08.08",
"gluster volume set < volname > group metadata-cache",
"gluster volume set < VOLNAME > network.inode-lru-limit < n >",
"gluster volume get <VOLNAME> performance.readdir-ahead",
"gluster volume set <VOLNAME> performance.readdir-ahead on",
"gluster volume set <VOLNAME> performance.parallel-readdir on",
"gluster volume set <VOLNAME> performance.rda-cache-limit <CACHE SIZE>",
"gluster volume set <volname> group nl-cache volume set success"
] |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-SMB
|
Release notes for Red Hat build of OpenJDK 11.0.15
|
Release notes for Red Hat build of OpenJDK 11.0.15 Red Hat build of OpenJDK 11 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.15/index
|
Part II. Notable Bug Fixes
|
Part II. Notable Bug Fixes This part describes bugs fixed in Red Hat Enterprise Linux 7.5 that have a significant impact on users.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/bug-fixes
|
Chapter 7. Red Hat Quay sizing and subscriptions
|
Chapter 7. Red Hat Quay sizing and subscriptions Scalability of Red Hat Quay is one of its key strengths, with a single code base supporting a broad spectrum of deployment sizes, including the following: Proof of Concept deployment on a single development machine Mid-size deployment of approximately 2,000 users that can serve content to dozens of Kubernetes clusters High-end deployment such as Quay.io that can serve thousands of Kubernetes clusters world-wide Since sizing heavily depends on a multitude of factors, such as the number of users, images, and concurrent pulls and pushes, there are no standard sizing recommendations. The following are the minimum requirements for systems running Red Hat Quay (per container/pod instance): Quay: minimum 6 GB; recommended 8 GB, 2 or more vCPUs Clair: recommended 2 GB RAM and 2 or more vCPUs Storage: recommended 30 GB NooBaa: minimum 2 GB, 1 vCPU (when the objectstorage component is selected by the Operator) Clair database: minimum 5 GB required for security metadata Stateless components of Red Hat Quay can be scaled out, but this will cause a heavier load on stateful backend services. 7.1. Red Hat Quay sample sizings The following table shows approximate sizing for Proof of Concept, mid-size, and high-end deployments. Whether a deployment runs appropriately with the same metrics depends on many factors not shown below. Metric Proof of concept Mid-size High End (Quay.io) No. of Quay containers by default 1 4 15 No. of Quay containers max at scale-out N/A 8 30 No. of Clair containers by default 1 3 10 No. of Clair containers max at scale-out N/A 6 15 No. of mirroring pods (to mirror 100 repositories) 1 5-10 N/A Database sizing 2-4 Cores 6-8 GB RAM 10-20 GB disk 4-8 Cores 6-32 GB RAM 100 GB - 1 TB disk 32 cores 244 GB 1+ TB disk Object storage backend sizing 10-100 GB 1-20 TB 50+ TB up to PB Redis cache sizing 2 Cores 2-4 GB RAM 4 cores 28 GB RAM Underlying node sizing (physical or virtual) 4 Cores 8 GB RAM 4-6 Cores 12-16 GB RAM Quay: 13 cores 56 GB RAM Clair: 2 cores 4 GB RAM For further details on sizing and related recommendations for mirroring, see the section on repository mirroring . The sizing for the Redis cache is only relevant if you use Quay builders; otherwise, it is not significant. 7.2. Red Hat Quay subscription information Red Hat Quay is available with Standard or Premium support, and subscriptions are based on deployments. Note Deployment means an installation of a single Red Hat Quay registry using a shared data backend. With a Red Hat Quay subscription, the following options are available: There is no limit on the number of pods, such as Quay, Clair, Builder, and so on, that you can deploy. Red Hat Quay pods can run in multiple data centers or availability zones. Storage and database backends can be deployed across multiple data centers or availability zones, but only as a single, shared storage backend and a single, shared database backend. Red Hat Quay can manage content for an unlimited number of clusters or standalone servers. Clients can access the Red Hat Quay deployment regardless of their physical location. You can deploy Red Hat Quay on OpenShift Container Platform infrastructure nodes to minimize subscription requirements. You can run the Container Security Operator (CSO) and the Quay Bridge Operator (QBO) on your OpenShift Container Platform clusters at no additional cost. Note Red Hat Quay geo-replication requires a subscription for each storage replication. The database, however, is shared. 
For more information about purchasing a Red Hat Quay subscription, see Red Hat Quay . 7.3. Using Red Hat Quay with or without internal registry Red Hat Quay can be used as an external registry in front of multiple OpenShift Container Platform clusters with their internal registries. Red Hat Quay can also be used in place of the internal registry when it comes to automating builds and deployment rollouts. The required coordination of Secrets and ImageStreams is automated by the Quay Bridge Operator, which can be launched from the OperatorHub for OpenShift Container Platform.
| null |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_architecture/sizing-intro
|
5.360. xinetd
|
5.360. xinetd 5.360.1. RHBA-2012:1162 - xinetd bug fix update An updated xinetd package that fixes one bug is now available for Red Hat Enterprise Linux 6. Xinetd is a secure replacement for inetd, the Internet services daemon. Xinetd provides access control for all services based on the address of the remote host and/or on time of access, and can prevent denial-of-access attacks. Xinetd provides extensive logging, has no limit on the number of server arguments, and allows users to bind specific services to specific IP addresses on a host machine. Each service has its own specific configuration file for Xinetd; the files are located in the /etc/xinetd.d directory. Bug Fix BZ# 841916 Due to incorrect handling of a file descriptor array in the service.c source file, some of the descriptors remained open when xinetd was under heavy load. Additionally, the system log was filled with a large number of messages that took up a lot of disk space over time. This bug has been fixed in the code; xinetd now handles the file descriptors correctly and no longer fills the system log. All users of xinetd are advised to upgrade to this updated package, which fixes this bug. 5.360.2. RHBA-2012:0409 - xinetd bug fix update An updated xinetd package that fixes multiple bugs is now available for Red Hat Enterprise Linux 6. The xinetd daemon is a secure replacement for inetd, the Internet services daemon. The xinetd daemon provides access control for all services based on the address of the remote host, on time of access, or both, and can prevent denial of service (DoS) attacks. Bug Fixes BZ# 694820 Under certain circumstances, the xinetd daemon could become unresponsive (for example, when trying to acquire an already acquired lock for writing to its log file) when an unexpected signal arrived. With this update, the daemon handles unexpected signals correctly and no longer hangs under these circumstances. BZ# 697783 Previously, a bug in the xinetd code could cause corruption of the time_t variable, resulting in the following compiler warning: A patch has been applied to address this issue, so that the warning no longer occurs. BZ# 697788 Previously, the xinetd daemon ignored the "port" line of the service configuration file, and it was therefore impossible to bind certain RPC services to a specific port. The underlying source code has been modified to ensure that xinetd honors the "port" line, so that the port numbers are now handled appropriately. BZ# 711787 Incorrect use of the realloc() function could cause memory corruption. This resulted in the xinetd daemon terminating unexpectedly right after the start when a large number of services had been configured. The realloc() function has been removed, which ensures that memory corruption no longer occurs in this scenario, and the xinetd daemon starts successfully even when configuring a large number of services. All users of xinetd are advised to upgrade to this updated package, which fixes these bugs.
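As an illustration of the "port" line referenced in BZ# 697788, a minimal xinetd service file might look like the following sketch; the service name, server path, port, and bind address are hypothetical examples, while the attribute names are standard xinetd configuration keywords:
# /etc/xinetd.d/myrpc (hypothetical service definition)
service myrpc
{
        type            = UNLISTED
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = root
        server          = /usr/local/sbin/myrpcd
        port            = 12345
        bind            = 192.168.1.10
        disable         = no
}
With the fix described above, xinetd honors the port line, so the service listens on the specified port and address.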
|
[
"warning: dereferencing type-punned pointer will break strict-aliasing rules"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/xinetd
|
Getting started with hybrid committed spend
|
Getting started with hybrid committed spend Hybrid committed spend 1-latest Learn about and configure hybrid committed spend Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/hybrid_committed_spend/1-latest/html/getting_started_with_hybrid_committed_spend/index
|
Chapter 17. Red Hat Build of OptaPlanner on Spring Boot: a school timetable quick start guide
|
Chapter 17. Red Hat Build of OptaPlanner on Spring Boot: a school timetable quick start guide This guide walks you through the process of creating a Spring Boot application with OptaPlanner's constraint solving artificial intelligence (AI). You will build a REST application that optimizes a school timetable for students and teachers. Your service will assign Lesson instances to Timeslot and Room instances automatically by using AI to adhere to the following hard and soft scheduling constraints: A room can have at most one lesson at the same time. A teacher can teach at most one lesson at the same time. A student can attend at most one lesson at the same time. A teacher prefers to teach in a single room. A teacher prefers to teach sequential lessons and dislikes gaps between lessons. Mathematically speaking, school timetabling is an NP-hard problem. That means it is difficult to scale. Simply iterating through all possible combinations with brute force would take millions of years for a non-trivial data set, even on a supercomputer. Fortunately, AI constraint solvers such as OptaPlanner have advanced algorithms that deliver a near-optimal solution in a reasonable amount of time. What is considered to be a reasonable amount of time is subjective and depends on the goals of your problem. Prerequisites OpenJDK 11 or later is installed. Red Hat build of OpenJDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.8 or higher is installed. Maven is available from the Apache Maven Project website. An IDE, such as IntelliJ IDEA, VSCode, or Eclipse, is available. 17.1. Downloading and building the Spring Boot school timetable quick start If you want to see a completed example of the school timetable project for the Red Hat Build of OptaPlanner with Spring Boot product, download the starter application from the Red Hat Customer Portal. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Red Hat Build of OptaPlanner Version: 8.38 Download Red Hat Build of OptaPlanner 8.38 Quick Starts . Extract the rhbop-8.38.0-optaplanner-quickstarts-sources.zip file. The extracted org.optaplanner.optaplanner-quickstarts-8.38.0.Final-redhat-00004/use-cases/school-timetabling directory contains example source code. Navigate to the org.optaplanner.optaplanner-quickstarts-8.38.0.Final-redhat-00004/use-cases/school-timetabling directory. Download the Red Hat Build of OptaPlanner 8.38.0 Maven Repository ( rhbop-8.38.0-optaplanner-maven-repository.zip ). Extract the rhbop-8.38.0-optaplanner-maven-repository.zip file. Copy the contents of the rhbop-8.38.0-optaplanner/maven-repository subdirectory into the ~/.m2/repository directory. Navigate to the org.optaplanner.optaplanner-quickstarts-8.38.0.Final-redhat-00004/technology/java-spring-boot directory. Enter the following command to build the Spring Boot school timetabling project: To run the Spring Boot school timetabling project after it builds, enter the following command: To view the project, enter the following URL in a web browser: 17.2. Model the domain objects The goal of the Red Hat Build of OptaPlanner timetable project is to assign each lesson to a time slot and a room. To do this, add three classes, Timeslot , Lesson , and Room , as shown in the following diagram: Timeslot The Timeslot class represents a time interval when lessons are taught, for example, Monday 10:30 - 11:30 or Tuesday 13:30 - 14:30 . 
In this example, all time slots have the same duration and there are no time slots during lunch or other breaks. A time slot has no date because a high school schedule just repeats every week. There is no need for continuous planning . A timeslot is called a problem fact because no Timeslot instances change during solving. Such classes do not require any OptaPlanner-specific annotations. Room The Room class represents a location where lessons are taught, for example, Room A or Room B . In this example, all rooms are without capacity limits and they can accommodate all lessons. Room instances do not change during solving so Room is also a problem fact . Lesson During a lesson, represented by the Lesson class, a teacher teaches a subject to a group of students, for example, Math by A.Turing for 9th grade or Chemistry by M.Curie for 10th grade . If a subject is taught multiple times each week by the same teacher to the same student group, there are multiple Lesson instances that are only distinguishable by id . For example, the 9th grade has six math lessons a week. During solving, OptaPlanner changes the timeslot and room fields of the Lesson class to assign each lesson to a time slot and a room. Because OptaPlanner changes these fields, Lesson is a planning entity : Most of the fields in the diagram contain input data, except for the orange fields. A lesson's timeslot and room fields are unassigned ( null ) in the input data and assigned (not null ) in the output data. OptaPlanner changes these fields during solving. Such fields are called planning variables. In order for OptaPlanner to recognize them, both the timeslot and room fields require an @PlanningVariable annotation. Their containing class, Lesson , requires an @PlanningEntity annotation. Procedure Create the src/main/java/com/example/domain/Timeslot.java class: package com.example.domain; import java.time.DayOfWeek; import java.time.LocalTime; public class Timeslot { private DayOfWeek dayOfWeek; private LocalTime startTime; private LocalTime endTime; private Timeslot() { } public Timeslot(DayOfWeek dayOfWeek, LocalTime startTime, LocalTime endTime) { this.dayOfWeek = dayOfWeek; this.startTime = startTime; this.endTime = endTime; } @Override public String toString() { return dayOfWeek + " " + startTime.toString(); } // ******************************** // Getters and setters // ******************************** public DayOfWeek getDayOfWeek() { return dayOfWeek; } public LocalTime getStartTime() { return startTime; } public LocalTime getEndTime() { return endTime; } } Notice the toString() method keeps the output short so it is easier to read OptaPlanner's DEBUG or TRACE log, as shown later. 
Create the src/main/java/com/example/domain/Room.java class: package com.example.domain; public class Room { private String name; private Room() { } public Room(String name) { this.name = name; } @Override public String toString() { return name; } // ******************************** // Getters and setters // ******************************** public String getName() { return name; } } Create the src/main/java/com/example/domain/Lesson.java class: package com.example.domain; import org.optaplanner.core.api.domain.entity.PlanningEntity; import org.optaplanner.core.api.domain.variable.PlanningVariable; @PlanningEntity public class Lesson { private Long id; private String subject; private String teacher; private String studentGroup; @PlanningVariable(valueRangeProviderRefs = "timeslotRange") private Timeslot timeslot; @PlanningVariable(valueRangeProviderRefs = "roomRange") private Room room; private Lesson() { } public Lesson(Long id, String subject, String teacher, String studentGroup) { this.id = id; this.subject = subject; this.teacher = teacher; this.studentGroup = studentGroup; } @Override public String toString() { return subject + "(" + id + ")"; } // ******************************** // Getters and setters // ******************************** public Long getId() { return id; } public String getSubject() { return subject; } public String getTeacher() { return teacher; } public String getStudentGroup() { return studentGroup; } public Timeslot getTimeslot() { return timeslot; } public void setTimeslot(Timeslot timeslot) { this.timeslot = timeslot; } public Room getRoom() { return room; } public void setRoom(Room room) { this.room = room; } } The Lesson class has an @PlanningEntity annotation, so OptaPlanner knows that this class changes during solving because it contains one or more planning variables. The timeslot field has an @PlanningVariable annotation, so OptaPlanner knows that it can change its value. In order to find potential Timeslot instances to assign to this field, OptaPlanner uses the valueRangeProviderRefs property to connect to a value range provider that provides a List<Timeslot> to pick from. See Section 17.4, "Gather the domain objects in a planning solution" for information about value range providers. The room field also has an @PlanningVariable annotation for the same reasons. 17.3. Define the constraints and calculate the score When solving a problem, a score represents the quality of a specific solution. The higher the score the better. Red Hat Build of OptaPlanner looks for the best solution, which is the solution with the highest score found in the available time. It might be the optimal solution. Because the timetable example use case has hard and soft constraints, use the HardSoftScore class to represent the score: Hard constraints must not be broken. For example: A room can have at most one lesson at the same time. Soft constraints should not be broken. For example: A teacher prefers to teach in a single room. Hard constraints are weighted against other hard constraints. Soft constraints are weighted against other soft constraints. Hard constraints always outweigh soft constraints, regardless of their respective weights. 
To calculate the score, you could implement an EasyScoreCalculator class: public class TimeTableEasyScoreCalculator implements EasyScoreCalculator<TimeTable> { @Override public HardSoftScore calculateScore(TimeTable timeTable) { List<Lesson> lessonList = timeTable.getLessonList(); int hardScore = 0; for (Lesson a : lessonList) { for (Lesson b : lessonList) { if (a.getTimeslot() != null && a.getTimeslot().equals(b.getTimeslot()) && a.getId() < b.getId()) { // A room can accommodate at most one lesson at the same time. if (a.getRoom() != null && a.getRoom().equals(b.getRoom())) { hardScore--; } // A teacher can teach at most one lesson at the same time. if (a.getTeacher().equals(b.getTeacher())) { hardScore--; } // A student can attend at most one lesson at the same time. if (a.getStudentGroup().equals(b.getStudentGroup())) { hardScore--; } } } } int softScore = 0; // Soft constraints are only implemented in the "complete" implementation return HardSoftScore.of(hardScore, softScore); } } Unfortunately, this solution does not scale well because it is non-incremental: every time a lesson is assigned to a different time slot or room, all lessons are re-evaluated to calculate the new score. A better solution is to create a src/main/java/com/example/solver/TimeTableConstraintProvider.java class to perform incremental score calculation. This class uses OptaPlanner's ConstraintStream API which is inspired by Java 8 Streams and SQL. The ConstraintProvider scales an order of magnitude better than the EasyScoreCalculator : O (n) instead of O (n2). Procedure Create the following src/main/java/com/example/solver/TimeTableConstraintProvider.java class: package com.example.solver; import com.example.domain.Lesson; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; import org.optaplanner.core.api.score.stream.Constraint; import org.optaplanner.core.api.score.stream.ConstraintFactory; import org.optaplanner.core.api.score.stream.ConstraintProvider; import org.optaplanner.core.api.score.stream.Joiners; public class TimeTableConstraintProvider implements ConstraintProvider { @Override public Constraint[] defineConstraints(ConstraintFactory constraintFactory) { return new Constraint[] { // Hard constraints roomConflict(constraintFactory), teacherConflict(constraintFactory), studentGroupConflict(constraintFactory), // Soft constraints are only implemented in the "complete" implementation }; } private Constraint roomConflict(ConstraintFactory constraintFactory) { // A room can accommodate at most one lesson at the same time. // Select a lesson ... return constraintFactory.forEach(Lesson.class) // ... and pair it with another lesson ... .join(Lesson.class, // ... in the same timeslot ... Joiners.equal(Lesson::getTimeslot), // ... in the same room ... Joiners.equal(Lesson::getRoom), // ... and the pair is unique (different id, no reverse pairs) Joiners.lessThan(Lesson::getId)) // then penalize each pair with a hard weight. .penalize(HardSoftScore.ONE_HARD) .asConstraint("Room conflict"); } private Constraint teacherConflict(ConstraintFactory constraintFactory) { // A teacher can teach at most one lesson at the same time. return constraintFactory.forEach(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getTeacher), Joiners.lessThan(Lesson::getId)) .penalize(HardSoftScore.ONE_HARD) .asConstraint("Teacher conflict"); } private Constraint studentGroupConflict(ConstraintFactory constraintFactory) { // A student can attend at most one lesson at the same time. 
return constraintFactory.forEach(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getStudentGroup), Joiners.lessThan(Lesson::getId)) .penalize(HardSoftScore.ONE_HARD) .asConstraint("Student group conflict"); } } 17.4. Gather the domain objects in a planning solution A TimeTable instance wraps all Timeslot , Room , and Lesson instances of a single dataset. Furthermore, because it contains all lessons, each with a specific planning variable state, it is a planning solution and it has a score: If lessons are still unassigned, then it is an uninitialized solution, for example, a solution with the score -4init/0hard/0soft . If it breaks hard constraints, then it is an infeasible solution, for example, a solution with the score -2hard/-3soft . If it adheres to all hard constraints, then it is a feasible solution, for example, a solution with the score 0hard/-7soft . The TimeTable class has an @PlanningSolution annotation, so Red Hat Build of OptaPlanner knows that this class contains all of the input and output data. Specifically, this class is the input of the problem: A timeslotList field with all time slots This is a list of problem facts, because they do not change during solving. A roomList field with all rooms This is a list of problem facts, because they do not change during solving. A lessonList field with all lessons This is a list of planning entities because they change during solving. Of each Lesson : The values of the timeslot and room fields are typically still null , so unassigned. They are planning variables. The other fields, such as subject , teacher and studentGroup , are filled in. These fields are problem properties. However, this class is also the output of the solution: A lessonList field for which each Lesson instance has non-null timeslot and room fields after solving A score field that represents the quality of the output solution, for example, 0hard/-5soft Procedure Create the src/main/java/com/example/domain/TimeTable.java class: package com.example.domain; import java.util.List; import org.optaplanner.core.api.domain.solution.PlanningEntityCollectionProperty; import org.optaplanner.core.api.domain.solution.PlanningScore; import org.optaplanner.core.api.domain.solution.PlanningSolution; import org.optaplanner.core.api.domain.solution.ProblemFactCollectionProperty; import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; @PlanningSolution public class TimeTable { @ValueRangeProvider(id = "timeslotRange") @ProblemFactCollectionProperty private List<Timeslot> timeslotList; @ValueRangeProvider(id = "roomRange") @ProblemFactCollectionProperty private List<Room> roomList; @PlanningEntityCollectionProperty private List<Lesson> lessonList; @PlanningScore private HardSoftScore score; private TimeTable() { } public TimeTable(List<Timeslot> timeslotList, List<Room> roomList, List<Lesson> lessonList) { this.timeslotList = timeslotList; this.roomList = roomList; this.lessonList = lessonList; } // ******************************** // Getters and setters // ******************************** public List<Timeslot> getTimeslotList() { return timeslotList; } public List<Room> getRoomList() { return roomList; } public List<Lesson> getLessonList() { return lessonList; } public HardSoftScore getScore() { return score; } } The value range providers The timeslotList field is a value range provider. 
It holds the Timeslot instances which OptaPlanner can pick from to assign to the timeslot field of Lesson instances. The timeslotList field has an @ValueRangeProvider annotation to connect those two, by matching the id with the valueRangeProviderRefs of the @PlanningVariable in the Lesson . Following the same logic, the roomList field also has an @ValueRangeProvider annotation. The problem fact and planning entity properties Furthermore, OptaPlanner needs to know which Lesson instances it can change as well as how to retrieve the Timeslot and Room instances used for score calculation by your TimeTableConstraintProvider . The timeslotList and roomList fields have an @ProblemFactCollectionProperty annotation, so your TimeTableConstraintProvider can select from those instances. The lessonList has an @PlanningEntityCollectionProperty annotation, so OptaPlanner can change them during solving and your TimeTableConstraintProvider can select from those too. 17.5. Create the Timetable service Now you are ready to put everything together and create a REST service. But solving planning problems on REST threads causes HTTP timeout issues. Therefore, the Spring Boot starter injects a SolverManager , which runs solvers in a separate thread pool and can solve multiple datasets in parallel. Procedure Create the src/main/java/com/example/solver/TimeTableController.java class: package com.example.solver; import java.util.UUID; import java.util.concurrent.ExecutionException; import com.example.domain.TimeTable; import org.optaplanner.core.api.solver.SolverJob; import org.optaplanner.core.api.solver.SolverManager; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.PostMapping; import org.springframework.web.bind.annotation.RequestBody; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RestController; @RestController @RequestMapping("/timeTable") public class TimeTableController { @Autowired private SolverManager<TimeTable, UUID> solverManager; @PostMapping("/solve") public TimeTable solve(@RequestBody TimeTable problem) { UUID problemId = UUID.randomUUID(); // Submit the problem to start solving SolverJob<TimeTable, UUID> solverJob = solverManager.solve(problemId, problem); TimeTable solution; try { // Wait until the solving ends solution = solverJob.getFinalBestSolution(); } catch (InterruptedException | ExecutionException e) { throw new IllegalStateException("Solving failed.", e); } return solution; } } In this example, the initial implementation waits for the solver to finish, which can still cause an HTTP timeout. The complete implementation avoids HTTP timeouts much more elegantly. 17.6. Set the solver termination time If your planning application does not have a termination setting or a termination event, it theoretically runs forever and in reality eventually causes an HTTP timeout error. To prevent this from occurring, use the optaplanner.solver.termination.spent-limit parameter to specify the length of time after which the application terminates. In most applications, set the time to at least five minutes ( 5m ). However, in the Timetable example, limit the solving time to five seconds, which is short enough to avoid the HTTP timeout. Procedure Create the src/main/resources/application.properties file with the following content: quarkus.optaplanner.solver.termination.spent-limit=5s 17.7. 
Make the application executable After you complete the Red Hat Build of OptaPlanner Spring Boot timetable project, package everything into a single executable JAR file driven by a standard Java main() method. Prerequisites You have a completed OptaPlanner Spring Boot timetable project. Procedure Create the TimeTableSpringBootApp.java class with the following content: package com.example; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class TimeTableSpringBootApp { public static void main(String[] args) { SpringApplication.run(TimeTableSpringBootApp.class, args); } } Replace the src/main/java/com/example/DemoApplication.java class created by Spring Initializr with the TimeTableSpringBootApp.java class. Run the TimeTableSpringBootApp.java class as the main class of a regular Java application. 17.7.1. Try the timetable application After you start the Red Hat Build of OptaPlanner Spring Boot timetable application, you can test the REST service with any REST client that you want. This example uses the Linux curl command to send a POST request. Prerequisites The OptaPlanner Spring Boot timetable application is running. Procedure Enter the following command: After about five seconds, the termination spent time defined in application.properties , the service returns an output similar to the following example: Notice that the application assigned all four lessons to one of the two time slots and one of the two rooms. Also notice that it conforms to all hard constraints. For example, M. Curie's two lessons are in different time slots. On the server side, the info log shows what OptaPlanner did in those five seconds: 17.7.2. Test the application A good application includes test coverage. This example tests the Timetable Red Hat Build of OptaPlanner Spring Boot application. It uses a JUnit test to generate a test dataset and send it to the TimeTableController to solve. 
Procedure Create the src/test/java/com/example/solver/TimeTableControllerTest.java class with the following content: package com.example.solver; import java.time.DayOfWeek; import java.time.LocalTime; import java.util.ArrayList; import java.util.List; import com.example.domain.Lesson; import com.example.domain.Room; import com.example.domain.TimeTable; import com.example.domain.Timeslot; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Timeout; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.context.SpringBootTest; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertTrue; @SpringBootTest(properties = { "optaplanner.solver.termination.spent-limit=1h", // Effectively disable this termination in favor of the best-score-limit "optaplanner.solver.termination.best-score-limit=0hard/*soft"}) public class TimeTableControllerTest { @Autowired private TimeTableController timeTableController; @Test @Timeout(600_000) public void solve() { TimeTable problem = generateProblem(); TimeTable solution = timeTableController.solve(problem); assertFalse(solution.getLessonList().isEmpty()); for (Lesson lesson : solution.getLessonList()) { assertNotNull(lesson.getTimeslot()); assertNotNull(lesson.getRoom()); } assertTrue(solution.getScore().isFeasible()); } private TimeTable generateProblem() { List<Timeslot> timeslotList = new ArrayList<>(); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); List<Room> roomList = new ArrayList<>(); roomList.add(new Room("Room A")); roomList.add(new Room("Room B")); roomList.add(new Room("Room C")); List<Lesson> lessonList = new ArrayList<>(); lessonList.add(new Lesson(101L, "Math", "B. May", "9th grade")); lessonList.add(new Lesson(102L, "Physics", "M. Curie", "9th grade")); lessonList.add(new Lesson(103L, "Geography", "M. Polo", "9th grade")); lessonList.add(new Lesson(104L, "English", "I. Jones", "9th grade")); lessonList.add(new Lesson(105L, "Spanish", "P. Cruz", "9th grade")); lessonList.add(new Lesson(201L, "Math", "B. May", "10th grade")); lessonList.add(new Lesson(202L, "Chemistry", "M. Curie", "10th grade")); lessonList.add(new Lesson(203L, "History", "I. Jones", "10th grade")); lessonList.add(new Lesson(204L, "English", "P. Cruz", "10th grade")); lessonList.add(new Lesson(205L, "French", "M. Curie", "10th grade")); return new TimeTable(timeslotList, roomList, lessonList); } } This test verifies that after solving, all lessons are assigned to a time slot and a room. It also verifies that it found a feasible solution (no hard constraints broken). Normally, the solver finds a feasible solution in less than 200 milliseconds. Notice how the @SpringBootTest annotation's properties overwrites the solver termination to terminate as soon as a feasible solution ( 0hard/*soft ) is found. This avoids hard coding a solver time, because the unit test might run on arbitrary hardware. This approach ensures that the test runs long enough to find a feasible solution, even on slow systems. 
However, it does not run a millisecond longer than it strictly must, even on fast systems. 17.7.3. Logging After you complete the Red Hat Build of OptaPlanner Spring Boot timetable application, you can use logging information to help you fine-tune the constraints in the ConstraintProvider . Review the score calculation speed in the info log file to assess the impact of changes to your constraints. Run the application in debug mode to show every step that your application takes or use trace logging to log every step and every move. Procedure Run the timetable application for a fixed amount of time, for example, five minutes. Review the score calculation speed in the log file as shown in the following example: Change a constraint, run the planning application again for the same amount of time, and review the score calculation speed recorded in the log file. Run the application in debug mode to log every step: To run debug mode from the command line, use the -D system property. To change logging in the application.properties file, add the following line to that file: logging.level.org.optaplanner=debug The following example shows output in the log file in debug mode: Use trace logging to show every step and every move for each step. 17.8. Add Database and UI integration After you create the Red Hat Build of OptaPlanner application example with Spring Boot, add database and UI integration. Prerequisite You have created the OptaPlanner Spring Boot timetable example. Procedure Create Java Persistence API (JPA) repositories for Timeslot , Room , and Lesson . For information about creating JPA repositories, see Accessing Data with JPA on the Spring website. Expose the JPA repositories through REST. For information about exposing the repositories, see Accessing JPA Data with REST on the Spring website. Build a TimeTableRepository facade to read and write a TimeTable in a single transaction. 
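The guide does not show the TimeTableRepository facade itself. The following is a minimal sketch, assuming the Timeslot, Room, and Lesson repositories created in the previous step are Spring Data JpaRepository interfaces in the com.example.persistence package; the interface names and the singleton ID value are assumptions, not part of the original guide:
package com.example.persistence;

import com.example.domain.Lesson;
import com.example.domain.TimeTable;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class TimeTableRepository {

    // A single timetable is stored, so a fixed singleton ID is sufficient.
    public static final Long SINGLETON_TIME_TABLE_ID = 1L;

    @Autowired
    private TimeslotRepository timeslotRepository;
    @Autowired
    private RoomRepository roomRepository;
    @Autowired
    private LessonRepository lessonRepository;

    @Transactional
    public TimeTable findById(Long id) {
        // Load the problem facts and planning entities in one transaction.
        return new TimeTable(
                timeslotRepository.findAll(),
                roomRepository.findAll(),
                lessonRepository.findAll());
    }

    @Transactional
    public void save(TimeTable timeTable) {
        // Only the planning variables (timeslot and room) change during solving.
        for (Lesson lesson : timeTable.getLessonList()) {
            lessonRepository.save(lesson);
        }
    }
}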
Adjust the TimeTableController as shown in the following example: package com.example.solver; import com.example.domain.TimeTable; import com.example.persistence.TimeTableRepository; import org.optaplanner.core.api.score.ScoreManager; import org.optaplanner.core.api.solver.SolverManager; import org.optaplanner.core.api.solver.SolverStatus; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.PostMapping; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RestController; @RestController @RequestMapping("/timeTable") public class TimeTableController { @Autowired private TimeTableRepository timeTableRepository; @Autowired private SolverManager<TimeTable, Long> solverManager; @Autowired private ScoreManager<TimeTable> scoreManager; // To try, GET http://localhost:8080/timeTable @GetMapping() public TimeTable getTimeTable() { // Get the solver status before loading the solution // to avoid the race condition that the solver terminates between them SolverStatus solverStatus = getSolverStatus(); TimeTable solution = timeTableRepository.findById(TimeTableRepository.SINGLETON_TIME_TABLE_ID); scoreManager.updateScore(solution); // Sets the score solution.setSolverStatus(solverStatus); return solution; } @PostMapping("/solve") public void solve() { solverManager.solveAndListen(TimeTableRepository.SINGLETON_TIME_TABLE_ID, timeTableRepository::findById, timeTableRepository::save); } public SolverStatus getSolverStatus() { return solverManager.getSolverStatus(TimeTableRepository.SINGLETON_TIME_TABLE_ID); } @PostMapping("/stopSolving") public void stopSolving() { solverManager.terminateEarly(TimeTableRepository.SINGLETON_TIME_TABLE_ID); } } For simplicity, this code handles only one TimeTable instance, but it is straightforward to enable multi-tenancy and handle multiple TimeTable instances of different high schools in parallel. The getTimeTable() method returns the latest timetable from the database. It uses the ScoreManager (which is automatically injected) to calculate the score of that timetable so the UI can show the score. The solve() method starts a job to solve the current timetable and store the time slot and room assignments in the database. It uses the SolverManager.solveAndListen() method to listen to intermediate best solutions and update the database accordingly. This enables the UI to show progress while the backend is still solving. 
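Note that getTimeTable() calls solution.setSolverStatus(), which the TimeTable class shown earlier does not yet declare. A minimal sketch of the assumed addition to TimeTable is the following field and accessor pair, placed inside the class body together with an import of org.optaplanner.core.api.solver.SolverStatus:
// Assumed addition inside the TimeTable class; the solver status is
// transient state for the UI, not persisted planning data.
private SolverStatus solverStatus;

public SolverStatus getSolverStatus() {
    return solverStatus;
}

public void setSolverStatus(SolverStatus solverStatus) {
    this.solverStatus = solverStatus;
}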
Now that the solve() method returns immediately, adjust the TimeTableControllerTest as shown in the following example: package com.example.solver; import com.example.domain.Lesson; import com.example.domain.TimeTable; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Timeout; import org.optaplanner.core.api.solver.SolverStatus; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.context.SpringBootTest; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertTrue; @SpringBootTest(properties = { "optaplanner.solver.termination.spent-limit=1h", // Effectively disable this termination in favor of the best-score-limit "optaplanner.solver.termination.best-score-limit=0hard/*soft"}) public class TimeTableControllerTest { @Autowired private TimeTableController timeTableController; @Test @Timeout(600_000) public void solveDemoDataUntilFeasible() throws InterruptedException { timeTableController.solve(); TimeTable timeTable = timeTableController.getTimeTable(); while (timeTable.getSolverStatus() != SolverStatus.NOT_SOLVING) { // Quick polling (not a Test Thread Sleep anti-pattern) // Test is still fast on fast systems and doesn't randomly fail on slow systems. Thread.sleep(20L); timeTable = timeTableController.getTimeTable(); } assertFalse(timeTable.getLessonList().isEmpty()); for (Lesson lesson : timeTable.getLessonList()) { assertNotNull(lesson.getTimeslot()); assertNotNull(lesson.getRoom()); } assertTrue(timeTable.getScore().isFeasible()); } } Poll for the latest solution until the solver finishes solving. To visualize the timetable, build an attractive web UI on top of these REST methods. 17.9. Using Micrometer and Prometheus to monitor your school timetable OptaPlanner Spring Boot application OptaPlanner exposes metrics through Micrometer , a metrics instrumentation library for Java applications. You can use Micrometer with Prometheus to monitor the OptaPlanner solver in the school timetable application. Prerequisites You have created the Spring Boot OptaPlanner school timetable application. Prometheus is installed. For information about installing Prometheus, see the Prometheus website. Procedure Navigate to the technology/java-spring-boot directory. Add the Micrometer Prometheus dependencies to the school timetable pom.xml file: Add the following property to the application.properties file: Start the school timetable application: Open http://localhost:8080/actuator/prometheus in a web browser.
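If you run your own Prometheus server rather than a managed service, you still need to point it at the exposed endpoint. The following prometheus.yml fragment is an illustrative sketch; the job name and scrape interval are arbitrary choices, not values from this guide:
scrape_configs:
  - job_name: "school-timetable"
    metrics_path: "/actuator/prometheus"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8080"]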
|
[
"mvn clean install -DskipTests",
"mvn spring-boot:run -DskipTests",
"http://localhost:8080/",
"package com.example.domain; import java.time.DayOfWeek; import java.time.LocalTime; public class Timeslot { private DayOfWeek dayOfWeek; private LocalTime startTime; private LocalTime endTime; private Timeslot() { } public Timeslot(DayOfWeek dayOfWeek, LocalTime startTime, LocalTime endTime) { this.dayOfWeek = dayOfWeek; this.startTime = startTime; this.endTime = endTime; } @Override public String toString() { return dayOfWeek + \" \" + startTime.toString(); } // ******************************** // Getters and setters // ******************************** public DayOfWeek getDayOfWeek() { return dayOfWeek; } public LocalTime getStartTime() { return startTime; } public LocalTime getEndTime() { return endTime; } }",
"package com.example.domain; public class Room { private String name; private Room() { } public Room(String name) { this.name = name; } @Override public String toString() { return name; } // ******************************** // Getters and setters // ******************************** public String getName() { return name; } }",
"package com.example.domain; import org.optaplanner.core.api.domain.entity.PlanningEntity; import org.optaplanner.core.api.domain.variable.PlanningVariable; @PlanningEntity public class Lesson { private Long id; private String subject; private String teacher; private String studentGroup; @PlanningVariable(valueRangeProviderRefs = \"timeslotRange\") private Timeslot timeslot; @PlanningVariable(valueRangeProviderRefs = \"roomRange\") private Room room; private Lesson() { } public Lesson(Long id, String subject, String teacher, String studentGroup) { this.id = id; this.subject = subject; this.teacher = teacher; this.studentGroup = studentGroup; } @Override public String toString() { return subject + \"(\" + id + \")\"; } // ******************************** // Getters and setters // ******************************** public Long getId() { return id; } public String getSubject() { return subject; } public String getTeacher() { return teacher; } public String getStudentGroup() { return studentGroup; } public Timeslot getTimeslot() { return timeslot; } public void setTimeslot(Timeslot timeslot) { this.timeslot = timeslot; } public Room getRoom() { return room; } public void setRoom(Room room) { this.room = room; } }",
"public class TimeTableEasyScoreCalculator implements EasyScoreCalculator<TimeTable> { @Override public HardSoftScore calculateScore(TimeTable timeTable) { List<Lesson> lessonList = timeTable.getLessonList(); int hardScore = 0; for (Lesson a : lessonList) { for (Lesson b : lessonList) { if (a.getTimeslot() != null && a.getTimeslot().equals(b.getTimeslot()) && a.getId() < b.getId()) { // A room can accommodate at most one lesson at the same time. if (a.getRoom() != null && a.getRoom().equals(b.getRoom())) { hardScore--; } // A teacher can teach at most one lesson at the same time. if (a.getTeacher().equals(b.getTeacher())) { hardScore--; } // A student can attend at most one lesson at the same time. if (a.getStudentGroup().equals(b.getStudentGroup())) { hardScore--; } } } } int softScore = 0; // Soft constraints are only implemented in the \"complete\" implementation return HardSoftScore.of(hardScore, softScore); } }",
"package com.example.solver; import com.example.domain.Lesson; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; import org.optaplanner.core.api.score.stream.Constraint; import org.optaplanner.core.api.score.stream.ConstraintFactory; import org.optaplanner.core.api.score.stream.ConstraintProvider; import org.optaplanner.core.api.score.stream.Joiners; public class TimeTableConstraintProvider implements ConstraintProvider { @Override public Constraint[] defineConstraints(ConstraintFactory constraintFactory) { return new Constraint[] { // Hard constraints roomConflict(constraintFactory), teacherConflict(constraintFactory), studentGroupConflict(constraintFactory), // Soft constraints are only implemented in the \"complete\" implementation }; } private Constraint roomConflict(ConstraintFactory constraintFactory) { // A room can accommodate at most one lesson at the same time. // Select a lesson return constraintFactory.forEach(Lesson.class) // ... and pair it with another lesson .join(Lesson.class, // ... in the same timeslot Joiners.equal(Lesson::getTimeslot), // ... in the same room Joiners.equal(Lesson::getRoom), // ... and the pair is unique (different id, no reverse pairs) Joiners.lessThan(Lesson::getId)) // then penalize each pair with a hard weight. .penalize(HardSoftScore.ONE_HARD) .asConstraint(\"Room conflict\"); } private Constraint teacherConflict(ConstraintFactory constraintFactory) { // A teacher can teach at most one lesson at the same time. return constraintFactory.forEach(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getTeacher), Joiners.lessThan(Lesson::getId)) .penalize(HardSoftScore.ONE_HARD) .asConstraint(\"Teacher conflict\"); } private Constraint studentGroupConflict(ConstraintFactory constraintFactory) { // A student can attend at most one lesson at the same time. return constraintFactory.forEach(Lesson.class) .join(Lesson.class, Joiners.equal(Lesson::getTimeslot), Joiners.equal(Lesson::getStudentGroup), Joiners.lessThan(Lesson::getId)) .penalize(HardSoftScore.ONE_HARD) .asConstraint(\"Student group conflict\"); } }",
"package com.example.domain; import java.util.List; import org.optaplanner.core.api.domain.solution.PlanningEntityCollectionProperty; import org.optaplanner.core.api.domain.solution.PlanningScore; import org.optaplanner.core.api.domain.solution.PlanningSolution; import org.optaplanner.core.api.domain.solution.ProblemFactCollectionProperty; import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider; import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore; @PlanningSolution public class TimeTable { @ValueRangeProvider(id = \"timeslotRange\") @ProblemFactCollectionProperty private List<Timeslot> timeslotList; @ValueRangeProvider(id = \"roomRange\") @ProblemFactCollectionProperty private List<Room> roomList; @PlanningEntityCollectionProperty private List<Lesson> lessonList; @PlanningScore private HardSoftScore score; private TimeTable() { } public TimeTable(List<Timeslot> timeslotList, List<Room> roomList, List<Lesson> lessonList) { this.timeslotList = timeslotList; this.roomList = roomList; this.lessonList = lessonList; } // ******************************** // Getters and setters // ******************************** public List<Timeslot> getTimeslotList() { return timeslotList; } public List<Room> getRoomList() { return roomList; } public List<Lesson> getLessonList() { return lessonList; } public HardSoftScore getScore() { return score; } }",
"package com.example.solver; import java.util.UUID; import java.util.concurrent.ExecutionException; import com.example.domain.TimeTable; import org.optaplanner.core.api.solver.SolverJob; import org.optaplanner.core.api.solver.SolverManager; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.PostMapping; import org.springframework.web.bind.annotation.RequestBody; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RestController; @RestController @RequestMapping(\"/timeTable\") public class TimeTableController { @Autowired private SolverManager<TimeTable, UUID> solverManager; @PostMapping(\"/solve\") public TimeTable solve(@RequestBody TimeTable problem) { UUID problemId = UUID.randomUUID(); // Submit the problem to start solving SolverJob<TimeTable, UUID> solverJob = solverManager.solve(problemId, problem); TimeTable solution; try { // Wait until the solving ends solution = solverJob.getFinalBestSolution(); } catch (InterruptedException | ExecutionException e) { throw new IllegalStateException(\"Solving failed.\", e); } return solution; } }",
"quarkus.optaplanner.solver.termination.spent-limit=5s",
"package com.example; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class TimeTableSpringBootApp { public static void main(String[] args) { SpringApplication.run(TimeTableSpringBootApp.class, args); } }",
"curl -i -X POST http://localhost:8080/timeTable/solve -H \"Content-Type:application/json\" -d '{\"timeslotList\":[{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"08:30:00\",\"endTime\":\"09:30:00\"},{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"09:30:00\",\"endTime\":\"10:30:00\"}],\"roomList\":[{\"name\":\"Room A\"},{\"name\":\"Room B\"}],\"lessonList\":[{\"id\":1,\"subject\":\"Math\",\"teacher\":\"A. Turing\",\"studentGroup\":\"9th grade\"},{\"id\":2,\"subject\":\"Chemistry\",\"teacher\":\"M. Curie\",\"studentGroup\":\"9th grade\"},{\"id\":3,\"subject\":\"French\",\"teacher\":\"M. Curie\",\"studentGroup\":\"10th grade\"},{\"id\":4,\"subject\":\"History\",\"teacher\":\"I. Jones\",\"studentGroup\":\"10th grade\"}]}'",
"HTTP/1.1 200 Content-Type: application/json {\"timeslotList\":...,\"roomList\":...,\"lessonList\":[{\"id\":1,\"subject\":\"Math\",\"teacher\":\"A. Turing\",\"studentGroup\":\"9th grade\",\"timeslot\":{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"08:30:00\",\"endTime\":\"09:30:00\"},\"room\":{\"name\":\"Room A\"}},{\"id\":2,\"subject\":\"Chemistry\",\"teacher\":\"M. Curie\",\"studentGroup\":\"9th grade\",\"timeslot\":{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"09:30:00\",\"endTime\":\"10:30:00\"},\"room\":{\"name\":\"Room A\"}},{\"id\":3,\"subject\":\"French\",\"teacher\":\"M. Curie\",\"studentGroup\":\"10th grade\",\"timeslot\":{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"08:30:00\",\"endTime\":\"09:30:00\"},\"room\":{\"name\":\"Room B\"}},{\"id\":4,\"subject\":\"History\",\"teacher\":\"I. Jones\",\"studentGroup\":\"10th grade\",\"timeslot\":{\"dayOfWeek\":\"MONDAY\",\"startTime\":\"09:30:00\",\"endTime\":\"10:30:00\"},\"room\":{\"name\":\"Room B\"}}],\"score\":\"0hard/0soft\"}",
"... Solving started: time spent (33), best score (-8init/0hard/0soft), environment mode (REPRODUCIBLE), random (JDK with seed 0). ... Construction Heuristic phase (0) ended: time spent (73), best score (0hard/0soft), score calculation speed (459/sec), step total (4). ... Local Search phase (1) ended: time spent (5000), best score (0hard/0soft), score calculation speed (28949/sec), step total (28398). ... Solving ended: time spent (5000), best score (0hard/0soft), score calculation speed (28524/sec), phase total (2), environment mode (REPRODUCIBLE).",
"package com.example.solver; import java.time.DayOfWeek; import java.time.LocalTime; import java.util.ArrayList; import java.util.List; import com.example.domain.Lesson; import com.example.domain.Room; import com.example.domain.TimeTable; import com.example.domain.Timeslot; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Timeout; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.context.SpringBootTest; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertTrue; @SpringBootTest(properties = { \"optaplanner.solver.termination.spent-limit=1h\", // Effectively disable this termination in favor of the best-score-limit \"optaplanner.solver.termination.best-score-limit=0hard/*soft\"}) public class TimeTableControllerTest { @Autowired private TimeTableController timeTableController; @Test @Timeout(600_000) public void solve() { TimeTable problem = generateProblem(); TimeTable solution = timeTableController.solve(problem); assertFalse(solution.getLessonList().isEmpty()); for (Lesson lesson : solution.getLessonList()) { assertNotNull(lesson.getTimeslot()); assertNotNull(lesson.getRoom()); } assertTrue(solution.getScore().isFeasible()); } private TimeTable generateProblem() { List<Timeslot> timeslotList = new ArrayList<>(); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(8, 30), LocalTime.of(9, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(9, 30), LocalTime.of(10, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(10, 30), LocalTime.of(11, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(13, 30), LocalTime.of(14, 30))); timeslotList.add(new Timeslot(DayOfWeek.MONDAY, LocalTime.of(14, 30), LocalTime.of(15, 30))); List<Room> roomList = new ArrayList<>(); roomList.add(new Room(\"Room A\")); roomList.add(new Room(\"Room B\")); roomList.add(new Room(\"Room C\")); List<Lesson> lessonList = new ArrayList<>(); lessonList.add(new Lesson(101L, \"Math\", \"B. May\", \"9th grade\")); lessonList.add(new Lesson(102L, \"Physics\", \"M. Curie\", \"9th grade\")); lessonList.add(new Lesson(103L, \"Geography\", \"M. Polo\", \"9th grade\")); lessonList.add(new Lesson(104L, \"English\", \"I. Jones\", \"9th grade\")); lessonList.add(new Lesson(105L, \"Spanish\", \"P. Cruz\", \"9th grade\")); lessonList.add(new Lesson(201L, \"Math\", \"B. May\", \"10th grade\")); lessonList.add(new Lesson(202L, \"Chemistry\", \"M. Curie\", \"10th grade\")); lessonList.add(new Lesson(203L, \"History\", \"I. Jones\", \"10th grade\")); lessonList.add(new Lesson(204L, \"English\", \"P. Cruz\", \"10th grade\")); lessonList.add(new Lesson(205L, \"French\", \"M. Curie\", \"10th grade\")); return new TimeTable(timeslotList, roomList, lessonList); } }",
"... Solving ended: ..., score calculation speed (29455/sec),",
"logging.level.org.optaplanner=debug",
"... Solving started: time spent (67), best score (-20init/0hard/0soft), environment mode (REPRODUCIBLE), random (JDK with seed 0). ... CH step (0), time spent (128), score (-18init/0hard/0soft), selected move count (15), picked move ([Math(101) {null -> Room A}, Math(101) {null -> MONDAY 08:30}]). ... CH step (1), time spent (145), score (-16init/0hard/0soft), selected move count (15), picked move ([Physics(102) {null -> Room A}, Physics(102) {null -> MONDAY 09:30}]).",
"package com.example.solver; import com.example.domain.TimeTable; import com.example.persistence.TimeTableRepository; import org.optaplanner.core.api.score.ScoreManager; import org.optaplanner.core.api.solver.SolverManager; import org.optaplanner.core.api.solver.SolverStatus; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.PostMapping; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RestController; @RestController @RequestMapping(\"/timeTable\") public class TimeTableController { @Autowired private TimeTableRepository timeTableRepository; @Autowired private SolverManager<TimeTable, Long> solverManager; @Autowired private ScoreManager<TimeTable> scoreManager; // To try, GET http://localhost:8080/timeTable @GetMapping() public TimeTable getTimeTable() { // Get the solver status before loading the solution // to avoid the race condition that the solver terminates between them SolverStatus solverStatus = getSolverStatus(); TimeTable solution = timeTableRepository.findById(TimeTableRepository.SINGLETON_TIME_TABLE_ID); scoreManager.updateScore(solution); // Sets the score solution.setSolverStatus(solverStatus); return solution; } @PostMapping(\"/solve\") public void solve() { solverManager.solveAndListen(TimeTableRepository.SINGLETON_TIME_TABLE_ID, timeTableRepository::findById, timeTableRepository::save); } public SolverStatus getSolverStatus() { return solverManager.getSolverStatus(TimeTableRepository.SINGLETON_TIME_TABLE_ID); } @PostMapping(\"/stopSolving\") public void stopSolving() { solverManager.terminateEarly(TimeTableRepository.SINGLETON_TIME_TABLE_ID); } }",
"package com.example.solver; import com.example.domain.Lesson; import com.example.domain.TimeTable; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.Timeout; import org.optaplanner.core.api.solver.SolverStatus; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.context.SpringBootTest; import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertTrue; @SpringBootTest(properties = { \"optaplanner.solver.termination.spent-limit=1h\", // Effectively disable this termination in favor of the best-score-limit \"optaplanner.solver.termination.best-score-limit=0hard/*soft\"}) public class TimeTableControllerTest { @Autowired private TimeTableController timeTableController; @Test @Timeout(600_000) public void solveDemoDataUntilFeasible() throws InterruptedException { timeTableController.solve(); TimeTable timeTable = timeTableController.getTimeTable(); while (timeTable.getSolverStatus() != SolverStatus.NOT_SOLVING) { // Quick polling (not a Test Thread Sleep anti-pattern) // Test is still fast on fast systems and doesn't randomly fail on slow systems. Thread.sleep(20L); timeTable = timeTableController.getTimeTable(); } assertFalse(timeTable.getLessonList().isEmpty()); for (Lesson lesson : timeTable.getLessonList()) { assertNotNull(lesson.getTimeslot()); assertNotNull(lesson.getRoom()); } assertTrue(timeTable.getScore().isFeasible()); } }",
"<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <dependency> <groupId>io.micrometer</groupId> <artifactId>micrometer-registry-prometheus</artifactId> </dependency>",
"management.endpoints.web.exposure.include=metrics,prometheus",
"mvn spring-boot:run"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_optaplanner/8.38/html/developing_solvers_with_red_hat_build_of_optaplanner/assembly-business-optimizer-springboot_optaplanner-quickstarts
|
4.6.3. EDIT MONITORING SCRIPTS Subsection
|
4.6.3. EDIT MONITORING SCRIPTS Subsection Click on the MONITORING SCRIPTS link at the top of the page. The EDIT MONITORING SCRIPTS subsection allows the administrator to specify a send/expect string sequence to verify that the service for the virtual server is functional on each real server. It is also the place where the administrator can specify customized scripts to check services requiring dynamically changing data. Figure 4.9. The EDIT MONITORING SCRIPTS Subsection Sending Program For more advanced service verification, you can use this field to specify the path to a service-checking script. This functionality is especially helpful for services that require dynamically changing data, such as HTTPS or SSL. To use this functionality, you must write a script that returns a textual response, set it to be executable, and type the path to it in the Sending Program field. Note To ensure that each server in the real server pool is checked, use the special token %h after the path to the script in the Sending Program field. This token is replaced with each real server's IP address as the script is called by the nanny daemon. The following is a sample script to use as a guide when composing an external service-checking script: Note If an external program is entered in the Sending Program field, then the Send field is ignored. Send Enter a string for the nanny daemon to send to each real server in this field. By default, the Send field is completed for HTTP. You can alter this value depending on your needs. If you leave this field blank, the nanny daemon attempts to open the port and assumes the service is running if it succeeds. Only one send sequence is allowed in this field, and it can only contain printable ASCII characters as well as the following escape characters: \n for new line. \r for carriage return. \t for tab. \ to escape the character which follows it. Expect Enter the textual response the server should return if it is functioning properly. If you wrote your own sending program, enter the response you told it to send if it was successful. Note To determine what to send for a given service, you can open a telnet connection to the port on a real server and see what is returned. For instance, FTP reports 220 upon connecting, so you could enter quit in the Send field and 220 in the Expect field. Warning Remember to click the ACCEPT button after making any changes in this panel to make sure you do not lose any changes when selecting a new panel. Once you have configured virtual servers using the Piranha Configuration Tool , you must copy specific configuration files to the backup LVS router. See Section 4.7, "Synchronizing Configuration Files" for details.
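The DNS sample script in the listing below can be adapted to other protocols. As a purely illustrative sketch (the script path, URL, and expected string are assumptions, not part of this procedure), an HTTPS check entered in the Sending Program field as /usr/local/bin/check_https.sh %h could look like this:
#!/bin/sh
# $1 is the real server IP address passed in through the %h token.
TEST=`curl -k -s https://$1/ | grep -c "Welcome"`
if [ $TEST != "0" ]; then
    echo "OK"
else
    echo "FAIL"
fi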
|
[
"#!/bin/sh TEST=`dig -t soa example.com @USD1 | grep -c dns.example.com if [ USDTEST != \"1\" ]; then echo \"OK else echo \"FAIL\" fi"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s2-piranha-virtservs-ems-VSA
|
Chapter 14. Storage
|
Chapter 14. Storage DM rebase to version 4.2 Device Mapper (DM) has been upgraded to upstream version 4.2, which provides a number of bug fixes and enhancements over the previous version, including a significant DM crypt performance update and DM core update to support Multi-Queue Block I/O Queueing Mechanism (blk-mq). Multiqueue I/O scheduling with blk-mq Red Hat Enterprise Linux 7.2 includes a new multiple queue I/O scheduling mechanism for block devices known as blk-mq. It can improve performance by allowing certain device drivers to map I/O requests to multiple hardware or software queues. The improved performance comes from reducing lock contention present when multiple threads of execution perform I/O to a single device. Newer devices, such as Non-Volatile Memory Express (NVMe), are best positioned to take advantage of this feature due to their native support for multiple hardware submission and completion queues, and their low-latency performance characteristics. Performance gains, as always, will depend on the exact hardware and workload. The blk-mq feature is currently implemented, and enabled by default, in the following drivers: virtio-blk, mtip32xx, nvme, and rbd. The related feature, scsi-mq, allows Small Computer System Interface (SCSI) device drivers to use the blk-mq infrastructure. The scsi-mq feature is provided as a Technology Preview in Red Hat Enterprise Linux 7.2. To enable scsi-mq, specify scsi_mod.use_blk_mq=y on the kernel command line. The default value is n (disabled). The device mapper (DM) multipath target, which uses request-based DM, can also be configured to use the blk-mq infrastructure if the dm_mod.use_blk_mq=y kernel option is specified. The default value is n (disabled). It may be beneficial to set dm_mod.use_blk_mq=y if the underlying SCSI devices are also using blk-mq, as doing so reduces locking overhead at the DM layer. To determine whether DM multipath is using blk-mq on a system, cat the file /sys/block/dm-X/dm/use_blk_mq , where dm-X is replaced by the DM multipath device of interest. This file is read-only and reflects what the global value in /sys/module/dm_mod/parameters/use_blk_mq was at the time the request-based DM multipath device was created. New delay_watch_checks and delay_wait_checks options in the multipath.conf file Should a path be unreliable, as when the connection drops in and out frequently, multipathd will still continuously attempt to use that path. The timeout before multipathd realizes that the path is no longer accessible is 300 seconds, which can give the appearance that multipathd has stalled. To fix this, two new configuration options have been added: delay_watch_checks and delay_wait_checks. Set delay_watch_checks to the number of cycles that multipathd is to watch the path after it comes online. Should the path fail in under that assigned value, multipathd will not use it. multipathd will then rely on the delay_wait_checks option to tell it how many consecutive cycles the path must pass before it becomes valid again. This prevents unreliable paths from immediately being used as soon as they come back online. New config_dir option in the multipath.conf file Users were unable to split their configuration between /etc/multipath.conf and other configuration files. This prevented users from setting up one main configuration file for all their machines and keeping machine-specific configuration information in separate configuration files for each machine. To address this, a new config_dir option was added to the multipath.conf file.
Users must change the config_dir option to either an empty string or a fully qualified directory path name. When set to anything other than an empty string, multipath will read all .conf files in alphabetical order. It will then apply the configurations exactly as if they had been added to /etc/multipath.conf. If this change is not made, config_dir defaults to /etc/multipath/conf.d. New dmstats command to display and manage I/O statistics for regions of devices that use the device-mapper driver The dmstats command provides userspace support for device-mapper I/O statistics. This allows a user to create, manage and report I/O counters, metrics and latency histogram data for user-defined arbitrary regions of device-mapper devices. Statistics fields are now available in dmsetup reports and the dmstats command adds new specialized reporting modes designed for use with statistics information. For information on the dmstats command, see the dmstats(8) man page. LVM Cache LVM cache has been fully supported since Red Hat Enterprise Linux 7.1. This feature allows users to create logical volumes (LVs) with a small fast device performing as a cache to larger slower devices. Refer to the lvmcache(7) manual page for information on creating cache logical volumes. Note the following restrictions on the use of cache LVs: * The cache LV must be a top-level device. It cannot be used as a thin-pool LV, an image of a RAID LV, or any other sub-LV type. * The cache LV sub-LVs (the origin LV, metadata LV, and data LV) can only be of linear, stripe, or RAID type. * The properties of the cache LV cannot be changed after creation. To change cache properties, remove the cache as described in lvmcache(7) and recreate it with the desired properties. New LVM/DM cache policy A new smq dm-cache policy has been written that reduces memory consumption and improves performance for most use cases. It is now the default cache policy for new LVM cache logical volumes. Users who prefer to use the legacy mq cache policy can still do so by supplying the --cachepolicy argument when creating the cache logical volume. LVM systemID LVM volume groups can now be assigned an owner. The volume group owner is the system ID of a host. Only the host with the given system ID can use the VG. This can benefit volume groups that exist on shared devices, visible to multiple hosts, which are otherwise not protected from concurrent use from multiple hosts. LVM volume groups on shared devices with an assigned system ID are owned by one host and protected from other hosts. New lvmpolld daemon The lvmpolld daemon provides a polling method for long-running LVM commands. When enabled, control of long-running LVM commands is transferred from the original LVM command to the lvmpolld daemon. This allows the operation to continue independent of the original LVM command. The lvmpolld daemon is enabled by default. Before the introduction of the lvmpolld daemon, any background polling process originating in an lvm2 command initiated inside a cgroup of a systemd service could get killed if the main process (the main service) exited in the cgroup . This could lead to premature termination of the lvm2 polling process. Additionally, lvmpolld helps to prevent spawning lvm2 polling processes querying for progress on the same task multiple times because it tracks the progress for all polling tasks in progress. For further information on the lvmpolld daemon, see the lvm.conf configuration file.
Enhancements to LVM selection criteria The Red Hat Enterprise Linux 7.2 release supports several enhancements to LVM selection criteria. Previously, it was possible to use selection criteria only for reporting commands; LVM now supports selection criteria for several LVM processing commands as well. Additionally, there are several changes in this release to provide better support for time reporting fields and selection. For information on the implementation of these new features, see the LVM Selection Criteria appendix in the Logical Volume Administration manual. The default maximum number of SCSI LUNs is increased The default value for the max_report_luns parameter has been increased from 511 to 16393. This parameter specifies the maximum number of logical units that may be configured when the system scans the SCSI interconnect using the Report LUNs mechanism.
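As a purely illustrative sketch (the numeric values are arbitrary examples, not recommendations), the new multipath options described above could be combined in the defaults section of /etc/multipath.conf as follows:
defaults {
    # Number of cycles to watch a path after it comes back online
    delay_watch_checks 12
    # Number of consecutive cycles the path must pass before it is used again
    delay_wait_checks  8
    # Directory from which additional .conf files are read
    config_dir "/etc/multipath/conf.d"
}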
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/storage
|
Chapter 8. Red Hat Developer Hub integration with Microsoft Azure Kubernetes Service (AKS)
|
Chapter 8. Red Hat Developer Hub integration with Microsoft Azure Kubernetes Service (AKS) You can integrate Developer Hub with Microsoft Azure Kubernetes Service (AKS), which provides a significant advancement in development, offering a streamlined environment for building, deploying, and managing your applications. This integration requires the deployment of Developer Hub on AKS using one of the following methods: The Helm chart The Red Hat Developer Hub Operator 8.1. Monitoring and logging with Azure Kubernetes Services (AKS) in Red Hat Developer Hub Monitoring and logging are integral aspects of managing and maintaining Azure Kubernetes Services (AKS) in Red Hat Developer Hub. With features like Managed Prometheus Monitoring and Azure Monitor integration, administrators can efficiently monitor resource utilization, diagnose issues, and ensure the reliability of their containerized workloads. To enable Managed Prometheus Monitoring, use the --enable-azure-monitor-metrics option within either the az aks create or az aks update command, depending on whether you are creating a new cluster or updating an existing one, such as: az aks create/update --resource-group <your-ResourceGroup> --name <your-Cluster> --enable-azure-monitor-metrics The command installs the metrics add-on, which gathers Prometheus metrics . Using the command, you can enable monitoring of Azure resources through both native Azure Monitor metrics and Prometheus metrics. You can also view the results in the portal under Monitoring Insights . For more information, see Monitor Azure resources with Azure Monitor . Furthermore, metrics from both the Managed Prometheus service and Azure Monitor can be accessed through the Azure Managed Grafana service. For more information, see the Link a Grafana workspace section. By default, Prometheus uses the minimum ingesting profile, which optimizes ingestion volume and sets default configurations for scrape frequency, targets, and metrics collected. The default settings can be customized through custom configuration. Azure offers various methods, including using different ConfigMaps, to provide scrape configuration and other metric add-on settings. For more information about default configuration, see Default Prometheus metrics configuration in Azure Monitor and Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus documentation. 8.1.1. Viewing logs with Azure Kubernetes Services (AKS) You can access live data logs generated by Kubernetes objects and collect log data in Container Insights within AKS. Prerequisites You have deployed Developer Hub on AKS. For more information, see Installing Red Hat Developer Hub on Azure Kubernetes Service (AKS) . Procedure View live logs from your Developer Hub instance Navigate to the Azure Portal. Search for the resource group <your-ResourceGroup> and locate your AKS cluster <your-Cluster> . Select Kubernetes resources Workloads from the menu. Select the <your-rhdh-cr>-developer-hub (in case of Helm Chart installation) or <your-rhdh-cr>-backstage (in case of Operator-backed installation) deployment. Click Live Logs in the left menu. Select the pod. Note There must be only a single pod. Live log data is collected and displayed. View real-time log data from the Container Engine Navigate to the Azure Portal. Search for the resource group <your-ResourceGroup> and locate your AKS cluster <your-Cluster> . Select Monitoring Insights from the menu. Go to the Containers tab.
Find the backend-backstage container and click it to view real-time log data as it's generated by the Container Engine. 8.2. Using Microsoft Azure as an authentication provider in Red Hat Developer Hub The core-plugin-api package in Developer Hub comes integrated with the Microsoft Azure authentication provider, which authenticates sign-in using Azure OAuth. Prerequisites You have deployed Developer Hub on AKS. For more information, see Installing Red Hat Developer Hub on Azure Kubernetes Service (AKS) . You have registered your application in the Azure portal. For more information, see Register an application with the Microsoft identity platform . 8.2.1. Using Microsoft Azure as an authentication provider in Helm deployment You can use Microsoft Azure as an authentication provider in Red Hat Developer Hub when it is installed using the Helm Chart. For more information, see Deploying Developer Hub on AKS with the Helm chart . Procedure After the application is registered, note down the following: clientId : Application (client) ID, found under App Registration Overview . clientSecret : Secret, found under App Registration Certificates & secrets (create a new one if needed). tenantId : Directory (tenant) ID, found under App Registration Overview . Ensure the following fragment is included in your Developer Hub ConfigMap: auth: environment: production providers: microsoft: production: clientId: USD{AZURE_CLIENT_ID} clientSecret: USD{AZURE_CLIENT_SECRET} tenantId: USD{AZURE_TENANT_ID} domainHint: USD{AZURE_TENANT_ID} additionalScopes: - Mail.Send You can either create a new file or add it to an existing one. Apply the ConfigMap to your Kubernetes cluster: kubectl -n <your_namespace> apply -f <app-config>.yaml Create or reuse an existing Secret containing Azure credentials and add the following fragment: stringData: AZURE_CLIENT_ID: <value-of-clientId> AZURE_CLIENT_SECRET: <value-of-clientSecret> AZURE_TENANT_ID: <value-of-tenantId> Apply the secret to your Kubernetes cluster: kubectl -n <your_namespace> apply -f <azure-secrets>.yaml Ensure your values.yaml file references the previously created ConfigMap and Secret: upstream: backstage: ... extraAppConfig: - filename: ... configMapRef: <app-config-containing-azure> extraEnvVarsSecrets: - <secret-containing-azure> Optional: If the Helm Chart is already installed, upgrade it: Optional: If your rhdh.yaml file is not changed, for example, you only updated the ConfigMap and Secret referenced from it, refresh your Developer Hub deployment by removing the corresponding pods: kubectl -n <your_namespace> delete pods -l backstage.io/app=backstage-<your-rhdh-cr> 8.2.2. Using Microsoft Azure as an authentication provider in Operator-backed deployment You can use Microsoft Azure as an authentication provider in Red Hat Developer Hub when it is installed using the Operator. For more information, see Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator . Procedure After the application is registered, note down the following: clientId : Application (client) ID, found under App Registration Overview . clientSecret : Secret, found under App Registration Certificates & secrets (create a new one if needed). tenantId : Directory (tenant) ID, found under App Registration Overview .
Ensure the following fragment is included in your Developer Hub ConfigMap: auth: environment: production providers: microsoft: production: clientId: USD{AZURE_CLIENT_ID} clientSecret: USD{AZURE_CLIENT_SECRET} tenantId: USD{AZURE_TENANT_ID} domainHint: USD{AZURE_TENANT_ID} additionalScopes: - Mail.Send You can either create a new file or add it to an existing one. Apply the ConfigMap to your Kubernetes cluster: kubectl -n <your_namespace> apply -f <app-config>.yaml Create or reuse an existing Secret containing Azure credentials and add the following fragment: stringData: AZURE_CLIENT_ID: <value-of-clientId> AZURE_CLIENT_SECRET: <value-of-clientSecret> AZURE_TENANT_ID: <value-of-tenantId> Apply the secret to your Kubernetes cluster: kubectl -n <your_namespace> apply -f <azure-secrets>.yaml Ensure your Custom Resource manifest contains references to the previously created ConfigMap and Secret: apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: <your-rhdh-cr> spec: application: imagePullSecrets: - rhdh-pull-secret route: enabled: false appConfig: configMaps: - name: <app-config-containing-azure> extraEnvs: secrets: - name: <secret-containing-azure> Apply your Custom Resource manifest: kubectl -n <your_namespace> apply -f rhdh.yaml Optional: If your rhdh.yaml file is not changed, for example, you only updated the ConfigMap and Secret referenced from it, refresh your Developer Hub deployment by removing the corresponding pods: kubectl -n <your_namespace> delete pods -l backstage.io/app=backstage-<your-rhdh-cr>
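For reference only, a complete Secret manifest built around the stringData fragment above might look like the following sketch; the metadata name must match the Secret name referenced from your values file or Custom Resource:
apiVersion: v1
kind: Secret
metadata:
  name: <secret-containing-azure>
type: Opaque
stringData:
  AZURE_CLIENT_ID: <value-of-clientId>
  AZURE_CLIENT_SECRET: <value-of-clientSecret>
  AZURE_TENANT_ID: <value-of-tenantId>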
|
[
"az aks create/update --resource-group <your-ResourceGroup> --name <your-Cluster> --enable-azure-monitor-metrics",
"auth: environment: production providers: microsoft: production: clientId: USD{AZURE_CLIENT_ID} clientSecret: USD{AZURE_CLIENT_SECRET} tenantId: USD{AZURE_TENANT_ID} domainHint: USD{AZURE_TENANT_ID} additionalScopes: - Mail.Send",
"-n <your_namespace> apply -f <app-config>.yaml",
"stringData: AZURE_CLIENT_ID: <value-of-clientId> AZURE_CLIENT_SECRET: <value-of-clientSecret> AZURE_TENANT_ID: <value-of-tenantId>",
"-n <your_namespace> apply -f <azure-secrets>.yaml",
"upstream: backstage: extraAppConfig: - filename: configMapRef: <app-config-containing-azure> extraEnvVarsSecrets: - <secret-containing-azure>",
"helm -n <your_namespace> upgrade -f <your-values.yaml> <your_deploy_name> redhat-developer/backstage --version 1.2.6",
"-n <your_namespace> delete pods -l backstage.io/app=backstage-<your-rhdh-cr>",
"auth: environment: production providers: microsoft: production: clientId: USD{AZURE_CLIENT_ID} clientSecret: USD{AZURE_CLIENT_SECRET} tenantId: USD{AZURE_TENANT_ID} domainHint: USD{AZURE_TENANT_ID} additionalScopes: - Mail.Send",
"-n <your_namespace> apply -f <app-config>.yaml",
"stringData: AZURE_CLIENT_ID: <value-of-clientId> AZURE_CLIENT_SECRET: <value-of-clientSecret> AZURE_TENANT_ID: <value-of-tenantId>",
"-n <your_namespace> apply -f <azure-secrets>.yaml",
"apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: <your-rhdh-cr> spec: application: imagePullSecrets: - rhdh-pull-secret route: enabled: false appConfig: configMaps: - name: <app-config-containing-azure> extraEnvs: secrets: - name: <secret-containing-azure>",
"-n <your_namespace> apply -f rhdh.yaml",
"-n <your_namespace> delete pods -l backstage.io/app=backstage-<your-rhdh-cr>"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/administration_guide_for_red_hat_developer_hub/assembly-rhdh-integration-aks
|
24.3. Configuring an Exported File System for Diskless Clients
|
24.3. Configuring an Exported File System for Diskless Clients Prerequisites Install the necessary packages. See Chapter 24, Setting up a Remote Diskless System Configure the tftp service. See Section 24.1, "Configuring a tftp Service for Diskless Clients" . Configure DHCP. See Section 24.2, "Configuring DHCP for Diskless Clients" . Procedure The root directory of the exported file system (used by diskless clients in the network) is shared via NFS. Configure the NFS service to export the root directory by adding it to /etc/exports . For instructions on how to do so, see the Section 8.6.1, "The /etc/exports Configuration File" . To accommodate completely diskless clients, the root directory should contain a complete Red Hat Enterprise Linux installation. You can either clone an existing installation or install a new base system: To synchronize with a running system, use the rsync utility: Replace hostname.com with the hostname of the running system with which to synchronize via rsync . Replace exported-root-directory with the path to the exported file system. To install Red Hat Enterprise Linux to the exported location, use the yum utility with the --installroot option: The file system to be exported still needs to be configured further before it can be used by diskless clients. To do this, perform the following procedure: Procedure 24.2. Configure File System Select the kernel that diskless clients should use ( vmlinuz- kernel-version ) and copy it to the tftp boot directory: Create the initrd (that is, initramfs- kernel-version .img ) with NFS support: Change the initrd's file permissions to 644 using the following command: Warning If the initrd's file permissions are not changed, the pxelinux.0 boot loader will fail with a "file not found" error. Copy the resulting initramfs- kernel-version .img into the tftp boot directory as well. Edit the default boot configuration to use the initrd and kernel in the /var/lib/tftpboot/ directory. This configuration should instruct the diskless client's root to mount the exported file system ( /exported/root/directory ) as read-write. Add the following configuration in the /var/lib/tftpboot/pxelinux.cfg/default file: Replace server-ip with the IP address of the host machine on which the tftp and DHCP services reside. The NFS share is now ready for exporting to diskless clients. These clients can boot over the network via PXE.
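For reference, the /etc/exports entry mentioned in the first step might look like the following sketch; the client subnet and export options shown are assumptions and should be adjusted for your environment:
/exported/root/directory 192.168.0.0/24(rw,sync,no_root_squash)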
|
[
"rsync -a -e ssh --exclude='/proc/*' --exclude='/sys/*' hostname.com :/ exported-root-directory",
"yum install @Base kernel dracut-network nfs-utils --installroot= exported-root-directory --releasever=/",
"cp /boot/vmlinuz- kernel-version /var/lib/tftpboot/",
"dracut --add nfs initramfs- kernel-version .img kernel-version",
"chmod 644 initramfs- kernel-version .img",
"default rhel7 label rhel7 kernel vmlinuz- kernel-version append initrd=initramfs- kernel-version .img root=nfs: server-ip : /exported/root/directory rw"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/diskless-nfs-config
|
5.2. Resource Properties
|
5.2. Resource Properties The properties that you define for a resource tell the cluster which script to use for the resource, where to find that script and what standards it conforms to. Table 5.1, "Resource Properties" describes these properties. Table 5.1. Resource Properties Field Description resource_id Your name for the resource standard The standard the script conforms to. Allowed values: ocf , service , upstart , systemd , lsb , stonith type The name of the Resource Agent you wish to use, for example IPaddr or Filesystem provider The OCF spec allows multiple vendors to supply the same ResourceAgent. Most of the agents shipped by Red Hat use heartbeat as the provider. Table 5.2, "Commands to Display Resource Properties" summarizes the commands that display the available resource properties. Table 5.2. Commands to Display Resource Properties pcs Display Command Output pcs resource list Displays a list of all available resources. pcs resource standards Displays a list of available resource agent standards. pcs resource providers Displays a list of available resource agent providers. pcs resource list string Displays a list of available resources filtered by the specified string. You can use this command to display resources filtered by the name of a standard, a provider, or a type.
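For example (an illustrative command, not part of the tables above), the following creates a resource whose resource_id is VirtualIP, with standard ocf, provider heartbeat, and type IPaddr2; the IP address and netmask values are placeholders:
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s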
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-resourceprops-haar
|
Red Hat build of Eclipse Vert.x 4.3.7 - Planned End of Life
|
Red Hat build of Eclipse Vert.x 4.3.7 - Planned End of Life The Red Hat build of Eclipse Vert.x 4.3.7 is the last supported release that Red Hat plans to provide. The full support ends on May 31, 2023. See the product life cycle page for details. Red Hat will continue to deliver security and bug fixes for Red Hat build of Eclipse Vert.x with 4.3.x releases until the product end of life. You can migrate Eclipse Vert.x applications to the Red Hat build of Quarkus. Red Hat build of Quarkus Quarkus is a Kubernetes-native Java framework tailored for JVM and native compilation, created using the best Java libraries and standards. It provides an effective solution for running Java applications in environments such as serverless, microservices, containers, Kubernetes, FaaS, or the cloud. The reactive capabilities of Quarkus use Eclipse Vert.x internally, and you can reuse Eclipse Vert.x applications in Quarkus. Therefore, migration of Eclipse Vert.x applications to Quarkus is the recommended option. See the Quarkus product page and documentation for more information. We will create resources to help you with the migration process.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/release_notes_for_eclipse_vert.x_4.3/con_red-hat-build-of-eclipse-vertx-end-of-life
|
Chapter 6. Known Issues
|
Chapter 6. Known Issues SB-1165 : Database application fails to run because org.apache.tomcat.jdbc.pool.DataSource can not be found. ENTSBT-1211 : A running service does not automatically recover after a short time, as it should, after pressing the Stop Service button in the Health Check Spring Boot Example. ENTSBT-1243 : The provided hibernate-core version is not compatible with Spring Boot 2.7. ENTSBT-1258 : jakarta.xml.bind-api-2.3.3.redhat-00001-sources.jar file, which is a dependency for Dekorate 2.11.5, is not brought in. ENTSBT-1261 : Running crud-example throws java.sql.SQLFeatureNotSupportedException exception. ENTSBT-1332 : Enabling pooledPreparedStatements causes a StackOverflowError on transaction timeout. ENTSBT-1354 : The Narayana Spring Boot starter includes spring-boot-starter version 2.7.2 . Use Maven version 3.8.5 to ensure that spring-boot.version is overridden as expected. When using a Maven version other than 3.8.5, if you specify the value of spring-boot.version by using mvn -Dspring-boot.version , the spring-boot-starter version is not always overridden.
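For illustration only, the property override described in ENTSBT-1354 would be specified like this when building with Maven 3.8.5 (the version value is a placeholder):
mvn clean verify -Dspring-boot.version=<desired-spring-boot-version>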
| null |
https://docs.redhat.com/en/documentation/red_hat_support_for_spring_boot/2.7/html/release_notes_for_spring_boot_2.7/known-issues-spring-boot
|
1.9. Expect the Unexpected
|
1.9. Expect the Unexpected While the phrase "expect the unexpected" is trite, it reflects an underlying truth that all system administrators must understand: There will be times when you are caught off-guard. After becoming comfortable with this uncomfortable fact of life, what can a concerned system administrator do? The answer lies in flexibility; by performing your job in such a way as to give you (and your users) the most options possible. Take, for example, the issue of disk space. Given that never having sufficient disk space seems to be as much a physical law as the law of gravity, it is reasonable to assume that at some point you will be confronted with a desperate need for additional disk space right now . What would a system administrator who expects the unexpected do in this case? Perhaps it is possible to keep a few disk drives sitting on the shelf as spares in case of hardware problems [2] . A spare of this type could be quickly deployed [3] on a temporary basis to address the short-term need for disk space, giving time to more permanently resolve the issue (by following the standard procedure for procuring additional disk drives, for example). By trying to anticipate problems before they occur, you will be in a position to respond more quickly and effectively than if you let yourself be surprised. [2] And of course a system administrator that expects the unexpected would naturally use RAID (or related technologies) to lessen the impact of a critical disk drive failing during production. [3] Again, system administrators that think ahead configure their systems to make it as easy as possible to quickly add a new disk drive to the system.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-philosophy-unexpected
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/logging_configuration/making-open-source-more-inclusive
|
8.8. Generating a New Unique MAC Address
|
8.8. Generating a New Unique MAC Address In some cases you will need to generate a new and unique MAC address for a guest virtual machine. There is no command line tool available to generate a new MAC address at the time of writing. The script provided below can generate a new MAC address for your guest virtual machines. Save the script on your guest virtual machine as macgen.py . Now from that directory you can run the script using ./macgen.py and it will generate a new MAC address. A sample output would look like the following: 8.8.1. Another Method to Generate a New MAC for Your Guest Virtual Machine You can also use the built-in modules of python-virtinst to generate a new MAC address and UUID for use in a guest virtual machine configuration file: The script above can also be implemented as a script file as seen below.
|
[
"./macgen.py 00:16:3e:20:b0:11",
"#!/usr/bin/python macgen.py script to generate a MAC address for guest virtual machines # import random # def randomMAC(): mac = [ 0x00, 0x16, 0x3e, random.randint(0x00, 0x7f), random.randint(0x00, 0xff), random.randint(0x00, 0xff) ] return ':'.join(map(lambda x: \"%02x\" % x, mac)) # print randomMAC()",
"echo 'import virtinst.util ; print virtinst.util.uuidToString(virtinst.util.randomUUID())' | python echo 'import virtinst.util ; print virtinst.util.randomMAC()' | python",
"#!/usr/bin/env python -*- mode: python; -*- print \"\" print \"New UUID:\" import virtinst.util ; print virtinst.util.uuidToString(virtinst.util.randomUUID()) print \"New MAC:\" import virtinst.util ; print virtinst.util.randomMAC() print \"\""
] |
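The macgen.py and python-virtinst listings above target Python 2, which matches the Red Hat Enterprise Linux 6 era of this section. Purely as a sketch, and not part of the original article, the same idea can be expressed for a Python 3 environment; the 00:16:3e prefix is the one the original script uses for guest virtual machines.

#!/usr/bin/env python3
# Python 3 sketch of the macgen.py idea shown above: build a MAC address
# in the 00:16:3e prefix range used by the original listing.
import random

def random_mac() -> str:
    octets = [
        0x00, 0x16, 0x3e,                 # fixed prefix taken from the original script
        random.randint(0x00, 0x7f),
        random.randint(0x00, 0xff),
        random.randint(0x00, 0xff),
    ]
    return ":".join(f"{octet:02x}" for octet in octets)

if __name__ == "__main__":
    print(random_mac())                   # sample output: 00:16:3e:20:b0:11

Running it prints one address per invocation, just like ./macgen.py in the procedure above.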
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-tips_and_tricks-generating_a_new_unique_mac_address
|
Chapter 9. Managing users and roles
|
Chapter 9. Managing users and roles A User defines a set of details for individuals using the system. Users can be associated with organizations and environments, so that when they create new entities, the default settings are automatically used. Users can also have one or more roles attached, which grants them rights to view and manage organizations and environments. See Section 9.1, "Managing users" for more information on working with users. You can manage permissions of several users at once by organizing them into user groups. User groups themselves can be further grouped to create a hierarchy of permissions. For more information on creating user groups, see Section 9.4, "Creating and managing user groups" . Roles define a set of permissions and access levels. Each role contains one or more permission filters that specify the actions allowed for the role. Actions are grouped according to the Resource type . Once a role has been created, users and user groups can be associated with that role. This way, you can assign the same set of permissions to large groups of users. Satellite provides a set of predefined roles and also enables creating custom roles and permission filters as described in Section 9.5, "Creating and managing roles" . 9.1. Managing users As an administrator, you can create, modify and remove Satellite users. You can also configure access permissions for a user or a group of users by assigning them different roles . 9.1.1. Creating a user Use this procedure to create a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Click Create User . Enter the account details for the new user. Click Submit to create the user. The user account details that you can specify include the following: On the User tab, select an authentication source from the Authorized by list: INTERNAL : to manage the user inside Satellite Server. EXTERNAL : to manage the user with external authentication. For more information, see Configuring External Authentication in Installing Satellite Server in a connected network environment . On the Organizations tab, select an organization for the user. Specify the default organization Satellite selects for the user after login from the Default on login list. Important If a user is not assigned to an organization, their access is limited. CLI procedure Create a user: The --auth-source-id 1 setting means that the user is authenticated internally; you can specify an external authentication source as an alternative. Add the --admin option to grant administrator privileges to the user. Specifying organization IDs is not required. You can modify the user details later by using the hammer user update command. Additional resources For more information about creating user accounts by using Hammer, enter hammer user create --help . 9.1.2. Assigning roles to a user Use this procedure to assign roles to a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Click the username of the user to be assigned one or more roles. Note If a user account is not listed, check that you are currently viewing the correct organization. To list all the users in Satellite, click Default Organization and then Any Organization . Click the Locations tab, and select a location if none is assigned. Click the Organizations tab, and check that an organization is assigned.
Click the Roles tab to display the list of available roles. Select the roles to assign from the Roles list. To grant all the available permissions, select the Administrator checkbox. Click Submit . To view the roles assigned to a user, click the Roles tab; the assigned roles are listed under Selected items . To remove an assigned role, click the role name in Selected items . CLI procedure To assign roles to a user, enter the following command: 9.1.3. Impersonating a different user account Administrators can impersonate other authenticated users for testing and troubleshooting purposes by temporarily logging on to the Satellite web UI as a different user. When impersonating another user, the administrator has permissions to access exactly what the impersonated user can access in the system, including the same menus. Audits are created to record the actions that the administrator performs while impersonating another user. However, all actions that an administrator performs while impersonating another user are recorded as having been performed by the impersonated user. Prerequisites Ensure that you are logged on to the Satellite web UI as a user with administrator privileges for Satellite. Procedure In the Satellite web UI, navigate to Administer > Users . To the right of the user that you want to impersonate, from the list in the Actions column, select Impersonate . When you want to stop the impersonation session, in the upper right of the main menu, click the impersonation icon. 9.1.4. Creating an API-only user You can create users that can interact only with the Satellite API. Prerequisites You have created a user and assigned roles to them. Note that this user must be authorized internally. For more information, see the following sections: Section 9.1.1, "Creating a user" Section 9.1.2, "Assigning roles to a user" Procedure Log in to your Satellite as admin. Navigate to Administer > Users and select a user. On the User tab, set a password. Do not save or communicate this password with others. You can create pseudo-random strings on your console: Create a Personal Access Token for the user. For more information, see Section 9.3.1, "Creating a Personal Access Token" . 9.2. Managing SSH keys Adding SSH keys to a user allows deployment of SSH keys during provisioning. For information on deploying SSH keys during provisioning, see Deploying SSH Keys during Provisioning in Provisioning hosts . For information on SSH keys and SSH key creation, see Using SSH-based Authentication in Red Hat Enterprise Linux 8 Configuring basic system settings . 9.2.1. Managing SSH keys for a user Use this procedure to add or remove SSH keys for a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Ensure that you are logged in to the Satellite web UI as an Admin user of Red Hat Satellite or a user with the create_ssh_key permission enabled for adding SSH key and destroy_ssh_key permission for removing a key. Procedure In the Satellite web UI, navigate to Administer > Users . From the Username column, click on the username of the required user. Click on the SSH Keys tab. To Add SSH key Prepare the content of the public SSH key in a clipboard. Click Add SSH Key . In the Key field, paste the public SSH key content from the clipboard. In the Name field, enter a name for the SSH key. Click Submit . To Remove SSH key Click Delete on the row of the SSH key to be deleted. Click OK in the confirmation prompt. 
CLI procedure To add an SSH key to a user, you must specify either the path to the public SSH key file, or the content of the public SSH key copied to the clipboard. If you have the public SSH key file, enter the following command: If you have the content of the public SSH key, enter the following command: To delete an SSH key from a user, enter the following command: To view an SSH key attached to a user, enter the following command: To list SSH keys attached to a user, enter the following command: 9.3. Managing Personal Access Tokens Personal Access Tokens allow you to authenticate API requests without using your password. You can set an expiration date for your Personal Access Token and you can revoke it if you decide it should expire before the expiration date. 9.3.1. Creating a Personal Access Token Use this procedure to create a Personal Access Token. Procedure In the Satellite web UI, navigate to Administer > Users . Select a user for which you want to create a Personal Access Token. On the Personal Access Tokens tab, click Add Personal Access Token . Enter a Name for you Personal Access Token. Optional: Select the Expires date to set an expiration date. If you do not set an expiration date, your Personal Access Token will never expire unless revoked. Click Submit. You now have the Personal Access Token available to you on the Personal Access Tokens tab. Important Ensure to store your Personal Access Token as you will not be able to access it again after you leave the page or create a new Personal Access Token. You can click Copy to clipboard to copy your Personal Access Token. Verification Make an API request to your Satellite Server and authenticate with your Personal Access Token: You should receive a response with status 200 , for example: If you go back to Personal Access Tokens tab, you can see the updated Last Used time to your Personal Access Token. 9.3.2. Revoking a Personal Access Token Use this procedure to revoke a Personal Access Token before its expiration date. Procedure In the Satellite web UI, navigate to Administer > Users . Select a user for which you want to revoke the Personal Access Token. On the Personal Access Tokens tab, locate the Personal Access Token you want to revoke. Click Revoke in the Actions column to the Personal Access Token you want to revoke. Verification Make an API request to your Satellite Server and try to authenticate with the revoked Personal Access Token: You receive the following error message: 9.4. Creating and managing user groups 9.4.1. User groups With Satellite, you can assign permissions to groups of users. You can also create user groups as collections of other user groups. If using an external authentication source, you can map Satellite user groups to external user groups as described in Configuring External User Groups in Installing Satellite Server in a connected network environment . User groups are defined in an organizational context, meaning that you must select an organization before you can access user groups. 9.4.2. Creating a user group Use this procedure to create a user group. Procedure In the Satellite web UI, navigate to Administer > User Groups . Click Create User group . On the User Group tab, specify the name of the new user group and select group members: Select the previously created user groups from the User Groups list. Select users from the Users list. On the Roles tab, select the roles you want to assign to the user group. Alternatively, select the Admin checkbox to assign all available permissions. 
Click Submit . CLI procedure To create a user group, enter the following command: 9.4.3. Removing a user group Use the following procedure to remove a user group from Satellite. Procedure In the Satellite web UI, navigate to Administer > User Groups . Click Delete to the right of the user group you want to delete. Click Confirm to delete the user group. 9.5. Creating and managing roles Satellite provides a set of predefined roles with permissions sufficient for standard tasks, as listed in Section 9.6, "Predefined roles available in Satellite" . It is also possible to configure custom roles, and assign one or more permission filters to them. Permission filters define the actions allowed for a certain resource type. Certain Satellite plugins create roles automatically. 9.5.1. Creating a role Use this procedure to create a role. Procedure In the Satellite web UI, navigate to Administer > Roles . Click Create Role . Provide a Name for the role. Click Submit to save your new role. CLI procedure To create a role, enter the following command: To serve its purpose, a role must contain permissions. After creating a role, proceed to Section 9.5.3, "Adding permissions to a role" . 9.5.2. Cloning a role Use the Satellite web UI to clone a role. Procedure In the Satellite web UI, navigate to Administer > Roles and select Clone from the drop-down menu to the right of the required role. Provide a Name for the role. Click Submit to clone the role. Click the name of the cloned role and navigate to Filters . Edit the permissions as required. Click Submit to save your new role. 9.5.3. Adding permissions to a role Use this procedure to add permissions to a role. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Roles . Select Add Filter from the drop-down list to the right of the required role. Select the Resource type from the drop-down list. The (Miscellaneous) group gathers permissions that are not associated with any resource group. Click the permissions you want to select from the Permission list. Depending on the Resource type selected, you can select or deselect the Unlimited and Override checkbox. The Unlimited checkbox is selected by default, which means that the permission is applied on all resources of the selected type. When you disable the Unlimited checkbox, the Search field activates. In this field you can specify further filtering with use of the Satellite search syntax. For more information, see Section 9.7, "Granular permission filtering" . When you enable the Override checkbox, you can add additional locations and organizations to allow the role to access the resource type in the additional locations and organizations; you can also remove an already associated location and organization from the resource type to restrict access. Click . Click Submit to save changes. CLI procedure List all available permissions: Add permissions to a role: For more information about roles and permissions parameters, enter the hammer role --help and hammer filter --help commands. 9.5.4. Viewing permissions of a role Use the Satellite web UI to view the permissions of a role. Procedure In the Satellite web UI, navigate to Administer > Roles . Click Filters to the right of the required role to get to the Filters page. The Filters page contains a table of permissions assigned to a role grouped by the resource type. It is also possible to generate a complete table of permissions and actions that you can use on your Satellite system. 
For more information, see Section 9.5.5, "Creating a complete permission table" . 9.5.5. Creating a complete permission table Use the Satellite CLI to create a permission table. Procedure Start the Satellite console with the following command: Insert the following code into the console: The above syntax creates a table of permissions and saves it to the /tmp/table.html file. Press Ctrl + D to exit the Satellite console. Insert the following text at the first line of /tmp/table.html : Append the following text at the end of /tmp/table.html : Open /tmp/table.html in a web browser to view the table. 9.5.6. Removing a role Use the following procedure to remove a role from Satellite. Procedure In the Satellite web UI, navigate to Administer > Roles . Select Delete from the drop-down list to the right of the role to be deleted. Click Confirm to delete the role. 9.6. Predefined roles available in Satellite The following table provides an overview of permissions that predefined roles in Satellite grant to a user. For a complete set of predefined roles and the permissions they grant, log in to Satellite web UI as the privileged user and navigate to Administer > Roles . For more information, see Section 9.5.4, "Viewing permissions of a role" . Predefined role Permissions the role provides Additional information Auditor View the Audit log. Default role View tasks and jobs invocations. Satellite automatically assigns this role to every user in the system. Manager View and edit global settings. Organization admin All permissions except permissions for managing organizations. An administrator role defined per organization. The role has no visibility into resources in other organizations. By cloning this role and assigning an organization, you can delegate administration of that organization to a user. Site manager View permissions for various items. Permissions to manage hosts in the infrastructure. A restrained version of the Manager role. System admin Edit global settings in Administer > Settings . View, create, edit, and destroy users, user groups, and roles. View, create, edit, destroy, and assign organizations and locations but not view resources within them. Users with this role can create users and assign all roles to them. Give this role only to trusted users. Viewer View the configuration of every element of the Satellite structure, logs, reports, and statistics. 9.7. Granular permission filtering As mentioned in Section 9.5.3, "Adding permissions to a role" , Red Hat Satellite provides the ability to limit the configured user permissions to selected instances of a resource type. These granular filters are queries to the Satellite database and are supported by the majority of resource types. 9.7.1. Creating a granular permission filter Use this procedure to create a granular filter. To use the CLI instead of the Satellite web UI, see the CLI procedure . Satellite does not apply search conditions to create actions. For example, limiting the create_locations action with name = "Default Location" expression in the search field does not prevent the user from assigning a custom name to the newly created location. Procedure Specify a query in the Search field on the Edit Filter page. Deselect the Unlimited checkbox for the field to be active. Queries have the following form: field_name marks the field to be queried. The range of available field names depends on the resource type. For example, the Partition Table resource type offers family , layout , and name as query parameters. 
operator specifies the type of comparison between field_name and value . See Section 9.7.3, "Supported operators for granular search" for an overview of applicable operators. value is the value used for filtering. This can be for example a name of an organization. Two types of wildcard characters are supported: underscore (_) provides single character replacement, while percent sign (%) replaces zero or more characters. For most resource types, the Search field provides a drop-down list suggesting the available parameters. This list appears after placing the cursor in the search field. For many resource types, you can combine queries using logical operators such as and , not and has operators. CLI procedure To create a granular filter, enter the hammer filter create command with the --search option to limit permission filters, for example: This command adds to the qa-user role a permission to view, create, edit, and destroy content views that only applies to content views with name starting with ccv . 9.7.2. Examples of using granular permission filters As an administrator, you can allow selected users to make changes in a certain part of the environment path. The following filter allows you to work with content while it is in the development stage of the application lifecycle, but the content becomes inaccessible once is pushed to production. 9.7.2.1. Applying permissions for the host resource type The following query applies any permissions specified for the Host resource type only to hosts in the group named host-editors. The following query returns records where the name matches XXXX , Yyyy , or zzzz example strings: You can also limit permissions to a selected environment. To do so, specify the environment name in the Search field, for example: You can limit user permissions to a certain organization or location with the use of the granular permission filter in the Search field. However, some resource types provide a GUI alternative, an Override checkbox that provides the Locations and Organizations tabs. On these tabs, you can select from the list of available organizations and locations. For more information, see Section 9.7.2.2, "Creating an organization-specific manager role" . 9.7.2.2. Creating an organization-specific manager role Use the Satellite web UI to create an administrative role restricted to a single organization named org-1 . Procedure In the Satellite web UI, navigate to Administer > Roles . Clone the existing Organization admin role. Select Clone from the drop-down list to the Filters button. You are then prompted to insert a name for the cloned role, for example org-1 admin . Click the desired locations and organizations to associate them with the role. Click Submit to create the role. Click org-1 admin , and click Filters to view all associated filters. The default filters work for most use cases. However, you can optionally click Edit to change the properties for each filter. For some filters, you can enable the Override option if you want the role to be able to access resources in additional locations and organizations. For example, by selecting the Domain resource type, the Override option, and then additional locations and organizations using the Locations and Organizations tabs, you allow this role to access domains in the additional locations and organizations that is not associated with this role. You can also click New filter to associate new filters with this role. 9.7.3. Supported operators for granular search Table 9.1. 
Logical operators Operator Description and Combines search criteria. not Negates an expression. has Object must have a specified property. Table 9.2. Symbolic operators Operator Description = Is equal to . An equality comparison that is case-sensitive for text fields. != Is not equal to . An inversion of the = operator. ~ Like . A case-insensitive occurrence search for text fields. !~ Not like . An inversion of the ~ operator. ^ In . A case-sensitive equality comparison for text fields. This generates a different SQL query from the Is equal to comparison, and is more efficient for multiple value comparisons. !^ Not in . An inversion of the ^ operator. >, >= Greater than , greater than or equal to . Supported for numerical fields only. <, <= Less than , less than or equal to . Supported for numerical fields only.
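Section 9.3.1 above verifies a Personal Access Token with curl against the /api/status endpoint. As a minimal sketch only, and not part of the original procedure, the same check can be scripted in Python with the requests library. The host name, user name, token, and the search value in the second request are placeholder assumptions, and the second request additionally assumes that index endpoints such as /api/hosts accept a search parameter using the granular filter syntax from Section 9.7.

# Sketch: verify a Personal Access Token the same way the curl example does,
# then reuse it for a scoped query. All credentials below are placeholders.
# If the Satellite CA is not in the system trust store, pass
# verify=<path to the CA bundle> to each request.
import requests

SATELLITE = "https://satellite.example.com"
USERNAME = "My_Username"
TOKEN = "My_Personal_Access_Token"         # sent in place of the password

# Equivalent of: curl https://satellite.example.com/api/status --user USER:TOKEN
status = requests.get(f"{SATELLITE}/api/status", auth=(USERNAME, TOKEN))
status.raise_for_status()
print(status.json())                        # expect "status": 200 on success

# Assumed example: a scoped search using the granular filter syntax from Section 9.7.
hosts = requests.get(
    f"{SATELLITE}/api/hosts",
    auth=(USERNAME, TOKEN),
    params={"search": 'hostgroup = "host-editors"'},
)
print(hosts.status_code)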
|
[
"hammer user create --auth-source-id My_Authentication_Source --login My_User_Name --mail My_User_Mail --organization-ids My_Organization_ID_1 , My_Organization_ID_2 --password My_User_Password",
"hammer user add-role --id user_id --role role_name",
"openssl rand -hex 32",
"hammer user ssh-keys add --user-id user_id --name key_name --key-file ~/.ssh/id_rsa.pub",
"hammer user ssh-keys add --user-id user_id --name key_name --key ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNtYAAABBBHHS2KmNyIYa27Qaa7EHp+2l99ucGStx4P77e03ZvE3yVRJEFikpoP3MJtYYfIe8k 1/46MTIZo9CPTX4CYUHeN8= host@user",
"hammer user ssh-keys delete --id key_id --user-id user_id",
"hammer user ssh-keys info --id key_id --user-id user_id",
"hammer user ssh-keys list --user-id user_id",
"curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token",
"{\"satellite_version\":\"6.15.0\",\"result\":\"ok\",\"status\":200,\"version\":\"3.5.1.10\",\"api_version\":2}",
"curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token",
"{ \"error\": {\"message\":\"Unable to authenticate user My_Username \"} }",
"hammer user-group create --name My_User_Group_Name --role-ids My_Role_ID_1 , My_Role_ID_2 --user-ids My_User_ID_1 , My_User_ID_2",
"hammer role create --name My_Role_Name",
"hammer filter available-permissions",
"hammer filter create --permission-ids My_Permission_ID_1 , My_Permission_ID_2 --role My_Role_Name",
"foreman-rake console",
"f = File.open('/tmp/table.html', 'w') result = Foreman::AccessControl.permissions {|a,b| a.security_block <=> b.security_block}.collect do |p| actions = p.actions.collect { |a| \"<li>#{a}</li>\" } \"<tr><td>#{p.name}</td><td><ul>#{actions.join('')}</ul></td><td>#{p.resource_type}</td></tr>\" end.join(\"\\n\") f.write(result)",
"<table border=\"1\"><tr><td>Permission name</td><td>Actions</td><td>Resource type</td></tr>",
"</table>",
"field_name operator value",
"hammer filter create --permission-ids 91 --search \"name ~ ccv*\" --role qa-user",
"hostgroup = host-editors",
"name ^ (XXXX, Yyyy, zzzz)",
"Dev"
] |
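Section 9.1.4 above generates a throwaway password for an API-only user with openssl rand -hex 32. Only as a sketch, and not part of the original procedure, Python's standard-library secrets module produces an equivalent 64-character hexadecimal string:

# Equivalent of "openssl rand -hex 32": 32 random bytes rendered as hex.
# Purely illustrative; the procedure only requires an unpredictable string.
import secrets

print(secrets.token_hex(32))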
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/Managing_Users_and_Roles_admin
|
Installing and viewing dynamic plugins
|
Installing and viewing dynamic plugins Red Hat Developer Hub 1.3 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_and_viewing_dynamic_plugins/index
|
Chapter 2. Managing local storage by using RHEL system roles
|
Chapter 2. Managing local storage by using RHEL system roles To manage LVM and local file systems (FS) by using Ansible, you can use the storage role, which is one of the RHEL system roles available in RHEL 9. Using the storage role enables you to automate administration of file systems on disks and logical volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7. For more information about RHEL system roles and how to apply them, see Introduction to RHEL system roles . 2.1. Creating an XFS file system on a block device by using the storage RHEL system role The example Ansible playbook applies the storage role to create an XFS file system on a block device using the default parameters. Note The storage role can create a file system only on an unpartitioned, whole disk or a logical volume (LV). It cannot create the file system on a partition. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs The volume name ( barefs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks: attribute. You can omit the fs_type: xfs line because XFS is the default file system in RHEL 9. To create the file system on an LV, provide the LVM setup under the disks: attribute, including the enclosing volume group. For details, see Creating or resizing a logical volume by using the storage RHEL system role . Do not provide the path to the LV device. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 2.2. Persistently mounting a file system by using the storage RHEL system role The example Ansible applies the storage role to immediately and persistently mount an XFS file system. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_user: somebody mount_group: somegroup mount_mode: 0755 This playbook adds the file system to the /etc/fstab file, and mounts the file system immediately. If the file system on the /dev/sdb device or the mount point directory do not exist, the playbook creates them. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 2.3. 
Creating or resizing a logical volume by using the storage RHEL system role Use the storage role to perform the following tasks: To create an LVM logical volume in a volume group consisting of many disks To resize an existing file system on LVM To express an LVM volume size in percentage of the pool's total size If the volume group does not exist, the role creates it. If a logical volume exists in the volume group, it is resized if the size does not match what is specified in the playbook. If you are reducing a logical volume, to prevent data loss you must ensure that the file system on that logical volume is not using the space in the logical volume that is being reduced. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create logical volume ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - sda - sdb - sdc volumes: - name: mylv size: 2G fs_type: ext4 mount_point: /mnt/data The settings specified in the example playbook include the following: size: <size> You must specify the size by using units (for example, GiB) or percentage (for example, 60%). For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that specified volume has been created or resized to the requested size: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 2.4. Enabling online block discard by using the storage RHEL system role You can mount an XFS file system with the online block discard option to automatically discard unused blocks. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Enable online block discard ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_options: discard For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that online block discard option is enabled: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 2.5. Creating and mounting an Ext4 file system by using the storage RHEL system role The example Ansible playbook applies the storage role to create and mount an Ext4 file system. 
Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext4 fs_label: label-name mount_point: /mnt/data The playbook creates the file system on the /dev/sdb disk. The playbook persistently mounts the file system at the /mnt/data directory. The label of the file system is label-name . Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 2.6. Creating and mounting an Ext3 file system by using the storage RHEL system role The example Ansible playbook applies the storage role to create and mount an Ext3 file system. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - hosts: all roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext3 fs_label: label-name mount_point: /mnt/data mount_user: somebody mount_group: somegroup mount_mode: 0755 The playbook creates the file system on the /dev/sdb disk. The playbook persistently mounts the file system at the /mnt/data directory. The label of the file system is label-name . Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 2.7. Creating a swap volume by using the storage RHEL system role This section provides an example Ansible playbook. This playbook applies the storage role to create a swap volume, if it does not exist, or to modify the swap volume, if it already exist, on a block device by using the default parameters. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Create a disk device with swap hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: swap_fs type: disk disks: - /dev/sdb size: 15 GiB fs_type: swap The volume name ( swap_fs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks: attribute. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 2.8. 
Configuring a RAID volume by using the storage RHEL system role With the storage system role, you can configure a RAID volume on RHEL by using Red Hat Ansible Automation Platform and Ansible-Core. Create an Ansible playbook with the parameters to configure a RAID volume to suit your requirements. Warning Device names might change in certain circumstances, for example, when you add a new disk to a system. Therefore, to prevent data loss, use persistent naming attributes in the playbook. For more information about persistent naming attributes, see Persistent naming attributes . Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create a RAID on sdd, sde, sdf, and sdg ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_volumes: - name: data type: raid disks: [sdd, sde, sdf, sdg] raid_level: raid0 raid_chunk_size: 32 KiB mount_point: /mnt/data state: present For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that the array was correctly created: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 2.9. Configuring an LVM pool with RAID by using the storage RHEL system role With the storage system role, you can configure an LVM pool with RAID on RHEL by using Red Hat Ansible Automation Platform. You can set up an Ansible playbook with the available parameters to configure an LVM pool with RAID. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure LVM pool with RAID ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] raid_level: raid1 volumes: - name: my_volume size: "1 GiB" mount_point: "/mnt/app/shared" fs_type: xfs state: present For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that your pool is on RAID: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory Managing RAID 2.10. Configuring a stripe size for RAID LVM volumes by using the storage RHEL system role With the storage system role, you can configure a stripe size for RAID LVM volumes on RHEL by using Red Hat Ansible Automation Platform. 
You can set up an Ansible playbook with the available parameters to configure an LVM pool with RAID. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure stripe size for RAID LVM volumes ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] volumes: - name: my_volume size: "1 GiB" mount_point: "/mnt/app/shared" fs_type: xfs raid_level: raid0 raid_stripe_size: "256 KiB" state: present For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that stripe size is set to the required size: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory Managing RAID 2.11. Configuring an LVM-VDO volume by using the storage RHEL system role You can use the storage RHEL system role to create a VDO volume on LVM (LVM-VDO) with enabled compression and deduplication. Note Because of the storage system role use of LVM-VDO, only one volume can be created per pool. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create LVM-VDO volume under volume group 'myvg' ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - /dev/sdb volumes: - name: mylv1 compression: true deduplication: true vdo_pool_size: 10 GiB size: 30 GiB mount_point: /mnt/app/shared The settings specified in the example playbook include the following: vdo_pool_size: <size> The actual size that the volume takes on the device. You can specify the size in human-readable format, such as 10 GiB. If you do not specify a unit, it defaults to bytes. size: <size> The virtual size of VDO volume. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification View the current status of compression and deduplication: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 2.12. Creating a LUKS2 encrypted volume by using the storage RHEL system role You can use the storage role to create and configure a volume encrypted with LUKS by running an Ansible playbook. 
Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: luks_password: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and configure a volume encrypted with LUKS ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: <label> mount_point: /mnt/data encryption: true encryption_password: "{{ luks_password }}" For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Find the luksUUID value of the LUKS encrypted volume: View the encryption status of the volume: Verify the created LUKS encrypted volume: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory Encrypting block devices by using LUKS Ansible vault 2.13. Creating shared LVM devices using the storage RHEL system role You can use the storage RHEL system role to create shared LVM devices if you want your multiple systems to access the same storage at the same time. This can bring the following notable benefits: Resource sharing Flexibility in managing storage resources Simplification of storage management tasks Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. lvmlockd is configured on the managed node. For more information, see Configuring LVM to share SAN disks among multiple machines . Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com become: true tasks: - name: Create shared LVM device ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: vg1 disks: /dev/vdb type: lvm shared: true state: present volumes: - name: lv1 size: 4g mount_point: /opt/test1 storage_safe_mode: false storage_use_partitions: true For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 2.14. Resizing physical volumes by using the storage RHEL system role With the storage system role, you can resize LVM physical volumes after resizing the underlying storage or disks from outside of the host. 
For example, you increased the size of a virtual disk and want to use the extra space in an existing LVM. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The size of the underlying block storage has been changed. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Resize LVM PV size ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: ["sdf"] type: lvm grow_to_fill: true For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the new physical volume size: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 2.15. Creating an encrypted Stratis pool by using the storage RHEL system role To secure your data, you can create an encrypted Stratis pool with the storage RHEL system role. In addition to a passphrase, you can use Clevis and Tang or TPM protection as an encryption method. Important You can configure Stratis encryption only on the entire pool. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: luks_password: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create a new encrypted Stratis pool with Clevis and Tang ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: mypool disks: - sdd - sde type: stratis encryption: true encryption_password: "{{ luks_password }}" encryption_clevis_pin: tang encryption_tang_url: tang-server.example.com:7500 The settings specified in the example playbook include the following: encryption_password Password or passphrase used to unlock the LUKS volumes. encryption_clevis_pin Clevis method that you can use to encrypt the created pool. You can use tang and tpm2 . encryption_tang_url URL of the Tang server. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that the pool was created with Clevis and Tang configured: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory Ansible vault
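The playbooks in Sections 2.1 and 2.2 create an XFS file system on /dev/sdb but, unlike the later sections, list no separate verification step. The following is only a sketch and not part of the RHEL system role documentation: run on the managed node, it parses lsblk JSON output to confirm the file system type. The device name is the one used in the example playbooks; adjust it for your system.

# Verification sketch: confirm that the disk used by the example playbooks
# carries an XFS file system, by parsing the JSON output of lsblk.
import json
import subprocess

DEVICE = "/dev/sdb"   # disk from the example playbooks; adjust as needed

result = subprocess.run(
    ["lsblk", "--json", "-o", "NAME,FSTYPE,MOUNTPOINT", DEVICE],
    check=True, capture_output=True, text=True,
)
info = json.loads(result.stdout)["blockdevices"][0]
print(info)
if info.get("fstype") != "xfs":
    raise SystemExit(f"{DEVICE} does not carry an XFS file system")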
|
[
"--- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_user: somebody mount_group: somegroup mount_mode: 0755",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create logical volume ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - sda - sdb - sdc volumes: - name: mylv size: 2G fs_type: ext4 mount_point: /mnt/data",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'lvs myvg'",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Enable online block discard ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_options: discard",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'findmnt /mnt/data'",
"--- - hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext4 fs_label: label-name mount_point: /mnt/data",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - hosts: all roles: - rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext3 fs_label: label-name mount_point: /mnt/data mount_user: somebody mount_group: somegroup mount_mode: 0755",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Create a disk device with swap hosts: managed-node-01.example.com roles: - rhel-system-roles.storage vars: storage_volumes: - name: swap_fs type: disk disks: - /dev/sdb size: 15 GiB fs_type: swap",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create a RAID on sdd, sde, sdf, and sdg ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_volumes: - name: data type: raid disks: [sdd, sde, sdf, sdg] raid_level: raid0 raid_chunk_size: 32 KiB mount_point: /mnt/data state: present",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'mdadm --detail /dev/md/data'",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure LVM pool with RAID ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] raid_level: raid1 volumes: - name: my_volume size: \"1 GiB\" mount_point: \"/mnt/app/shared\" fs_type: xfs state: present",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'lsblk'",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure stripe size for RAID LVM volumes ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] volumes: - name: my_volume size: \"1 GiB\" mount_point: \"/mnt/app/shared\" fs_type: xfs raid_level: raid0 raid_stripe_size: \"256 KiB\" state: present",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'lvs -o+stripesize /dev/my_pool/my_volume'",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create LVM-VDO volume under volume group 'myvg' ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - /dev/sdb volumes: - name: mylv1 compression: true deduplication: true vdo_pool_size: 10 GiB size: 30 GiB mount_point: /mnt/app/shared",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'lvs -o+vdo_compression,vdo_compression_state,vdo_deduplication,vdo_index_state' LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert VDOCompression VDOCompressionState VDODeduplication VDOIndexState mylv1 myvg vwi-a-v--- 3.00t vpool0 enabled online enabled online",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"luks_password: <password>",
"--- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and configure a volume encrypted with LUKS ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: <label> mount_point: /mnt/data encryption: true encryption_password: \"{{ luks_password }}\"",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'cryptsetup luksUUID /dev/sdb' 4e4e7970-1822-470e-b55a-e91efe5d0f5c",
"ansible managed-node-01.example.com -m command -a 'cryptsetup status luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c' /dev/mapper/luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/sdb",
"ansible managed-node-01.example.com -m command -a 'cryptsetup luksDump /dev/sdb' LUKS header information Version: 2 Epoch: 3 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: 4e4e7970-1822-470e-b55a-e91efe5d0f5c Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 16777216 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes]",
"--- - name: Manage local storage hosts: managed-node-01.example.com become: true tasks: - name: Create shared LVM device ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: vg1 disks: /dev/vdb type: lvm shared: true state: present volumes: - name: lv1 size: 4g mount_point: /opt/test1 storage_safe_mode: false storage_use_partitions: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Resize LVM PV size ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: [\"sdf\"] type: lvm grow_to_fill: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'pvs' PV VG Fmt Attr PSize PFree /dev/sdf1 myvg lvm2 a-- 1,99g 1,99g",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"luks_password: <password>",
"--- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create a new encrypted Stratis pool with Clevis and Tang ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: mypool disks: - sdd - sde type: stratis encryption: true encryption_password: \"{{ luks_password }}\" encryption_clevis_pin: tang encryption_tang_url: tang-server.example.com:7500",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'sudo stratis report' \"clevis_config\": { \"thp\": \"j-G4ddvdbVfxpnUbgxlpbe3KutSKmcHttILAtAkMTNA\", \"url\": \"tang-server.example.com:7500\" }, \"clevis_pin\": \"tang\", \"in_use\": true, \"key_description\": \"blivet-mypool\","
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_file_systems/managing-local-storage-using-rhel-system-roles_managing-file-systems
|
3.5.2. Disk Drives with MBR on UEFI Systems
|
3.5.2. Disk Drives with MBR on UEFI Systems Systems with UEFI firmware require a disk with a GUID Partition Table (GPT). When installing Red Hat Enterprise Linux on a disk with a Master Boot Record (MBR; sometimes also called msdos ) label, the disk must be relabeled. This means you cannot reuse existing partitions on an MBR-partitioned disk, and all data on the disk will be lost. Make sure to back up all data on the drive before installing Red Hat Enterprise Linux. A GUID Partition Table is only required on the system's boot drive - the disk where the boot loader is installed. Other drives can be labeled with a Master Boot Record and their partition layout can be reused. There are several ways to install Red Hat Enterprise Linux on a UEFI system and use a drive which has a Master Boot Record. You can: Attach the drive to an existing Linux system and use a utility such as parted or fdisk to create a GPT label on the drive. For example, to create a GPT label on disk /dev/sdc using parted , use the following command: Warning Make sure you specify the correct drive. Relabeling a disk will destroy all data on it, and parted will not ask you for a confirmation. Perform an automated Kickstart installation, and use the clearpart and zerombr commands. If your system uses UEFI firmware, using these commands on the boot drive will relabel it with a GPT. During a manual installation in the graphical user interface, when you get to the partitioning screen, select an option other than custom partitioning (for example Use All Space ). Make sure to check the Review and modify partitioning layout check box, and continue. On the following screen, modify the automatically created layout so it suits your needs. After you finish and continue, Anaconda will use your layout and relabel the drive automatically.
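If you want to confirm which label a drive currently carries before deciding whether it must be relabeled, a quick check with parted is usually enough. This is only a sketch: /dev/sdc is a placeholder device name, and relabeling it destroys all data on that drive.

# Show the current partition table type (look for "Partition Table: msdos" or "Partition Table: gpt")
parted /dev/sdc print | grep "Partition Table"

# Relabel the drive with a GPT label (destructive, no confirmation is asked)
parted /dev/sdc mklabel gpt

# Verify the new label
parted /dev/sdc print | grep "Partition Table"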
|
[
"parted /dev/sdc mklabel gpt"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-uefi-support-drives-x86
|
9.2. Multistate Resources: Resources That Have Multiple Modes
|
9.2. Multistate Resources: Resources That Have Multiple Modes Multistate resources are a specialization of Clone resources. They allow the instances to be in one of two operating modes; these are called Master and Slave . The names of the modes do not have specific meanings, except for the limitation that when an instance is started, it must come up in the Slave state. You can create a resource as a master/slave clone with the following single command. The name of the master/slave clone will be resource_id -master . Note For Red Hat Enterprise Linux release 7.3 and earlier, use the following format to create a master/slave clone. Alternately, you can create a master/slave resource from a previously-created resource or resource group with the following command: When you use this command, you can specify a name for the master/slave clone. If you do not specify a name, the name of the master/slave clone will be resource_id -master or group_name -master . For information on resource options, see Section 6.1, "Resource Creation" . Table 9.2, "Properties of a Multistate Resource" describes the options you can specify for a multistate resource. Table 9.2. Properties of a Multistate Resource Field Description id Your name for the multistate resource priority , target-role , is-managed See Table 6.3, "Resource Meta Options" . clone-max , clone-node-max , notify , globally-unique , ordered , interleave See Table 9.1, "Resource Clone Options" . master-max How many copies of the resource can be promoted to master status; default 1. master-node-max How many copies of the resource can be promoted to master status on a single node; default 1. 9.2.1. Monitoring Multi-State Resources To add a monitoring operation for the master resource only, you can add an additional monitor operation to the resource. Note, however, that every monitor operation on a resource must have a different interval. The following example configures a monitor operation with an interval of 11 seconds on the master resource for ms_resource . This monitor operation is in addition to the default monitor operation with the default monitor interval of 10 seconds. 9.2.2. Multistate Constraints In most cases, a multistate resource will have a single copy on each active cluster node. If this is not the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently than those for regular resources. For information on resource location constraints, see Section 7.1, "Location Constraints" . You can create a colocation constraint which specifies whether the resources are master or slave resources. The following command creates a resource colocation constraint. For information on colocation constraints, see Section 7.3, "Colocation of Resources" . When configuring an ordering constraint that includes multistate resources, one of the actions that you can specify for the resources is promote , indicating that the resource be promoted from slave to master. Additionally, you can specify an action of demote , indicating that the resource be demoted from master to slave. The command for configuring an order constraint is as follows. For information on resource order constraints, see Section 7.2, "Order Constraints" . 9.2.3. Multistate Stickiness To achieve a stable allocation pattern, multistate resources are slightly sticky by default. If no value for resource-stickiness is provided, the multistate resource will use a value of 1.
Being a small value, it causes minimal disturbance to the score calculations of other resources but is enough to prevent Pacemaker from needlessly moving copies around the cluster.
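As a hedged illustration of the commands described above (the resource and node names are invented for the example, and ocf:pacemaker:Stateful is simply a convenient stateful test agent), a multistate resource could be created and monitored like this on Red Hat Enterprise Linux 7.4 or later:

# Create a multistate resource; the master/slave clone will be named my_stateful-master
pcs resource create my_stateful ocf:pacemaker:Stateful master

# Add a separate monitor operation for the Master role, using a different interval
# from the default 10-second monitor
pcs resource op add my_stateful interval=11s role=Master

# Optionally express a preference for where copies run, as with regular resources
pcs constraint location my_stateful-master prefers node1.example.com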
|
[
"pcs resource create resource_id standard:provider:type | type [ resource options ] master [ master_options ]",
"pcs resource create resource_id standard:provider:type | type [ resource options ] --master [meta master_options ]",
"pcs resource master master/slave_name resource_id|group_name [ master_options ]",
"pcs resource op add ms_resource interval=11s role=Master",
"pcs constraint colocation add [master|slave] source_resource with [master|slave] target_resource [ score ] [ options ]",
"pcs constraint order [ action ] resource_id then [ action ] resource_id [ options ]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-multistateresource-haar
|
Chapter 1. Supported upgrade paths
|
Chapter 1. Supported upgrade paths The in-place upgrade replaces the RHEL 8 operating system on your system with a RHEL 9 version. Important It is not possible to perform an in-place upgrade directly from RHEL 7 to RHEL 9. However, you can perform an in-place upgrade from RHEL 7 to RHEL 8 and then perform a second in-place upgrade to RHEL 9. For more information, see Upgrading from RHEL 7 to RHEL 8 . Currently, it is possible to perform an in-place upgrade from the following source RHEL 8 minor versions to the following target RHEL 9 minor versions: Table 1.1. Supported upgrade paths System configuration Source OS version Target OS version RHEL RHEL 8.8 RHEL 9.2 RHEL 8.10 RHEL 9.4 RHEL 8.10 RHEL 9.5 RHEL with SAP HANA RHEL 8.8 RHEL 9.2 RHEL 8.10 RHEL 9.4 For more information about supported upgrade paths, see Supported in-place upgrade paths for Red Hat Enterprise Linux and the In-place upgrade Support Policy .
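For orientation only, an upgrade along one of these paths is typically driven with the Leapp utility; the target value below is an example of one supported minor version from the table, and the repository and package preparation required beforehand is described in the upgrade procedure itself.

# Confirm the current RHEL 8 minor version
cat /etc/redhat-release

# Run the pre-upgrade assessment and then the upgrade against a supported target
leapp preupgrade --target 9.4
leapp upgrade --target 9.4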
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/upgrading_from_rhel_8_to_rhel_9/con_supported-upgrade-paths_upgrading-from-rhel-8-to-rhel-9
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_capsule_server/providing-feedback-on-red-hat-documentation_capsule
|
Chapter 3. Configuring realms
|
Chapter 3. Configuring realms Once you have an administrative account for the Admin Console, you can configure realms. A realm is a space where you manage objects, including users, applications, roles, and groups. A user belongs to and logs into a realm. One Red Hat build of Keycloak deployment can define, store, and manage as many realms as there is space for in the database. 3.1. Using the Admin Console You configure realms and perform most administrative tasks in the Red Hat build of Keycloak Admin Console. Prerequisites You need an administrator account. See Creating the first administrator . Procedure Go to the URL for the Admin Console. For example, for localhost, use this URL: http://localhost:8080/admin/ Login page Enter the username and password you created on the Welcome Page or through environment variables as per Creating the initial admin user guide. This action displays the Admin Console. Admin Console Note the menus and other options that you can use: Click the menu labeled Master to pick a realm you want to manage or to create a new one. Click the top right list to view your account or log out. Hover over or click a question mark ? icon to show a tooltip text that describes that field. The image above shows the tooltip in action. Note Export files from the Admin Console are not suitable for backups or data transfer between servers. Only boot-time exports are suitable for backups or data transfer between servers. 3.2. The master realm In the Admin Console, two types of realms exist: Master realm - This realm was created for you when you first started Red Hat build of Keycloak. It contains the administrator account you created at the first login. Use the master realm only to create and manage the realms in your system. Other realms - These realms are created by the administrator in the master realm. In these realms, administrators manage the users in your organization and the applications they need. The applications are owned by the users. Realms and applications Realms are isolated from one another and can only manage and authenticate the users that they control. Following this security model helps prevent accidental changes and follows the tradition of permitting user accounts access to only those privileges and powers necessary for the successful completion of their current task. Additional resources See Dedicated Realm Admin Consoles if you want to disable the master realm and define administrator accounts within any new realm you create. Each realm has its own dedicated Admin Console that you can log into with local accounts. 3.3. Creating a realm You create a realm to provide a management space where you can create users and give them permissions to use applications. At first login, you are typically in the master realm, the top-level realm from which you create other realms. When deciding what realms you need, consider the kind of isolation you want to have for your users and applications. For example, you might create a realm for the employees of your company and a separate realm for your customers. Your employees would log into the employee realm and only be able to visit internal company applications. Customers would log into the customer realm and only be able to interact with customer-facing apps. Procedure Click Red Hat build of Keycloak next to master realm , then click Create Realm . Add realm menu Enter a name for the realm. Click Create .
Create realm The current realm is now set to the realm you just created. You can switch between realms by clicking the realm name in the menu. 3.4. Configuring SSL for a realm Each realm has an associated SSL Mode, which defines the SSL/HTTPS requirements for interacting with the realm. Browsers and applications that interact with the realm honor the SSL/HTTPS requirements defined by the SSL Mode or they cannot interact with the server. Procedure Click Realm settings in the menu. Click the General tab. General tab Set Require SSL to one of the following SSL modes: External requests Users can interact with Red Hat build of Keycloak without SSL so long as they stick to private IPv4 addresses such as localhost , 127.0.0.1 , 10.x.x.x , 192.168.x.x , 172.16.x.x or IPv6 link-local and unique-local addresses. If you try to access Red Hat build of Keycloak without SSL from a non-private IP address, you will get an error. None Red Hat build of Keycloak does not require SSL. This choice applies only in development when you are experimenting and do not plan to support this deployment. All requests Red Hat build of Keycloak requires SSL for all IP addresses. 3.5. Configuring email for a realm Red Hat build of Keycloak sends emails to users to verify their email addresses, when they forget their passwords, or when an administrator needs to receive notifications about a server event. To enable Red Hat build of Keycloak to send emails, you provide Red Hat build of Keycloak with your SMTP server settings. Procedure Click Realm settings in the menu. Click the Email tab. Email tab Fill in the fields and toggle the switches as needed. Template From From denotes the address used for the From SMTP-Header for the emails sent. From display name From display name allows to configure a user-friendly email address aliases (optional). If not set the plain From email address will be displayed in email clients. Reply to Reply to denotes the address used for the Reply-To SMTP-Header for the mails sent (optional). If not set the plain From email address will be used. Reply to display name Reply to display name allows to configure a user-friendly email address aliases (optional). If not set the plain Reply To email address will be displayed. Envelope from Envelope from denotes the Bounce Address used for the Return-Path SMTP-Header for the mails sent (optional). Connection & Authentication Host Host denotes the SMTP server hostname used for sending emails. Port Port denotes the SMTP server port. Encryption Tick one of these checkboxes to support sending emails for recovering usernames and passwords, especially if the SMTP server is on an external network. You will most likely need to change the Port to 465, the default port for SSL/TLS. Authentication Set this switch to ON if your SMTP server requires authentication. When prompted, supply the Username and Password . The value of the Password field can refer a value from an external vault . 3.6. Configuring themes For a given realm, you can change the appearance of any UI in Red Hat build of Keycloak by using themes. Procedure Click Realm setting in the menu. Click the Themes tab. Themes tab Pick the theme you want for each UI category and click Save . Login theme Username password entry, OTP entry, new user registration, and other similar screens related to login. Account theme The console used by the user to manage his or her account. Admin console theme The skin of the Red Hat build of Keycloak Admin Console. 
Email theme Whenever Red Hat build of Keycloak has to send out an email, it uses templates defined in this theme to craft the email. Additional resources The Server Developer Guide describes how to create a new theme or modify existing ones. 3.7. Enabling internationalization Every UI screen is internationalized in Red Hat build of Keycloak. The default language is English, but you can choose which locales you want to support and what the default locale will be. Procedure Click Realm Settings in the menu. Click the Localization tab. Enable Internationalization . Select the languages you will support. Localization tab When a user logs in, that user can choose a language on the login page to use for the login screens, Account Console, and Admin Console. Additional resources The Server Developer Guide explains how you can offer additional languages. All internationalized texts which are provided by the theme can be overwritten by realm-specific texts on the Localization tab. 3.7.1. User locale selection A locale selector provider suggests the best locale based on the information available. However, it is often unknown who the user is. For this reason, the previously authenticated user's locale is remembered in a persisted cookie. The logic for selecting the locale uses the first of the following that is available: User selected - when the user has selected a locale using the drop-down locale selector User profile - when there is an authenticated user and the user has a preferred locale set Client selected - passed by the client using for example ui_locales parameter Cookie - last locale selected on the browser Accepted language - locale from Accept-Language header Realm default If none of the above, fall back to English When a user is authenticated, an action is triggered to update the locale in the persisted cookie mentioned earlier. If the user has actively switched the locale through the locale selector on the login pages, the user's locale is also updated at this point. If you want to change the logic for selecting the locale, you have an option to create a custom LocaleSelectorProvider . For details, please refer to the Server Developer Guide . 3.8. Controlling login options Red Hat build of Keycloak includes several built-in login page features. 3.8.1. Enabling forgot password If you enable Forgot password , users can reset their login credentials if they forget their passwords or lose their OTP generator. Procedure Click Realm settings in the menu. Click the Login tab. Login tab Toggle Forgot password to ON . A Forgot Password? link displays in your login pages. Forgot password link Specify Host and From in the Email tab in order for Red Hat build of Keycloak to be able to send the reset email. Click this link to bring users to a page where they can enter their username or email address and receive an email with a link to reset their credentials. Forgot password page The text sent in the email is configurable. See Server Developer Guide for more information. When users click the email link, Red Hat build of Keycloak asks them to update their password, and if they have set up an OTP generator, Red Hat build of Keycloak asks them to reconfigure the OTP generator. For security reasons, the flow forces federated users to login again after the reset credentials and keeps internal database users logged in if the same authentication session (same browser) is used. Depending on the security requirements of your organization, you can change the default behavior.
To change this behavior, perform these steps: Procedure Click Authentication in the menu. Click the Flows tab. Select the Reset Credentials flow. Reset credentials flow If you do not want to reset the OTP, set the Reset - Conditional OTP sub-flow requirement to Disabled . Send Reset Email Configuration If you want to change default behavior for the force login option, click the Send Reset Email settings icon in the flow, define an Alias , and select the best Force login after reset option for you ( true , always force re-authentication, false , keep the user logged in if the same browser was used, only-federated , default value that forces login again only for federated users). Click Authentication in the menu. Click the Required actions tab. Ensure Update Password is enabled. Required Actions 3.8.2. Enabling Remember Me A logged-in user closing their browser destroys their session, and that user must log in again. You can set Red Hat build of Keycloak to keep the user's login session open if that user clicks the Remember Me checkbox upon login. This action turns the login cookie from a session-only cookie to a persistence cookie. Procedure Click Realm settings in the menu. Click the Login tab. Toggle the Remember Me switch to On . Login tab When you save this setting, a remember me checkbox displays on the realm's login page. Remember Me 3.8.3. ACR to Level of Authentication (LoA) Mapping In the general settings of a realm, you can define which Authentication Context Class Reference (ACR) value is mapped to which Level of Authentication (LoA) . The ACR can be any value, whereas the LoA must be numeric. The acr claim can be requested in the claims or acr_values parameter sent in the OIDC request and it is also included in the access token and ID token. The mapped number is used in the authentication flow conditions. Mapping can be also specified at the client level in case that particular client needs to use different values than realm. However, a best practice is to stick to realm mappings. For further details see Step-up Authentication and the official OIDC specification . 3.8.4. Update Email Workflow (UpdateEmail) With this workflow, users will have to use an UPDATE_EMAIL action to change their own email address. The action is associated with a single email input form. If the realm has email verification disabled, this action will allow to update the email without verification. If the realm has email verification enabled, the action will send an email update action token to the new email address without changing the account email. Only the action token triggering will complete the email update. Applications are able to send their users to the email update form by leveraging UPDATE_EMAIL as an AIA (Application Initiated Action) . Note UpdateEmail is Technology Preview and is not fully supported. This feature is disabled by default. To enable start the server with --features=preview or --features=update-email Note If you enable this feature and you are migrating from a version, enable the Update Email required action in your realms. Otherwise, users cannot update their email addresses. 3.9. Configuring realm keys The authentication protocols that are used by Red Hat build of Keycloak require cryptographic signatures and sometimes encryption. Red Hat build of Keycloak uses asymmetric key pairs, a private and public key, to accomplish this. Red Hat build of Keycloak has a single active key pair at a time, but can have several passive keys as well. 
The active key pair is used to create new signatures, while the passive key pair can be used to verify signatures. This makes it possible to regularly rotate the keys without any downtime or interruption to users. When a realm is created, a key pair and a self-signed certificate is automatically generated. Procedure Click Realm settings in the menu. Click Keys . Select Passive keys from the filter dropdown to view passive keys. Select Disabled keys from the filter dropdown to view disabled keys. A key pair can have the status Active , but still not be selected as the currently active key pair for the realm. The selected active pair which is used for signatures is selected based on the first key provider sorted by priority that is able to provide an active key pair. 3.9.1. Rotating keys We recommend that you regularly rotate keys. Start by creating new keys with a higher priority than the existing active keys. You can instead create new keys with the same priority and making the keys passive. Once new keys are available, all new tokens and cookies will be signed with the new keys. When a user authenticates to an application, the SSO cookie is updated with the new signature. When OpenID Connect tokens are refreshed new tokens are signed with the new keys. Eventually, all cookies and tokens use the new keys and after a while the old keys can be removed. The frequency of deleting old keys is a tradeoff between security and making sure all cookies and tokens are updated. Consider creating new keys every three to six months and deleting old keys one to two months after you create the new keys. If a user was inactive in the period between the new keys being added and the old keys being removed, that user will have to re-authenticate. Rotating keys also applies to offline tokens. To make sure they are updated, the applications need to refresh the tokens before the old keys are removed. 3.9.2. Adding a generated key pair Use this procedure to generate a key pair including a self-signed certificate. Procedure Select the realm in the Admin Console. Click Realm settings in the menu. Click the Keys tab. Click the Providers tab. Click Add provider and select rsa-generated . Enter a number in the Priority field. This number determines if the new key pair becomes the active key pair. The highest number makes the key pair active. Select a value for AES Key size . Click Save . Changing the priority for a provider will not cause the keys to be re-generated, but if you want to change the keysize you can edit the provider and new keys will be generated. 3.9.3. Rotating keys by extracting a certificate You can rotate keys by extracting a certificate from an RSA generated key pair and using that certificate in a new keystore. Prerequisites A generated key pair Procedure Select the realm in the Admin Console. Click Realm Settings . Click the Keys tab. A list of Active keys appears. On a row with an RSA key, click Certificate under Public Keys . The certificate appears in text form. Save the certificate to a file and enclose it in these lines. ----Begin Certificate---- <Output> ----End Certificate---- Use the keytool command to convert the key file to PEM Format. Remove the current RSA public key certificate from the keystore. keytool -delete -keystore <keystore>.jks -storepass <password> -alias <key> Import the new certificate into the keystore keytool -importcert -file domain.crt -keystore <keystore>.jks -storepass <password> -alias <key> Rebuild the application. mvn clean install wildfly:deploy 3.9.4. 
Adding an existing key pair and certificate To add a key pair and certificate obtained elsewhere select Providers and choose rsa from the dropdown. You can change the priority to make sure the new key pair becomes the active key pair. Prerequisites A private key file. The file must be PEM formatted. Procedure Select the realm in the Admin Console. Click Realm settings . Click the Keys tab. Click the Providers tab. Click Add provider and select rsa . Enter a number in the Priority field. This number determines if the new key pair becomes the active key pair. Click Browse... beside Private RSA Key to upload the private key file. If you have a signed certificate for your private key, click Browse... beside X509 Certificate to upload the certificate file. Red Hat build of Keycloak automatically generates a self-signed certificate if you do not upload a certificate. Click Save . 3.9.5. Loading keys from a Java Keystore To add a key pair and certificate stored in a Java Keystore file on the host select Providers and choose java-keystore from the dropdown. You can change the priority to make sure the new key pair becomes the active key pair. For the associated certificate chain to be loaded it must be imported to the Java Keystore file with the same Key Alias used to load the key pair. Procedure Select the realm in the Admin Console. Click Realm settings in the menu. Click the Keys tab. Click the Providers tab. Click Add provider and select java-keystore . Enter a number in the Priority field. This number determines if the new key pair becomes the active key pair. Enter the desired Algorithm . Note that the algorithm should match the key type (for example RS256 requires a RSA private key, ES256 a EC private key or AES an AES secret key). Enter a value for Keystore . Path to the keystore file. Enter the Keystore Password . The option can refer a value from an external vault . Enter a value for Keystore Type ( JKS , PKCS12 or BCFKS ). Enter a value for the Key Alias to load from the keystore. Enter the Key Password . The option can refer a value from an external vault . Enter a value for Key Use ( sig for signing or enc for encryption). Note that the use should match the algorithm type (for example RS256 is sig but RSA-OAEP is enc ) Click Save . Warning Not all the keystore types support all types of keys. For example, JKS in all modes and PKCS12 in fips mode ( BCFIPS provider) cannot store secret key entries. 3.9.6. Making keys passive Procedure Select the realm in the Admin Console. Click Realm settings in the menu. Click the Keys tab. Click the Providers tab. Click the provider of the key you want to make passive. Toggle Active to Off . Click Save . 3.9.7. Disabling keys Procedure Select the realm in the Admin Console. Click Realm settings in the menu. Click the Keys tab. Click the Providers tab. Click the provider of the key you want to make passive. Toggle Enabled to Off . Click Save . 3.9.8. Compromised keys Red Hat build of Keycloak has the signing keys stored just locally and they are never shared with the client applications, users or other entities. However, if you think that your realm signing key was compromised, you should first generate new key pair as described above and then immediately remove the compromised key pair. Alternatively, you can delete the provider from the Providers table. Procedure Click Clients in the menu. Click security-admin-console . Scroll down to the Access settings section. Fill in the Admin URL field. Click the Advanced tab. 
Click Set to now in the Revocation section. Click Push . Pushing the not-before policy ensures that client applications do not accept the existing tokens signed by the compromised key. The client application is forced to download new key pairs from Red Hat build of Keycloak also so the tokens signed by the compromised key will be invalid. Note REST and confidential clients must set Admin URL so Red Hat build of Keycloak can send clients the pushed not-before policy request.
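Most realm settings shown in this chapter can also be applied with the Admin CLI instead of the Admin Console. The following sketch assumes a local server, an existing admin account, an installation under /opt/keycloak, and placeholder values for the realm name and SMTP details; adjust all of these to your environment.

# Authenticate the Admin CLI against the master realm
/opt/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin

# Create a realm and require SSL for external requests only
/opt/keycloak/bin/kcadm.sh create realms -s realm=myrealm -s enabled=true
/opt/keycloak/bin/kcadm.sh update realms/myrealm -s sslRequired=external

# Configure the SMTP host and From address used for realm emails
/opt/keycloak/bin/kcadm.sh update realms/myrealm -s smtpServer.host=smtp.example.com -s smtpServer.from=no-reply@example.com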
|
[
"----Begin Certificate---- <Output> ----End Certificate----",
"keytool -delete -keystore <keystore>.jks -storepass <password> -alias <key>",
"keytool -importcert -file domain.crt -keystore <keystore>.jks -storepass <password> -alias <key>",
"mvn clean install wildfly:deploy"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/configuring-realms
|
Chapter 15. GenericKafkaListenerConfigurationBootstrap schema reference
|
Chapter 15. GenericKafkaListenerConfigurationBootstrap schema reference Used in: GenericKafkaListenerConfiguration Full list of GenericKafkaListenerConfigurationBootstrap schema properties Configures bootstrap service settings for listeners. Example configuration for the host , nodePort , loadBalancerIP , and annotations properties is shown in the GenericKafkaListenerConfiguration schema section. 15.1. Specifying alternative bootstrap addresses To specify alternative names for the bootstrap address, use the alternativeNames property. This property is applicable to all types of listeners. The names are added to the broker certificates and can be used for TLS hostname verification. Example route listener configuration with additional bootstrap addresses listeners: #... - name: external1 port: 9094 type: route tls: true configuration: bootstrap: alternativeNames: - example.hostname1 - example.hostname2 # ... 15.2. GenericKafkaListenerConfigurationBootstrap schema properties Property Property type Description alternativeNames string array Additional alternative names for the bootstrap service. The alternative names will be added to the list of subject alternative names of the TLS certificates. host string Specifies the hostname used for the bootstrap resource. For route (optional) or ingress (required) listeners only. Ensure the hostname resolves to the Ingress endpoints; no validation is performed by Streams for Apache Kafka. nodePort integer Node port for the bootstrap service. For nodeport listeners only. loadBalancerIP string The loadbalancer is requested with the IP address specified in this property. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This property is ignored if the cloud provider does not support the feature. For loadbalancer listeners only. annotations map Annotations added to Ingress , Route , or Service resources. You can use this property to configure DNS providers such as External DNS. For loadbalancer , nodeport , route , or ingress listeners only. labels map Labels added to Ingress , Route , or Service resources. For loadbalancer , nodeport , route , or ingress listeners only. externalIPs string array External IPs associated to the nodeport service. These IPs are used by clients external to the OpenShift cluster to access the Kafka brokers. This property is helpful when nodeport without externalIP is not sufficient. For example on bare-metal OpenShift clusters that do not support Loadbalancer service types. For nodeport listeners only.
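To see how the bootstrap configuration surfaces on a running cluster, you can inspect the resources and status that the Cluster Operator generates for the listener. The cluster name my-cluster, the kafka namespace, and the <cluster>-kafka-<listener>-bootstrap service naming pattern below are assumptions based on common deployments; verify the actual names in your environment.

# Bootstrap service created for the external listener (name pattern is an assumption)
oc get service my-cluster-kafka-external1-bootstrap -n kafka

# Advertised bootstrap addresses reported in the Kafka resource status
oc get kafka my-cluster -n kafka -o jsonpath='{.status.listeners}'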
|
[
"listeners: # - name: external1 port: 9094 type: route tls: true configuration: bootstrap: alternativeNames: - example.hostname1 - example.hostname2"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-GenericKafkaListenerConfigurationBootstrap-reference
|
23.5. Welcome to Red Hat Enterprise Linux
|
23.5. Welcome to Red Hat Enterprise Linux The Welcome screen does not prompt you for any input. Figure 23.3. The Welcome screen Click on the button to continue.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-welcome-s390
|
15.2. Configure Lock Striping (Remote Client-Server Mode)
|
15.2. Configure Lock Striping (Remote Client-Server Mode) Lock striping in Red Hat JBoss Data Grid's Remote Client-Server mode is enabled by setting the striping element to true . Example 15.1. Lock Striping (Remote Client-Server Mode) Note The default isolation mode for the Remote Client-Server mode configuration is READ_COMMITTED . If the isolation attribute is included to explicitly specify an isolation mode, it is ignored, a warning is thrown, and the default value is used instead. The locking element uses the following attributes: The acquire-timeout attribute specifies the maximum time to attempt a lock acquisition. The default value for this attribute is 10000 milliseconds. The concurrency-level attribute specifies the concurrency level for lock containers. Adjust this value according to the number of concurrent threads interacting with JBoss Data Grid. The default value for this attribute is 32 . The striping attribute specifies whether a shared pool of locks is maintained for all entries that require locking ( true ). If set to false , a lock is created for each entry. Lock striping controls the memory footprint but can reduce concurrency in the system. The default value for this attribute is false .
|
[
"<locking acquire-timeout=\"20000\" concurrency-level=\"500\" striping=\"true\" />"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/configure_lock_striping
|
Chapter 11. Customizing the Developer Portal layout
|
Chapter 11. Customizing the Developer Portal layout You can customize the look and feel of the entire Developer Portal to match your own branding. A standard CSS stylesheet is available to provide an easy starting point for your customizations. To create layout templates, use the code for Main layout as your starting point. In this tutorial, you'll add your own CSS customizations to your Developer Portal and reload it to put your new styling changes live. 11.1. Creating a new CSS file There is a default stylesheet, default.css . This is quite large and complex, so rather than extend it, it is better to create your own stylesheet for any of your own customizations to overwrite the defaults. You create a new stylesheet the same way you create a page. Remember to choose an appropriate MIME content type in the advanced page settings. Important Make sure that the selected layout is blank. Otherwise the page layout HTML will obscure the CSS rules. 11.2. Linking the stylesheet into your page layout Add the link to your custom CSS in each of your layout templates (or in a partial if you have a common HEAD section) after the link to bootstrap.css. For example: Now enjoy the beauty of your own unique branding! 11.3. Defining page layout templates The general idea is to define a separate layout for each of the different page styles in your portal. There is one standard layout called Main layout when you start. Do not make any changes to this layout until you are an expert at using the Developer Portal because this layout is used by all system-generated pages. Typically, you want a unique style for the home page of your portal. The Main layout template is a starting point for your customizations. To create a page layout template: Open Main layout and copy its code to the clipboard. Create a new layout, give it a title, a system name, and select Liquid enabled . Paste the Main layout code into your new layout. Remove the sidebar menu by deleting this line from your new layout: Customize the code to create your layout template.
|
[
"<link rel=\"stylesheet\" href=\"/stylesheets/custom.css\">",
"{% include 'submenu'%}"
] |
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/creating_the_developer_portal/change-css
|
Chapter 6. Recommended minimum hardware requirements for the Red Hat Ceph Storage Dashboard
|
Chapter 6. Recommended minimum hardware requirements for the Red Hat Ceph Storage Dashboard The Red Hat Ceph Storage Dashboard has minimum hardware requirements. Minimum requirements 4 core processor at 2.5 GHz or higher 8 GB RAM 50 GB hard disk drive 1 Gigabit Ethernet network interface Additional Resources For more information, see High-level monitoring of a Ceph storage cluster in the Administration Guide .
| null |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/hardware_guide/recommended-minimum-hardware-requirements-for-the-red-hat-ceph-storage-dashboard_hw
|
Preface
|
Preface Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security and bug fix errata. The Red Hat Enterprise Linux 6.3 Release Notes documents the major changes made to the Red Hat Enterprise Linux 6 operating system and its accompanying applications for this minor release. Detailed notes on changes (that is, bugs fixed, enhancements added, and known issues found) in this minor release are available in the Technical Notes . The Technical Notes document also contains a complete list of all currently available Technology Previews along with packages that provide them. Important The online Red Hat Enterprise Linux 6.3 Release Notes , which are located online here , are to be considered the definitive, up-to-date version. Customers with questions about the release are advised to consult the online Release and Technical Notes for their version of Red Hat Enterprise Linux. Should you require information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/ .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_release_notes/pref-release_notes-preface
|
Service Mesh
|
Service Mesh OpenShift Container Platform 4.9 Service Mesh installation, usage, and release notes Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/service_mesh/index
|
Chapter 6. PriorityClass [scheduling.k8s.io/v1]
|
Chapter 6. PriorityClass [scheduling.k8s.io/v1] Description PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer. Type object Required value 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources description string description is an arbitrary string that usually provides guidelines on when this priority class should be used. globalDefault boolean globalDefault specifies whether this PriorityClass should be considered as the default priority for pods that do not have any priority class. Only one PriorityClass can be marked as globalDefault . However, if more than one PriorityClasses exists with their globalDefault field set to true, the smallest value of such global default PriorityClasses will be used as the default priority. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata preemptionPolicy string preemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. Possible enum values: - "Never" means that pod never preempts other pods with lower priority. - "PreemptLowerPriority" means that pod can preempt other pods with lower priority. value integer value represents the integer value of this priority class. This is the actual priority that pods receive when they have the name of this class in their pod spec. 6.2. API endpoints The following API endpoints are available: /apis/scheduling.k8s.io/v1/priorityclasses DELETE : delete collection of PriorityClass GET : list or watch objects of kind PriorityClass POST : create a PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses GET : watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. /apis/scheduling.k8s.io/v1/priorityclasses/{name} DELETE : delete a PriorityClass GET : read the specified PriorityClass PATCH : partially update the specified PriorityClass PUT : replace the specified PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} GET : watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/scheduling.k8s.io/v1/priorityclasses HTTP method DELETE Description delete collection of PriorityClass Table 6.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.2. 
HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PriorityClass Table 6.3. HTTP responses HTTP code Response body 200 - OK PriorityClassList schema 401 - Unauthorized Empty HTTP method POST Description create a PriorityClass Table 6.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.5. Body parameters Parameter Type Description body PriorityClass schema Table 6.6. HTTP responses HTTP code Response body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 202 - Accepted PriorityClass schema 401 - Unauthorized Empty 6.2.2. /apis/scheduling.k8s.io/v1/watch/priorityclasses HTTP method GET Description watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. Table 6.7. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/scheduling.k8s.io/v1/priorityclasses/{name} Table 6.8. Global path parameters Parameter Type Description name string name of the PriorityClass HTTP method DELETE Description delete a PriorityClass Table 6.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.10. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PriorityClass Table 6.11. HTTP responses HTTP code Response body 200 - OK PriorityClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PriorityClass Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. HTTP responses HTTP code Response body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PriorityClass Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.15. Body parameters Parameter Type Description body PriorityClass schema Table 6.16. HTTP responses HTTP code Response body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty 6.2.4. /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} Table 6.17. Global path parameters Parameter Type Description name string name of the PriorityClass HTTP method GET Description watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
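A minimal sketch of exercising this API from the command line follows; the class name high-priority, its value, and the description are arbitrary examples rather than values required by the schema.

# Create a PriorityClass using the fields described above (value is the only required field)
oc apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
preemptionPolicy: PreemptLowerPriority
description: "Example class for important workloads."
EOF

# List and inspect priority classes (the resource is cluster-scoped)
oc get priorityclasses.scheduling.k8s.io
oc describe priorityclass high-priority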
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/schedule_and_quota_apis/priorityclass-scheduling-k8s-io-v1
|
Chapter 2. Preparing to deploy multiple OpenShift Data Foundation storage clusters
|
Chapter 2. Preparing to deploy multiple OpenShift Data Foundation storage clusters Before you begin the deployment of OpenShift Data Foundation using dynamic, local, or external storage, ensure that your resource requirements are met. See the Resource requirements section in the Planning guide. Things you should remember before installing multiple OpenShift Data Foundation storage clusters: openshift-storage and openshift-storage-extended are the exclusively supported namespaces. Internal storage cluster is restricted to the OpenShift Data Foundation operator namespace. External storage cluster is permissible in both operator and non-operator namespaces. Multiple storage clusters are not supported in the same namespace. Hence, the external storage system will not be visible under the OpenShift Data Foundation operator page as the operator is under the openshift-storage namespace and the external storage system is not. Customers running external storage clusters in the operator namespace cannot utilize multiple storage clusters. Multicloud Object Gateway is supported solely within the operator namespace. It is ignored in other namespaces. RADOS Gateway (RGW) can be in either the operator namespace, a non-operator namespace, or both. Network File System (NFS) is enabled as long as it is enabled for at least one of the clusters. Topology is enabled as long as it is enabled for at least one of the clusters. Topology domain labels are set as long as the internal cluster is present. The Topology view of the cluster is only supported for OpenShift Data Foundation internal mode deployments. Different multus settings are not supported for multiple storage clusters.
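After deployment, a quick way to confirm where each storage cluster actually lives is to query the StorageCluster resources in both supported namespaces; this is only a sketch, and the resource names returned depend on how the clusters were created.

# Internal storage cluster in the operator namespace
oc get storagecluster -n openshift-storage

# External storage cluster deployed in the extended namespace, if any
oc get storagecluster -n openshift-storage-extended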
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_multiple_openshift_data_foundation_storage_clusters/preparing-to-deploy-multiple-odf-storage-clusters_rhodf
|
Observability overview
|
Observability overview OpenShift Container Platform 4.13 Contains information about observability for OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/observability_overview/index
|
Chapter 23. Apache CXF Binding IDs
|
Chapter 23. Apache CXF Binding IDs Table of Binding IDs Table 23.1. Binding IDs for Message Bindings Binding ID CORBA http://cxf.apache.org/bindings/corba HTTP/REST http://apache.org/cxf/binding/http SOAP 1.1 http://schemas.xmlsoap.org/wsdl/soap/http SOAP 1.1 w/ MTOM http://schemas.xmlsoap.org/wsdl/soap/http?mtom=true SOAP 1.2 http://www.w3.org/2003/05/soap/bindings/HTTP/ SOAP 1.2 w/ MTOM http://www.w3.org/2003/05/soap/bindings/HTTP/?mtom=true XML http://cxf.apache.org/bindings/xformat
| null |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/cxfdeploybindingappx
|
Chapter 2. Security content automation protocol
|
Chapter 2. Security content automation protocol Satellite uses the Security Content Automation Protocol (SCAP) standard to define security policies. SCAP is a framework of several specifications based on XML, such as checklists described in the Extensible Checklist Configuration Description Format (XCCDF) and vulnerabilities described in the Open Vulnerability and Assessment Language (OVAL). These specifications are encapsulated as data stream files. Checklist items in XCCDF, also known as rules , express the desired configuration of a system item. For example, a rule may specify that no one can log in to a host over SSH using the root user account. Rules can be grouped into one or more XCCDF profiles , which allows multiple profiles to share a rule. The OpenSCAP scanner tool evaluates system items on a host against the rules and generates a report in the Asset Reporting Format (ARF), which is then returned to Satellite for monitoring and analysis. Table 2.1. Specifications in the SCAP framework 1.3 supported by the OpenSCAP scanner Title Description Version SCAP Security Content Automation Protocol 1.3 XCCDF Extensible Configuration Checklist Description Format 1.2 OVAL Open Vulnerability and Assessment Language 5.11 - Asset Identification 1.1 ARF Asset Reporting Format 1.1 CCE Common Configuration Enumeration 5.0 CPE Common Platform Enumeration 2.3 CVE Common Vulnerabilities and Exposures 2.0 CVSS Common Vulnerability Scoring System 2.0 Additional resources For more information about SCAP, see the OpenSCAP project .
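For reference, this is roughly how the OpenSCAP scanner evaluates a host against an XCCDF profile from a SCAP data stream and produces the ARF report that Satellite collects. The data stream path and profile ID come from the scap-security-guide package and are assumptions that vary by operating system version and policy.

# List the XCCDF profiles available in a SCAP source data stream
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

# Evaluate the host against one profile and write the results in Asset Reporting Format (ARF)
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_cis --results-arf /tmp/arf.xml /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml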
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_security_compliance/security_content_automation_protocol_security-compliance
|
Appendix A. List of Bugzillas by Component
|
Appendix A. List of Bugzillas by Component Table A.1. List of Bugzillas by Component Component Release Notes Technical Notes New Features Known Issues Notable Bug Fixes bind BZ# 1452639 binutils BZ# 1427285 , BZ# 1476412 clufter BZ# 1526494 freerdp BZ# 1347920 gcc BZ# 1535656 BZ# 1104812 gcc-libraries BZ# 1465568 git BZ# 1430723 glib2 BZ# 1154183 glibc BZ# 1437147 gmp BZ# 1430873 grub BZ# 1227194 , BZ# 1573121 , BZ# 1598553 hwdata BZ# 1489294 initscripts BZ# 1440888 BZ# 1436061 , BZ# 1518429 iproute BZ# 1476664 iptables BZ# 1459673 BZ# 1210563 kernel BZ# 1073220 , BZ# 1544565 BZ# 1146727 , BZ# 1212959 , BZ# 1274139 , BZ# 1427036 , BZ# 1445919 , BZ# 1459263 , BZ# 1463754 , BZ# 1488822 , BZ# 1492220 , BZ# 1496105 , BZ# 1535024 libica BZ# 1490894 other BZ# 1497859 , BZ# 1588352 pacemaker BZ# 1427643 , BZ# 1513199 preupgrade-assistant-el6toel7 BZ# 1366671 BZ# 1388967 selinux-policy BZ# 1558428 subscription-manager BZ# 1581359 systemtap BZ# 1525651
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/appe-list-of-bugzillas-by-component
|
9.7. Configuring Startup Order for Resource Dependencies not Managed by Pacemaker (Red Hat Enterprise Linux 7.4 and later)
|
9.7. Configuring Startup Order for Resource Dependencies not Managed by Pacemaker (Red Hat Enterprise Linux 7.4 and later) It is possible for a cluster to include resources with dependencies that are not themselves managed by the cluster. In this case, you must ensure that those dependencies are started before Pacemaker is started and stopped after Pacemaker is stopped. As of Red Hat Enterprise Linux 7.4, you can configure your startup order to account for this situation by means of the systemd resource-agents-deps target. You can create a systemd drop-in unit for this target and Pacemaker will order itself appropriately relative to this target. For example, if a cluster includes a resource that depends on the external service foo that is not managed by the cluster, you can create the drop-in unit /etc/systemd/system/resource-agents-deps.target.d/foo.conf that contains the following: After creating a drop-in unit, run the systemctl daemon-reload command. A cluster dependency specified in this way can be something other than a service. For example, you may have a dependency on mounting a file system at /srv, in which case you would create a systemd unit file srv.mount for it according to the systemd documentation, then create a drop-in unit as described here, using srv.mount in the .conf file instead of foo.service, so that Pacemaker starts only after the disk is mounted.
|
[
"[Unit] Requires=foo.service After=foo.service"
] |
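A minimal sketch of the /srv variant described above, assuming the mount is defined by a systemd mount unit named srv.mount on the cluster nodes; the drop-in file name srv.conf is illustrative.

    # Create a drop-in for resource-agents-deps.target so Pacemaker orders
    # itself after the /srv mount (srv.mount) rather than a service.
    mkdir -p /etc/systemd/system/resource-agents-deps.target.d
    cat > /etc/systemd/system/resource-agents-deps.target.d/srv.conf <<'EOF'
    [Unit]
    Requires=srv.mount
    After=srv.mount
    EOF
    # Reload systemd so the new drop-in unit takes effect.
    systemctl daemon-reload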
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-nonpacemakerstartup-haar
|
Chapter 46. Red Hat Enterprise Linux System Roles Powered by Ansible
|
Chapter 46. Red Hat Enterprise Linux System Roles Powered by Ansible The postfix role of Red Hat Enterprise Linux System Roles as a Technology Preview Red Hat Enterprise Linux System Roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. Since Red Hat Enterprise Linux 7.4, the Red Hat Enterprise Linux System Roles packages have been distributed through the Extras channel. For details regarding Red Hat Enterprise Linux System Roles, see https://access.redhat.com/articles/3050101 . Red Hat Enterprise Linux System Roles currently consists of five roles: selinux kdump network timesync postfix The postfix role has been available as a Technology Preview since Red Hat Enterprise Linux 7.4. The remaining roles have been fully supported since Red Hat Enterprise Linux 7.6. (BZ#1439896)
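As a hedged sketch of how such a role is typically consumed from an Ansible playbook, the example below applies the Technology Preview postfix role to a host group; the role name rhel-system-roles.postfix and the postfix_conf variable are assumptions about the packaged role, not details stated in this chapter.

    # Write a minimal playbook that applies the postfix role to a host group
    # (hypothetical role name and variable; adjust to the installed package).
    cat > postfix-relay.yml <<'EOF'
    - hosts: mailservers
      roles:
        - role: rhel-system-roles.postfix
          vars:
            postfix_conf:
              relayhost: relay.example.com
    EOF
    # Run the playbook against an existing inventory file.
    ansible-playbook -i inventory postfix-relay.yml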
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/technology_previews_red_hat_enterprise_linux_system_roles_powered_by_ansible
|
Monitoring Ceph with Datadog Guide
|
Monitoring Ceph with Datadog Guide Red Hat Ceph Storage 5 Guide on Monitoring Ceph with Datadog Red Hat Ceph Storage Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/monitoring_ceph_with_datadog_guide/index
|
8.2. Supported Active Directory Versions
|
8.2. Supported Active Directory Versions Windows Synchronization and the Password Sync Service are supported on Windows 2008 R2 and Windows 2012 R2 on both 32-bit and 64-bit platforms.
| null |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/supported-ad
|
Logging
|
Logging OpenShift Container Platform 4.15 Configuring and using logging in OpenShift Container Platform Red Hat OpenShift Documentation Team
|
[
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <name> spec: outputs: - name: <output_name> type: <output_type> <output_type>: tuning: deliveryMode: AtMostOnce",
"oc create secret generic logging-loki-s3 --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\" -n openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2022-06-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack",
"oc explain clusterlogforwarders.observability.openshift.io.spec.outputs",
"oc explain lokistacks.loki.grafana.com oc explain lokistacks.loki.grafana.com.spec oc explain lokistacks.loki.grafana.com.spec.storage oc explain lokistacks.loki.grafana.com.spec.storage.schemas",
"oc explain lokistacks.loki.grafana.com.spec.size",
"oc explain lokistacks.spec.template.distributor.replicas",
"GROUP: loki.grafana.com KIND: LokiStack VERSION: v1 FIELD: replicas <integer> DESCRIPTION: Replicas defines the number of replica pods of the component.",
"oc -n openshift-logging patch clusterlogging/instance -p '{\"spec\":{\"managementState\": \"Unmanaged\"}}' --type=merge",
"oc -n openshift-logging patch elasticsearch/elasticsearch -p '{\"metadata\":{\"ownerReferences\": []}}' --type=merge",
"oc -n openshift-logging patch kibana/kibana -p '{\"metadata\":{\"ownerReferences\": []}}' --type=merge",
"oc -n openshift-logging patch clusterlogging/instance -p '{\"spec\":{\"managementState\": \"Managed\"}}' --type=merge",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" collection: resources: limits: {} requests: {} nodeSelector: {} tolerations: {}",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: managementState: Managed collector: resources: limits: {} requests: {} nodeSelector: {} tolerations: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: namespaces: - foo - bar includes: - namespace: my-important container: main excludes: - container: too-verbose",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: includes: - namespace: foo - namespace: bar - namespace: my-important container: main excludes: - container: too-verbose",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: an-http receiver: http: port: 8443 format: kubeAPIAudit - name: a-syslog receiver: type: syslog syslog: port: 9442",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: an-http type: receiver receiver: type: http port: 8443 http: format: kubeAPIAudit - name: a-syslog type: receiver receiver: type: syslog port: 9442",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: elasticsearch",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: serviceAccount: name: <service_account_name> managementState: Managed outputs: - name: audit-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: audit-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: app-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: app-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: infra-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: infra-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector pipelines: - name: app inputRefs: - application outputRefs: - app-elasticsearch - name: audit inputRefs: - audit outputRefs: - audit-elasticsearch - name: infra inputRefs: - infrastructure outputRefs: - infra-elasticsearch",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: lokistack lokistack: name: lokistack-dev",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: lokistack-dev namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - outputRefs: - default-lokistack - inputRefs: - application - infrastructure",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder spec: pipelines: - name: application-logs parse: json labels: foo: bar detectMultilineErrors: true",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: filters: - name: detectexception type: detectMultilineException - name: parse-json type: parse - name: labels type: openshiftLabels openshiftLabels: foo: bar pipelines: - name: application-logs filterRefs: - detectexception - labels - parse-json",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder status: conditions: - lastTransitionTime: \"2024-09-13T03:28:44Z\" message: 'permitted to collect log types: [application]' reason: ClusterRolesExist status: \"True\" type: observability.openshift.io/Authorized - lastTransitionTime: \"2024-09-13T12:16:45Z\" message: \"\" reason: ValidationSuccess status: \"True\" type: observability.openshift.io/Valid - lastTransitionTime: \"2024-09-13T12:16:45Z\" message: \"\" reason: ReconciliationComplete status: \"True\" type: Ready filterConditions: - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: filter \"detectexception\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidFilter-detectexception - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: filter \"parse-json\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidFilter-parse-json inputConditions: - lastTransitionTime: \"2024-09-13T12:23:03Z\" message: input \"application1\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidInput-application1 outputConditions: - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: output \"default-lokistack-application1\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidOutput-default-lokistack-application1 pipelineConditions: - lastTransitionTime: \"2024-09-13T03:28:44Z\" message: pipeline \"default-before\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidPipeline-default-before",
"oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 Annotations <1> rules: Specifies the permissions granted by this ClusterRole. <2> apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. <3> loki.grafana.com: The API group for managing Loki-related resources. <4> resources: The resource type that the ClusterRole grants permission to interact with. <5> application: Refers to the application resources within the Loki logging system. <6> resourceNames: Specifies the names of resources that this role can manage. <7> logs: Refers to the log resources that can be created. <8> verbs: The actions allowed on the resources. <9> create: Grants permission to create new logs in the Loki system.",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application",
"oc apply -f <filename>.yaml",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"oc get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"oc delete pvc <pvc_name> -n openshift-logging",
"oc delete pod <pod_name> -n openshift-logging",
"oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: authentication: token: from: serviceAccount target: name: logging-loki namespace: openshift-logging tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 spec: serviceAccount: name: collector outputs: - name: loki-otlp type: lokiStack 2 lokiStack: target: name: logging-loki namespace: openshift-logging dataModel: Otel 3 authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: my-pipeline inputRefs: - application - infrastructure outputRefs: - loki-otlp",
"oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 Annotations <1> rules: Specifies the permissions granted by this ClusterRole. <2> apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. <3> loki.grafana.com: The API group for managing Loki-related resources. <4> resources: The resource type that the ClusterRole grants permission to interact with. <5> application: Refers to the application resources within the Loki logging system. <6> resourceNames: Specifies the names of resources that this role can manage. <7> logs: Refers to the log resources that can be created. <8> verbs: The actions allowed on the resources. <9> create: Grants permission to create new logs in the Loki system.",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 name: clf-otlp spec: serviceAccount: name: <service_account_name> outputs: - name: otlp type: otlp otlp: tuning: compression: gzip deliveryMode: AtLeastOnce maxRetryDuration: 20 maxWrite: 10M minRetryDuration: 5 url: <otlp_url> 2 pipelines: - inputRefs: - application - infrastructure - audit name: otlp-logs outputRefs: - otlp",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application",
"oc apply -f <filename>.yaml",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"oc get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"oc delete pvc <pvc_name> -n openshift-logging",
"oc delete pod <pod_name> -n openshift-logging",
"oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"spec: storage: schemas: - version: v13 effectiveDate: 2024-10-25",
"spec: limits: global: otlp: {} 1 tenants: application: otlp: {} 2",
"spec: limits: global: otlp: streamLabels: resourceAttributes: - name: \"k8s.namespace.name\" - name: \"k8s.pod.name\" - name: \"k8s.container.name\"",
"spec: limits: global: otlp: streamLabels: # structuredMetadata: resourceAttributes: - name: \"process.command_line\" - name: \"k8s\\\\.pod\\\\.labels\\\\..+\" regex: true scopeAttributes: - name: \"service.name\" logAttributes: - name: \"http.route\"",
"spec: tenants: mode: openshift-logging openshift: otlp: disableRecommendedAttributes: true 1",
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"oc project openshift-logging",
"oc get clusterlogging instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: collector-2rhqp: ip-10-0-169-13.ec2.internal collector-6fgjh: ip-10-0-165-244.ec2.internal collector-6l2ff: ip-10-0-128-218.ec2.internal collector-54nx5: ip-10-0-139-30.ec2.internal collector-flpnn: ip-10-0-147-228.ec2.internal collector-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - collector-2rhqp - collector-54nx5 - collector-6fgjh - collector-6l2ff - collector-flpnn - collector-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {}",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {}",
"Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready:",
"Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable",
"Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready:",
"oc project openshift-logging",
"oc describe deployment cluster-logging-operator",
"Name: cluster-logging-operator . Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1----",
"oc get replicaset",
"NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m",
"oc describe replicaset cluster-logging-operator-574b8987df",
"Name: cluster-logging-operator-574b8987df . Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv----",
"oc delete pod --selector logging-infra=collector",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/nodes?v",
"oc -n openshift-logging get pods -l component=elasticsearch",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/master?v",
"oc logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging",
"oc logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/recovery?active_only=true",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- health | grep number_of_pending_tasks",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/settings?pretty",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/settings?pretty -X PUT -d '{\"persistent\": {\"cluster.routing.allocation.enable\":\"all\"}}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/indices?v",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.allocation.max_retries\":10}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_search/scroll/_all -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.unassigned.node_left.delayed_timeout\":\"10m\"}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cat/indices?v",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_red_index_name> -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_nodes/stats?pretty",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc -n openshift-logging get po -o wide",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/health?pretty | grep unassigned_shards",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"oc -n openshift-logging get po -o wide",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_cluster/health?pretty | grep relocating_shards",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc -n openshift-logging get pods -l component=elasticsearch",
"export ES_POD_NAME=<elasticsearch_pod_name>",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=_all/_settings?pretty -X PUT -d '{\"index.blocks.read_only_allow_delete\": null}'",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8 Filesystem Size Used Avail Use% Mounted on /dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7 Filesystem Size Used Avail Use% Mounted on /dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw Filesystem Size Used Avail Use% Mounted on /dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent",
"oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- indices",
"oc exec -n openshift-logging -c elasticsearch USDES_POD_NAME -- es_util --query=<elasticsearch_index_name> -X DELETE",
"oc project openshift-logging",
"oc get Elasticsearch",
"NAME AGE elasticsearch 5h9m",
"oc get Elasticsearch <Elasticsearch-instance> -o yaml",
"oc get Elasticsearch elasticsearch -n openshift-logging -o yaml",
"status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: \"\" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: \"True\" type: Unschedulable",
"status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable",
"status: clusterHealth: \"\" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: \"True\" type: InvalidRedundancy",
"status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters",
"status: clusterHealth: green conditions: - lastTransitionTime: \"2021-05-07T01:05:13Z\" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices",
"Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw",
". Status: Running . Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 . Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True . Events: <none>",
"oc get deployment --selector component=elasticsearch -o name",
"deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3",
"oc describe deployment elasticsearch-cdm-1gon-1",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable . Events: <none>",
"oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d",
"oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Events: <none>",
"eo_elasticsearch_cr_cluster_management_state{state=\"managed\"} 1 eo_elasticsearch_cr_cluster_management_state{state=\"unmanaged\"} 0",
"eo_elasticsearch_cr_restart_total{reason=\"cert_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"rolling_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"scheduled_restart\"} 3",
"Total number of Namespaces. es_index_namespaces_total 5",
"es_index_document_count{namespace=\"namespace_1\"} 25 es_index_document_count{namespace=\"namespace_2\"} 10 es_index_document_count{namespace=\"namespace_3\"} 5",
"message\": \"Secret \\\"elasticsearch\\\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]\", \"reason\": \"Missing Required Secrets\",",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {}",
"oc get deployment",
"cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cd-tuhduuw-1-f5c885dbf-dlqws 1/1 Running 0 2m4s elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s",
"cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\"",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: <channel> 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator",
"oc apply -f <filename>.yaml",
"oc get csv --all-namespaces",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-node-lease elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-public elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-system elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-apiserver elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-authentication-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-authentication elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-controller-manager-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-controller-manager elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-credential-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {}",
"oc get deployment",
"cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-logging: \"true\" openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed",
"oc -n openshift-logging delete subscription <subscription>",
"oc -n openshift-logging delete operatorgroup <operator_group_name>",
"oc delete clusterserviceversion cluster-logging.<version>",
"oc get operatorgroup <operator_group_name> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-logging-f52cn namespace: openshift-logging spec: upgradeStrategy: Default status: namespaces: - \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack spec: storage: schemas: # version: v12 1 - effectiveDate: \"<yyyy>-<mm>-<future_dd>\" 2 version: v13",
"oc edit lokistack <name> -n openshift-logging",
"oc get pod -n openshift-logging --selector component=elasticsearch",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m",
"oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }",
"oc project openshift-logging",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s",
"oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices",
"Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0",
"oc get kibana kibana -o json",
"[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: visualization: type: <visualizer_type> 1 kibana: 2 resources: {} nodeSelector: {} proxy: {} replicas: {} tolerations: {} ocpConsole: 3 logsLimit: {} timeout: {}",
"oc apply -f <filename>.yaml",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"oc get consoles.operator.openshift.io cluster -o yaml |grep logging-view-plugin || oc patch consoles.operator.openshift.io cluster --type=merge --patch '{ \"spec\": { \"plugins\": [\"logging-view-plugin\"]}}'",
"oc patch clusterlogging instance --type=merge --patch '{ \"metadata\": { \"annotations\": { \"logging.openshift.io/ocp-console-migration-target\": \"lokistack-dev\" }}}' -n openshift-logging",
"clusterlogging.logging.openshift.io/instance patched",
"oc get clusterlogging instance -o=jsonpath='{.metadata.annotations.logging\\.openshift\\.io/ocp-console-migration-target}' -n openshift-logging",
"\"lokistack-dev\"",
"oc auth can-i get pods --subresource log -n <project>",
"yes",
"oc auth can-i get pods --subresource log -n <project>",
"yes",
"{ \"_index\": \"infra-000001\", \"_type\": \"_doc\", \"_id\": \"YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3\", \"_version\": 1, \"_score\": null, \"_source\": { \"docker\": { \"container_id\": \"f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1\" }, \"kubernetes\": { \"container_name\": \"registry-server\", \"namespace_name\": \"openshift-marketplace\", \"pod_name\": \"redhat-marketplace-n64gc\", \"container_image\": \"registry.redhat.io/redhat/redhat-marketplace-index:v4.7\", \"container_image_id\": \"registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f\", \"pod_id\": \"8f594ea2-c866-4b5c-a1c8-a50756704b2a\", \"host\": \"ip-10-0-182-28.us-east-2.compute.internal\", \"master_url\": \"https://kubernetes.default.svc\", \"namespace_id\": \"3abab127-7669-4eb3-b9ef-44c04ad68d38\", \"namespace_labels\": { \"openshift_io/cluster-monitoring\": \"true\" }, \"flat_labels\": [ \"catalogsource_operators_coreos_com/update=redhat-marketplace\" ] }, \"message\": \"time=\\\"2020-09-23T20:47:03Z\\\" level=info msg=\\\"serving registry\\\" database=/database/index.db port=50051\", \"level\": \"unknown\", \"hostname\": \"ip-10-0-182-28.internal\", \"pipeline_metadata\": { \"collector\": { \"ipaddr4\": \"10.0.182.28\", \"inputname\": \"fluent-plugin-systemd\", \"name\": \"fluentd\", \"received_at\": \"2020-09-23T20:47:15.007583+00:00\", \"version\": \"1.7.4 1.6.0\" } }, \"@timestamp\": \"2020-09-23T20:47:03.422465+00:00\", \"viaq_msg_id\": \"YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3\", \"openshift\": { \"labels\": { \"logging\": \"infra\" } } }, \"fields\": { \"@timestamp\": [ \"2020-09-23T20:47:03.422Z\" ], \"pipeline_metadata.collector.received_at\": [ \"2020-09-23T20:47:15.007Z\" ] }, \"sort\": [ 1600894023422 ] }",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"oc -n openshift-logging edit ClusterLogging instance",
"oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging . spec: visualization: type: \"kibana\" kibana: replicas: 1 1",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"variant: openshift version: 4.15.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: \"worker\" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: logs: type: vector vector: {}",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}",
"pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json",
"{\"structured\": { \"level\": \"info\", \"name\": \"fred\", \"home\": \"bedrock\" }, \"more fields...\"}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: - application outputRefs: - default parse: json 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"apache\", ...}} }",
"{ \"structured\":{\"name\":\"wilma\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"google\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"openshift\":{\"labels\":{\"myLabel\": \"myValue\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json",
"oc create -f <filename>.yaml",
"oc delete pod --selector logging-infra=collector",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat enableStructuredContainerLogs: true 2 pipelines: - inputRefs: - application name: application-logs outputRefs: - default parse: json",
"apiVersion: v1 kind: Pod metadata: annotations: containerType.logging.openshift.io/heavy: heavy 1 containerType.logging.openshift.io/low: low spec: containers: - name: heavy 2 image: heavyimage - name: low image: lowimage",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-secure 4 type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 5 type: \"elasticsearch\" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 6 type: \"kafka\" url: tls://kafka.secure.com:9093/app-topic inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 8 inputRefs: - audit outputRefs: - elasticsearch-secure - default labels: secure: \"true\" 9 datacenter: \"east\" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: \"west\" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: \"south\"",
"oc create secret generic -n <namespace> <secret_name> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 pipelines: - inputRefs: - <log_type> 4 outputRefs: - <output_name> 5 outputs: - name: <output_name> 6 type: <output_type> 7 url: <log_output_url> 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: tuning: delivery: AtLeastOnce 1 compression: none 2 maxWrite: <integer> 3 minRetryDuration: 1s 4 maxRetryDuration: 1s 5",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-app-logs inputRefs: - application outputRefs: - default detectMultilineErrors: true",
"[transforms.detect_exceptions_app-logs] type = \"detect_exceptions\" inputs = [\"application\"] languages = [\"All\"] group_by = [\"kubernetes.namespace_name\",\"kubernetes.pod_name\",\"kubernetes.container_name\"] expire_after_ms = 2000 multiline_flush_interval_ms = 1000",
"<label @MULTILINE_APP_LOGS> <match kubernetes.**> @type detect_exceptions remove_tag_prefix 'kubernetes' message message force_line_breaks true multiline_flush_interval .2 </match> </label>",
"oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json= <your_service_account_key_file.json>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: gcp-1 type: googleCloudLogging secret: name: gcp-secret googleCloudLogging: projectId : \"openshift-gce-devel\" 4 logId : \"app-gcp\" 5 pipelines: - name: test-app inputRefs: 6 - application outputRefs: - gcp-1",
"oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: splunk-receiver 4 secret: name: vector-splunk-secret 5 type: splunk 6 url: <http://your.splunk.hec.url:8088> 7 pipelines: 8 - inputRefs: - application - infrastructure name: 9 outputRefs: - splunk-receiver 10",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: httpout-app type: http url: 4 http: headers: 5 h1: v1 h2: v2 method: POST secret: name: 6 tls: insecureSkipVerify: 7 pipelines: - name: inputRefs: - application outputRefs: - httpout-app 8",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: openshift-logging type: Opaque data: shared_key: <your_shared_key> 1",
"Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName \"<resource_name>\" -Name \"<workspace_name>\"",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id 1 logType: my_log_type 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor-app type: azureMonitor azureMonitor: customerId: my-customer-id logType: application_log 1 secret: name: my-secret - name: azure-monitor-infra type: azureMonitor azureMonitor: customerId: my-customer-id logType: infra_log # secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor-app - name: infra-pipeline inputRefs: - infrastructure outputRefs: - azure-monitor-infra",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id logType: my_log_type azureResourceId: \"/subscriptions/111111111\" 1 host: \"ods.opinsights.azure.com\" 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project 8 pipelines: - name: forward-to-fluentd-insecure 9 inputRefs: 10 - my-app-logs outputRefs: 11 - fluentd-server-insecure labels: project: \"my-project\" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application 14 - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: \"C1234\"",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 inputs: 5 - name: myAppLogData application: selector: matchLabels: 6 environment: production app: nginx namespaces: 7 - app1 - app2 outputs: 8 - <output_name>",
"- inputRefs: [ myAppLogData, myOtherAppLogData ]",
"oc create -f <file-name>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 outputRefs: default filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: loki-insecure 4 type: \"loki\" 5 url: http://loki.insecure.com:3100 6 loki: tenantKey: kubernetes.namespace_name labelKeys: - kubernetes.labels.foo - name: loki-secure 7 type: \"loki\" url: https://loki.secure.com:3100 secret: name: loki-secret 8 loki: tenantKey: kubernetes.namespace_name 9 labelKeys: - kubernetes.labels.foo 10 pipelines: - name: application-logs 11 inputRefs: 12 - application - audit outputRefs: 13 - loki-secure",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-example 4 type: elasticsearch 5 elasticsearch: version: 8 6 url: http://elasticsearch.example.com:9200 7 secret: name: es-secret 8 pipelines: - name: application-logs 9 inputRefs: 10 - application - audit outputRefs: - elasticsearch-example 11 - default 12 labels: myLabel: \"myValue\" 13",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: <username> password: <password>",
"oc create secret -n openshift-logging openshift-test-secret.yaml",
"kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 labels: clusterId: \"C1234\" 11 - name: forward-to-fluentd-insecure 12 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: \"C1234\"",
"oc create -f <file-name>.yaml",
"input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } }",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: rsyslog-east 4 type: syslog 5 syslog: 6 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 7 secret: 8 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'tcp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 9 inputRefs: 10 - audit - application outputRefs: 11 - rsyslog-east - default 12 labels: secure: \"true\" 13 syslog: \"east\" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: \"west\"",
"oc create -f <filename>.yaml",
"spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout",
"<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {\"msgcontent\"=>\"Message Contents\", \"timestamp\"=>\"2020-11-15 17:06:09\", \"tag_key\"=>\"rec_tag\", \"index\"=>56}",
"<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={\"msgcontent\":\"My life is my message\", \"timestamp\":\"2020-11-16 10:49:36\", \"tag_key\":\"rec_tag\", \"index\":76}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: app-logs 4 type: kafka 5 url: tls://kafka.example.devlab.com:9093/app-topic 6 secret: name: kafka-secret 7 - name: infra-logs type: kafka url: tcp://kafka.devlab2.example.com:9093/infra-topic 8 - name: audit-logs type: kafka url: tls://kafka.qelab.example.com:9093/audit-topic secret: name: kafka-secret-qe pipelines: - name: app-topic 9 inputRefs: 10 - application outputRefs: 11 - app-logs labels: logType: \"application\" 12 - name: infra-topic 13 inputRefs: - infrastructure outputRefs: - infra-logs labels: logType: \"infra\" - name: audit-topic inputRefs: - audit outputRefs: - audit-logs labels: logType: \"audit\"",
"spec: outputs: - name: app-logs type: kafka secret: name: kafka-secret-dev kafka: 1 brokers: 2 - tls://kafka-broker1.example.com:9093/ - tls://kafka-broker2.example.com:9093/ topic: app-topic 3",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=",
"oc apply -f cw-secret.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: cw-secret 9 pipelines: - name: infra-logs 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12",
"oc create -f <file-name>.yaml",
"oc get Infrastructure/cluster -ojson | jq .status.infrastructureName \"mycluster-7977k\"",
"oc run busybox --image=busybox -- sh -c 'while true; do echo \"My life is my message\"; sleep 3; done' oc logs -f busybox My life is my message My life is my message My life is my message",
"oc get ns/app -ojson | jq .metadata.uid \"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\"",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.application\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName \"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log\"",
"aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { \"events\": [ { \"timestamp\": 1629422704178, \"message\": \"{\\\"docker\\\":{\\\"container_id\\\":\\\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\\\"},\\\"kubernetes\\\":{\\\"container_name\\\":\\\"busybox\\\",\\\"namespace_name\\\":\\\"app\\\",\\\"pod_name\\\":\\\"busybox\\\",\\\"container_image\\\":\\\"docker.io/library/busybox:latest\\\",\\\"container_image_id\\\":\\\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\\\",\\\"pod_id\\\":\\\"870be234-90a3-4258-b73f-4f4d6e2777c7\\\",\\\"host\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"labels\\\":{\\\"run\\\":\\\"busybox\\\"},\\\"master_url\\\":\\\"https://kubernetes.default.svc\\\",\\\"namespace_id\\\":\\\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\\\",\\\"namespace_labels\\\":{\\\"kubernetes_io/metadata_name\\\":\\\"app\\\"}},\\\"message\\\":\\\"My life is my message\\\",\\\"level\\\":\\\"unknown\\\",\\\"hostname\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"pipeline_metadata\\\":{\\\"collector\\\":{\\\"ipaddr4\\\":\\\"10.0.216.3\\\",\\\"inputname\\\":\\\"fluent-plugin-systemd\\\",\\\"name\\\":\\\"fluentd\\\",\\\"received_at\\\":\\\"2021-08-20T01:25:08.085760+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-20T01:25:04.178986+00:00\\\",\\\"viaq_index_name\\\":\\\"app-write\\\",\\\"viaq_msg_id\\\":\\\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\\\",\\\"log_type\\\":\\\"application\\\",\\\"time\\\":\\\"2021-08-20T01:25:04+00:00\\\"}\", \"ingestionTime\": 1629422744016 },",
"cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"demo-group-prefix.application\" \"demo-group-prefix.audit\" \"demo-group-prefix.infrastructure\"",
"cloudwatch: groupBy: namespaceName region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.app\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"cloudwatch: groupBy: namespaceUUID region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf\" // uid of the \"app\" namespace \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions",
"apiVersion: v1 kind: Secret metadata: namespace: openshift-logging name: my-secret-name stringData: role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <your_role_name>-credrequest namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - logs:PutLogEvents - logs:CreateLogGroup - logs:PutRetentionPolicy - logs:CreateLogStream - logs:DescribeLogGroups - logs:DescribeLogStreams effect: Allow resource: arn:aws:logs:*:*:* secretRef: name: <your_role_name> namespace: openshift-logging serviceAccountNames: - logcollector",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com 1",
"oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: clf-collector 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: <your_secret_name> 9 pipelines: - name: to-cloudwatch 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: <log_collector_type> 1 resources: {} tolerations: {}",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1alpha1 kind: LogFileMetricExporter metadata: name: instance namespace: openshift-logging spec: nodeSelector: {} 1 resources: 2 limits: cpu: 500m memory: 256Mi requests: cpu: 200m memory: 128Mi tolerations: [] 3",
"oc apply -f <filename>.yaml",
"oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging",
"NAME READY STATUS RESTARTS AGE logfilesmetricexporter-9qbjj 1/1 Running 0 2m46s logfilesmetricexporter-cbc4v 1/1 Running 0 2m46s",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: fluentd resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi",
"apiVersion: logging.openshift.io/v1beta1 kind: ClusterLogForwarder metadata: spec: serviceAccountName: <service_account_name> inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - name: http-pipeline inputRefs: - http-receiver",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - inputRefs: - http-receiver name: http-pipeline",
"oc apply -f <filename>.yaml",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9",
"oc get pods -l component=collector -n openshift-logging",
"oc extract configmap/collector-config --confirm",
"<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size \"#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}\" chunk_limit_size 8m overflow_action throw_exception disable_chunk_backup true </buffer>",
"apiVersion: template.openshift.io/v1 kind: Template metadata: name: eventrouter-template annotations: description: \"A pod forwarding kubernetes events to OpenShift Logging stack.\" tags: \"events,EFK,logging,cluster-logging\" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [\"\"] resources: [\"events\"] verbs: [\"get\", \"watch\", \"list\"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { \"sink\": \"stdout\" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" spec: selector: matchLabels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" replicas: 1 template: metadata: labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: \"registry.redhat.io/openshift-logging/eventrouter-rhel9:v0.4\" - name: CPU 7 displayName: CPU value: \"100m\" - name: MEMORY 8 displayName: Memory value: \"128Mi\" - name: NAMESPACE displayName: Namespace value: \"openshift-logging\" 9",
"oc process -f <templatefile> | oc apply -n openshift-logging -f -",
"oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -",
"serviceaccount/eventrouter created clusterrole.rbac.authorization.k8s.io/event-reader created clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created",
"oc get pods --selector component=eventrouter -o name -n openshift-logging",
"pod/cluster-logging-eventrouter-d649f97c8-qvv8r",
"oc logs <cluster_logging_eventrouter_pod> -n openshift-logging",
"oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging",
"{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"namespace\":\"openshift-service-catalog-removed\",\"selfLink\":\"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"uid\":\"787d7b26-3d2f-4017-b0b0-420db4ae62c0\",\"resourceVersion\":\"21399\",\"creationTimestamp\":\"2020-09-08T15:40:26Z\"},\"involvedObject\":{\"kind\":\"Job\",\"namespace\":\"openshift-service-catalog-removed\",\"name\":\"openshift-service-catalog-controller-manager-remover\",\"uid\":\"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f\",\"apiVersion\":\"batch/v1\",\"resourceVersion\":\"21280\"},\"reason\":\"Completed\",\"message\":\"Job completed\",\"source\":{\"component\":\"job-controller\"},\"firstTimestamp\":\"2020-09-08T15:40:26Z\",\"lastTimestamp\":\"2020-09-08T15:40:26Z\",\"count\":1,\"type\":\"Normal\"}}",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-5.9\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-5.9\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-logging: \"true\" openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"oc create secret generic -n openshift-logging <your_secret_name> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"oc get secrets",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s",
"oc create secret generic logging-loki-aws --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\"",
"oc -n openshift-logging create secret generic \"logging-loki-aws\" --from-literal=bucketnames=\"<s3_bucket_name>\" --from-literal=region=\"<bucket_region>\" --from-literal=audience=\"<oidc_audience>\" 1",
"oc create secret generic logging-loki-azure --from-literal=container=\"<azure_container_name>\" --from-literal=environment=\"<azure_environment>\" \\ 1 --from-literal=account_name=\"<azure_account_name>\" --from-literal=account_key=\"<azure_account_key>\"",
"oc -n openshift-logging create secret generic logging-loki-azure --from-literal=environment=\"<azure_environment>\" --from-literal=account_name=\"<storage_account_name>\" --from-literal=container=\"<container_name>\"",
"oc create secret generic logging-loki-gcs --from-literal=bucketname=\"<bucket_name>\" --from-file=key.json=\"<path/to/key.json>\"",
"oc create secret generic logging-loki-minio --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<minio_bucket_endpoint>\" --from-literal=access_key_id=\"<minio_access_key_id>\" --from-literal=access_key_secret=\"<minio_access_key_secret>\"",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io",
"BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')",
"ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)",
"oc create -n openshift-logging secret generic logging-loki-odf --from-literal=access_key_id=\"<access_key_id>\" --from-literal=access_key_secret=\"<secret_access_key>\" --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"https://<bucket_host>:<bucket_port>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\" --from-literal=project_id=\"<swift_project_id>\" --from-literal=project_name=\"<swift_project_name>\" --from-literal=project_domain_id=\"<swift_project_domain_id>\" --from-literal=project_domain_name=\"<swift_project_domain_name>\" --from-literal=region=\"<swift_region>\"",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator",
"oc apply -f <filename>.yaml",
"oc get csv -n --all-namespaces",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"oc adm groups new cluster-admin",
"oc adm groups add-users cluster-admin <username>",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admin",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"delete pvc __<pvc_name>__ -n openshift-logging",
"delete pod __<pod_name>__ -n openshift-logging",
"patch pvc __<pvc_name>__ -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: logging-all-application-logs-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view 1 subjects: 2 - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-read-logs namespace: log-test-0 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view subjects: - kind: User apiGroup: rbac.authorization.k8s.io name: testuser-0",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: tenants: mode: openshift-logging 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\",\"type\": \"memberlist\"}}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3",
"apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi",
"resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"oc project openshift-logging",
"oc get pods -l component=elasticsearch",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":",
"oc rollout resume deployment/<deployment-name>",
"oc rollout resume deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 resumed",
"oc get pods -l component=elasticsearch-",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h",
"oc rollout pause deployment/<deployment-name>",
"oc rollout pause deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 paused",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'",
"oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging",
"172.30.183.229",
"oc get service elasticsearch -n openshift-logging",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h",
"oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108",
"oc project openshift-logging",
"oc extract secret/elasticsearch --to=. --keys=admin-ca",
"admin-ca",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1",
"cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml",
"oc create -f <file-name>.yaml",
"route.route.openshift.io/elasticsearch created",
"token=USD(oc whoami -t)",
"routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`",
"curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"",
"{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }",
"outputRefs: - default",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: type: \"fluentd\" fluentd: {}",
"oc get pods -l component=collector -n openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: <name> namespace: <namespace> spec: rules: enabled: true 1 selector: matchLabels: openshift.io/<label_name>: \"true\" 2 namespaceSelector: matchLabels: openshift.io/<label_name>: \"true\" 3",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputs: - name: kafka-example 1 type: kafka 2 limit: maxRecordsPerSecond: 1000000 3",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: selector: matchLabels: { example: label } 2 containerLimit: maxRecordsPerSecond: 0 3",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: namespaces: [ example-ns-1, example-ns-2 ] 2 containerLimit: maxRecordsPerSecond: 10 3 - name: <input_name> application: namespaces: [ test ] containerLimit: maxRecordsPerSecond: 1000",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: important type: drop drop: test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: important type: drop drop: test: - field: .kubernetes.namespace_name matches: \"^open\" test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4",
"oc apply -f <filename>.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1",
"oc apply -f <filename>.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs1 infrastructure: sources: 1 - node - name: mylogs2 audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: <name> 1 namespace: <namespace> 2 spec: managementState: \"Managed\" collection: type: \"vector\" tolerations: - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi nodeSelector: collector: needed",
"oc apply -f <filename>.yaml",
"oc get pods --selector component=collector -o wide -n <project_name>",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: v1 kind: Pod metadata: name: collector-example namespace: openshift-logging spec: collection: type: vector tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 collector=node:NoExecute",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: vector tolerations: - key: collector 1 operator: Exists 2 effect: NoExecute 3 tolerationSeconds: 6000 4 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: <name> 1 namespace: <namespace> 2 spec: managementState: \"Managed\" collection: type: \"vector\" tolerations: - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi nodeSelector: collector: needed",
"oc apply -f <filename>.yaml",
"oc get pods --selector component=collector -o wide -n <project_name>",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV",
"currentCSV: serverless-operator.v1.28.0",
"oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless",
"subscription.operators.coreos.com \"serverless-operator\" deleted",
"oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless",
"clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/logging/index
|
7.50. ethtool
|
7.50. ethtool 7.50.1. RHEA-2015:1306 - ethtool enhancement update Updated ethtool packages that add one enhancement are now available for Red Hat Enterprise Linux 6. The ethtool packages provide the ethtool utility that enables querying and changing settings such as speed, port, autonegotiation, PCI locations, and checksum offload on many network devices, especially Ethernet devices. Enhancement BZ# 1066605 This update enables the ethtool utility to accept a user-defined Receive-Side Scaling (RSS) hash key value for the Ethernet driver, which improves the performance and security of RSS. As a result, the user can set the RSS hash key value for the Ethernet driver with ethtool. Users of ethtool are advised to upgrade to these updated packages, which add this enhancement.
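A minimal usage sketch, assuming a NIC named eth0 and a driver that supports RSS hash key configuration (both the device name and the key bytes below are placeholders, not values from this erratum):
ethtool -x eth0            # show the current RSS indirection table and hash key
ethtool -X eth0 hkey 6d:5a:56:da:25:5b:0e:c2:41:67:25:3d:43:a3:8f:b0:d0:ca:2b:cb:ae:7b:30:b4:77:cb:2d:a3:80:30:f2:0c:6a:42:b7:3b:be:ac:01:fa   # set a user-defined key; the required key length depends on the driver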
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-ethtool
|
Chapter 1. Post-installation configuration overview
|
Chapter 1. Post-installation configuration overview After installing OpenShift Container Platform, a cluster administrator can configure and customize the following components: Machine Bare metal Cluster Node Network Storage Users Alerts and notifications 1.1. Configuration tasks to perform after installation Cluster administrators can perform the following post-installation configuration tasks: Configure operating system features : Machine Config Operator (MCO) manages MachineConfig objects. By using MCO, you can perform the following tasks on an OpenShift Container Platform cluster: Configure nodes by using MachineConfig objects Configure MCO-related custom resources Configure bare metal nodes : The Bare Metal Operator (BMO) implements a Kubernetes API for managing bare metal hosts. It maintains an inventory of available bare metal hosts as instances of the BareMetalHost Custom Resource Definition (CRD). The Bare Metal Operator can: Inspect the host's hardware details and report them on the corresponding BareMetalHost. This includes information about CPUs, RAM, disks, NICs, and more. Inspect the host's firmware and configure BIOS settings. Provision hosts with a desired image. Clean a host's disk contents before or after provisioning. Configure cluster features : As a cluster administrator, you can modify the configuration resources of the major features of an OpenShift Container Platform cluster. These features include: Image registry Networking configuration Image build behavior Identity provider The etcd configuration Machine set creation to handle the workloads Cloud provider credential management Configure cluster components to be private : By default, the installation program provisions OpenShift Container Platform by using a publicly accessible DNS and endpoints. If you want your cluster to be accessible only from within an internal network, configure the following components to be private: DNS Ingress Controller API server Perform node operations : By default, OpenShift Container Platform uses Red Hat Enterprise Linux CoreOS (RHCOS) compute machines. As a cluster administrator, you can perform the following operations with the machines in your OpenShift Container Platform cluster: Add and remove compute machines Add and remove taints and tolerations to the nodes Configure the maximum number of pods per node Enable Device Manager Configure network : After installing OpenShift Container Platform, you can configure the following: Ingress cluster traffic Node port service range Network policy Enabling the cluster-wide proxy Configure storage : By default, containers operate using ephemeral storage or transient local storage. The ephemeral storage has a lifetime limitation. To store the data for a long time, you must configure persistent storage. You can configure storage by using one of the following methods: Dynamic provisioning : You can dynamically provision storage on demand by defining and creating storage classes that control different levels of storage, including storage access. Static provisioning : You can use Kubernetes persistent volumes to make existing storage available to a cluster. Static provisioning can support various device configurations and mount options. Configure users : OAuth access tokens allow users to authenticate themselves to the API. 
As a cluster administrator, you can configure OAuth to perform the following tasks: Specify an identity provider Use role-based access control to define and supply permissions to users Install an Operator from OperatorHub Manage alerts and notifications : By default, firing alerts are displayed on the Alerting UI of the web console. You can also configure OpenShift Container Platform to send alert notifications to external systems.
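As a small, hedged illustration of the dynamic provisioning model mentioned above, the following PersistentVolumeClaim sketch requests storage from an existing storage class; the claim name, namespace, size, and storage class name are placeholders that depend on your cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim         # placeholder name
  namespace: example-project  # placeholder namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi           # adjust to the capacity you need
  storageClassName: gp3-csi   # must match a StorageClass available in the cluster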
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/post-installation_configuration/post-install-configuration-overview
|
Chapter 4. Network Observability Operator in OpenShift Container Platform
|
Chapter 4. Network Observability Operator in OpenShift Container Platform Network Observability is an OpenShift operator that deploys a monitoring pipeline to collect and enrich network traffic flows that are produced by the Network Observability eBPF agent. 4.1. Viewing statuses The Network Observability Operator provides the Flow Collector API. When a Flow Collector resource is created, it deploys pods and services to create and store network flows in the Loki log store, as well as to display dashboards, metrics, and flows in the OpenShift Container Platform web console. Procedure Run the following command to view the state of FlowCollector : USD oc get flowcollector/cluster Example output Check the status of pods running in the netobserv namespace by entering the following command: USD oc get pods -n netobserv Example output flowlogs-pipeline pods collect flows, enrich the collected flows, and then send them to the Loki storage. netobserv-plugin pods create a visualization plugin for the OpenShift Container Platform Console. Check the status of pods running in the namespace netobserv-privileged by entering the following command: USD oc get pods -n netobserv-privileged Example output netobserv-ebpf-agent pods monitor network interfaces of the nodes to get flows and send them to flowlogs-pipeline pods. If you are using the Loki Operator, check the status of pods running in the openshift-operators-redhat namespace by entering the following command: USD oc get pods -n openshift-operators-redhat Example output 4.2. Network Observability Operator architecture The Network Observability Operator provides the FlowCollector API, which is instantiated at installation and configured to reconcile the eBPF agent , the flowlogs-pipeline , and the netobserv-plugin components. Only a single FlowCollector per cluster is supported. The eBPF agent runs on each cluster node with some privileges to collect network flows. The flowlogs-pipeline receives the network flow data and enriches the data with Kubernetes identifiers. If you choose to use Loki, the flowlogs-pipeline sends flow logs data to Loki for storing and indexing. The netobserv-plugin , which is a dynamic OpenShift Container Platform web console plugin, queries Loki to fetch network flow data. Cluster administrators can view the data in the web console. If you do not use Loki, you can generate metrics with Prometheus. Those metrics and their related dashboards are accessible in the web console. For more information, see "Network Observability without Loki". If you are using the Kafka option, the eBPF agent sends the network flow data to Kafka, and the flowlogs-pipeline reads from the Kafka topic before sending to Loki, as shown in the following diagram. Additional resources Network Observability without Loki 4.3. Viewing Network Observability Operator status and configuration You can inspect the status and view the details of the FlowCollector using the oc describe command. Procedure Run the following command to view the status and configuration of the Network Observability Operator: USD oc describe flowcollector/cluster
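To make the FlowCollector resource described above more concrete, the following is a rough sketch of a minimal configuration; the API version and field names differ between Network Observability Operator releases, so every field below is an assumption to verify with oc explain flowcollector.spec on your cluster rather than a definitive reference:
apiVersion: flows.netobserv.io/v1beta1   # assumed API version; newer releases may use a different one
kind: FlowCollector
metadata:
  name: cluster                          # only a single FlowCollector, named cluster, is supported
spec:
  deploymentModel: DIRECT                # assumed value; a Kafka deployment model routes flows through Kafka instead
  agent:
    type: EBPF                           # the eBPF agent collects flows on each node
    ebpf:
      sampling: 50                       # assumed field; matches the sampling value shown in the status example above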
|
[
"oc get flowcollector/cluster",
"NAME AGENT SAMPLING (EBPF) DEPLOYMENT MODEL STATUS cluster EBPF 50 DIRECT Ready",
"oc get pods -n netobserv",
"NAME READY STATUS RESTARTS AGE flowlogs-pipeline-56hbp 1/1 Running 0 147m flowlogs-pipeline-9plvv 1/1 Running 0 147m flowlogs-pipeline-h5gkb 1/1 Running 0 147m flowlogs-pipeline-hh6kf 1/1 Running 0 147m flowlogs-pipeline-w7vv5 1/1 Running 0 147m netobserv-plugin-cdd7dc6c-j8ggp 1/1 Running 0 147m",
"oc get pods -n netobserv-privileged",
"NAME READY STATUS RESTARTS AGE netobserv-ebpf-agent-4lpp6 1/1 Running 0 151m netobserv-ebpf-agent-6gbrk 1/1 Running 0 151m netobserv-ebpf-agent-klpl9 1/1 Running 0 151m netobserv-ebpf-agent-vrcnf 1/1 Running 0 151m netobserv-ebpf-agent-xf5jh 1/1 Running 0 151m",
"oc get pods -n openshift-operators-redhat",
"NAME READY STATUS RESTARTS AGE loki-operator-controller-manager-5f6cff4f9d-jq25h 2/2 Running 0 18h lokistack-compactor-0 1/1 Running 0 18h lokistack-distributor-654f87c5bc-qhkhv 1/1 Running 0 18h lokistack-distributor-654f87c5bc-skxgm 1/1 Running 0 18h lokistack-gateway-796dc6ff7-c54gz 2/2 Running 0 18h lokistack-index-gateway-0 1/1 Running 0 18h lokistack-index-gateway-1 1/1 Running 0 18h lokistack-ingester-0 1/1 Running 0 18h lokistack-ingester-1 1/1 Running 0 18h lokistack-ingester-2 1/1 Running 0 18h lokistack-querier-66747dc666-6vh5x 1/1 Running 0 18h lokistack-querier-66747dc666-cjr45 1/1 Running 0 18h lokistack-querier-66747dc666-xh8rq 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-b2xfb 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-jm94f 1/1 Running 0 18h",
"oc describe flowcollector/cluster"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_observability/nw-network-observability-operator
|
2.6. Reauthentication
|
2.6. Reauthentication JBoss Data Virtualization connections (defined by the org.teiid.jdbc.TeiidConnection interface) support the changeUser method to reauthenticate a given connection. If reauthentication is successful, the current connection may be used with the given identity. Existing statements and resultsets are still available for use under the old identity.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/Reauthentication2
|
Chapter 4. Using AMQ Management Console
|
Chapter 4. Using AMQ Management Console AMQ Management Console is a web console included in the AMQ Broker installation that enables you to use a web browser to manage AMQ Broker. AMQ Management Console is based on hawtio . 4.1. Overview AMQ Broker is a full-featured, message-oriented middleware broker. It offers specialized queueing behaviors, message persistence, and manageability. It supports multiple protocols and client languages, freeing you to use many of your application assets. AMQ Broker's key features allow you to: monitor your AMQ brokers and clients view the topology view network health at a glance manage AMQ brokers using: AMQ Management Console Command-line Interface (CLI) Management API The supported web browsers for AMQ Management Console are Firefox and Chrome. For more information on supported browser versions, see AMQ 7 Supported Configurations . 4.2. Configuring local and remote access to AMQ Management Console The procedure in this section shows how to configure local and remote access to AMQ Management Console. Remote access to the console can take one of two forms: Within a console session on a local broker, you use the Connect tab to connect to another, remote broker From a remote host, you connect to the console for the local broker, using an externally-reachable IP address for the local broker Prerequisites You must upgrade to at least AMQ Broker 7.1.0. As part of this upgrade, an access-management configuration file named jolokia-access.xml is added to the broker instance. For more information about upgrading, see Upgrading a Broker instance from 7.0.x to 7.1.0 . Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. Within the web element, observe that the web port is bound only to localhost by default. <web path="web"> <binding uri="http://localhost:8161"> <app url="redhat-branding" war="redhat-branding.war"/> <app url="artemis-plugin" war="artemis-plugin.war"/> <app url="dispatch-hawtio-console" war="dispatch-hawtio-console.war"/> <app url="console" war="console.war"/> </binding> </web> To enable connection to the console for the local broker from a remote host, change the web port binding to a network-reachable interface. For example: <web path="web"> <binding uri="http://0.0.0.0:8161"> In the preceding example, by specifying 0.0.0.0 , you bind the web port to all interfaces on the local broker. Save the bootstrap.xml file. Open the <broker_instance_dir> /etc/jolokia-access.xml file. Within the <cors> (that is, Cross-Origin Resource Sharing ) element, add an allow-origin entry for each HTTP origin request header that you want to allow to access the console. For example: <cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors> In the preceding configuration, you specify that the following connections are allowed: Connection from the local host (that is, the host machine for your local broker instance) to the console. The first asterisk ( * ) wildcard character allows either the http or https scheme to be specified in the connection request, based on whether you have configured the console for secure connections. The second asterisk wildcard character allows any port on the host machine to be used for the connection. Connection from a remote host to the console for the local broker, using the externally-reachable IP address of the local broker. 
In this case, the externally-reachable IP address of the local broker is 192.168.0.49 . Connection from within a console session opened on another, remote broker to the local broker. In this case, the IP address of the remote broker is 192.168.0.51 . Save the jolokia-access.xml file. Open the <broker_instance_dir> /etc/artemis.profile file. To enable the Connect tab in the console, set the value of the Dhawtio.disableProxy argument to false . -Dhawtio.disableProxy=false Important It is recommended that you enable remote connections from the console (that is, set the value of the Dhawtio.disableProxy argument to false ) only if the console is exposed to a secure network. Add a new argument, Dhawtio.proxyWhitelist , to the JAVA_ARGS list of Java system arguments. As a comma-separated list, specify IP addresses for any remote brokers that you want to connect to from the local broker (that is, by using the Connect tab within a console session running on the local broker). For example: -Dhawtio.proxyWhitelist=192.168.0.51 Based on the preceding configuration, you can use the Connect tab within a console session on the local broker to connect to another, remote broker with an IP address of 192.168.0.51 . Save the artemis.profile file. Additional resources To learn how to access the console, see Section 4.3, "Accessing AMQ Management Console" . For more information about: Cross-Origin Resource Sharing, see W3C Recommendations . Jolokia security, see Jolokia Protocols . Securing connections to the console, see Section 4.4.3, "Securing network access to AMQ Management Console" . 4.3. Accessing AMQ Management Console The procedure in this section shows how to: Open AMQ Management Console from the local broker Connect to other brokers from within a console session on the local broker Open a console instance for the local broker from a remote host using the externally-reachable IP address of the local broker Prerequisites You must have already configured local and remote access to the console. For more information, see Section 4.2, "Configuring local and remote access to AMQ Management Console" . Procedure In your web browser, navigate to the console address for the local broker. The console address is http:// <host:port> /console/login . If you are using the default address, navigate to http://localhost:8161/console/login . Otherwise, use the values of host and port that are defined for the bind attribute of the web element in the <broker_instance_dir> /etc/bootstrap.xml configuration file. Figure 4.1. Console login page Log in to AMQ Management Console using the default user name and password that you created when you created the broker. To connect to another, remote broker from the console session of the local broker: In the left menu, click the Connect tab. In the main pane, on the Remote tab, click the Add connection button. In the Add Connection dialog box, specify the following details: Name Name for the remote connection, for example, my_other_broker . Scheme Protocol to use for the remote connection. Select http for a non-secured connection, or https for a secured connection. Host IP address of a remote broker. You must have already configured console access for this remote broker. Port Port on the local broker to use for the remote connection. Specify the port value that is defined for the bind attribute of the web element in the <broker_instance_dir> /etc/bootstrap.xml configuration file. The default value is 8161 . Path Path to use for console access. Specify console/jolokia .
To test the connection, click the Test Connection button. If the connection test is successful, click the Add button. If the connection test fails, review and modify the connection details as needed. Test the connection again. On the Remote page, for a connection that you have added, click the Connect button. A new web browser tab opens for the console instance on the remote broker. In the Log In dialog box, enter the user name and password for the remote broker. Click Log In . The console instance for the remote broker opens. To connect to the console for the local broker from a remote host, specify the Jolokia endpoint for the local broker in a web browser. This endpoint includes the externally-reachable IP address that you specified for the local broker when configuring remote console access. For example: 4.4. Configuring AMQ Management Console Configure user access and request access to resources on the broker. 4.4.1. Securing AMQ Management Console using Red Hat Single Sign-On Prerequisites Red Hat Single Sign-On 7.4 Procedure Configure Red Hat Single Sign-On: Navigate to the realm in Red Hat Single Sign-On that you want to use for securing AMQ Management Console. Each realm in Red Hat Single Sign-On includes a client named Broker . This client is not related to AMQ. Create a new client in Red Hat Single Sign-On, for example artemis-console . Navigate to the client settings page and set: Valid Redirect URIs to the AMQ Management Console URL followed by * , for example: Web Origins to the same value as Valid Redirect URIs . Red Hat Single Sign-On allows you to enter + , indicating that allowed CORS origins include the value for Valid Redirect URIs . Create a role for the client, for example guest . Make sure all users who require access to AMQ Management Console are assigned the above role, for example, using Red Hat Single Sign-On groups. Configure the AMQ Broker instance: Add the following to your <broker-instance-dir> /instances/broker0/etc/login.config file to configure AMQ Management Console to use Red Hat Single Sign-On: Adding this configuration sets up a JAAS principal and a requirement for a bearer token from Red Hat Single Sign-On. The connection to Red Hat Single Sign-On is defined in the keycloak-bearer-token.json file, as described in the next step. Create a file <broker-instance-dir> /etc/keycloak-bearer-token.json with the following contents to specify the connection to Red Hat Single Sign-On used for the bearer token exchange: { "realm": " <realm-name> ", "resource": " <client-name> ", "auth-server-url": " <RHSSO-URL> /auth", "principal-attribute": "preferred_username", "use-resource-role-mappings": true, "ssl-required": "external", "confidential-port": 0 } <realm-name> the name of the realm in Red Hat Single Sign-On <client-name> the name of the client in Red Hat Single Sign-On <RHSSO-URL> the URL of Red Hat Single Sign-On Create a file <broker-instance-dir> /etc/keycloak-js-token.json with the following contents to specify the Red Hat Single Sign-On authentication endpoint: { "realm": "<realm-name>", "clientId": "<client-name>", "url": " <RHSSO-URL> /auth" } Configure the security settings by editing the <broker-instance-dir> /etc/broker.xml file.
For example, to allow users with the amq role to consume messages and users with the guest role to send messages, add the following: <security-setting match="Info"> <permission roles="amq" type="createDurableQueue"/> <permission roles="amq" type="deleteDurableQueue"/> <permission roles="amq" type="createNonDurableQueue"/> <permission roles="amq" type="deleteNonDurableQueue"/> <permission roles="guest" type="send"/> <permission roles="amq" type="consume"/> </security-setting> Run the AMQ Broker instance and validate the AMQ Management Console configuration. 4.4.2. Setting up user access to AMQ Management Console You can access AMQ Management Console using the broker login credentials. The following table provides information about different methods to add additional broker users to access AMQ Management Console: Authentication Method Description Guest authentication Enables anonymous access. In this configuration, any user who connects without credentials or with the wrong credentials will be authenticated automatically and assigned a specific user and role. For more information, see Configuring guest access in Configuring AMQ Broker . Basic user and password authentication For each user, you must define a username and password and assign a security role. Users can only log into AMQ Management Console using these credentials. For more information, see Configuring basic user and password authentication in Configuring AMQ Broker . LDAP authentication Users are authenticated and authorized by checking the credentials against user data stored in a central X.500 directory server. For more information, see Configuring LDAP to authenticate clients in Configuring AMQ Broker . 4.4.3. Securing network access to AMQ Management Console To secure AMQ Management Console when the console is being accessed over a WAN or the internet, use SSL to specify that network access uses https instead of http . Prerequisites The following should be located in the <broker_instance_dir> /etc/ directory: Java key store Java trust store (needed only if you require client authentication) Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. In the <web> element, add the following attributes: <web path="web"> <binding uri="https://0.0.0.0:8161" keyStorePath="<path_to_keystore>" keyStorePassword="<password>" clientAuth="<true/false>" trustStorePath="<path_to_truststore>" trustStorePassword="<password>"> </binding> </web> bind For secure connections to the console, change the URI scheme to https . keyStorePath Path of the keystore file. For example: keyStorePath=" <broker_instance_dir> /etc/keystore.jks" keyStorePassword Key store password. This password can be encrypted. clientAuth Specifies whether client authentication is required. The default value is false . trustStorePath Path of the trust store file. You need to define this attribute only if clientAuth is set to true . trustStorePassword Trust store password. This password can be encrypted. Additional resources For more information about encrypting passwords in broker configuration files, including bootstrap.xml , see Encrypting Passwords in Configuration Files . 4.4.4. Configuring AMQ Management Console to use certificate-based authentication You can configure AMQ Management Console to authenticate users by using certificates instead of passwords. Procedure Obtain certificates for the broker and clients from a trusted certificate authority or generate self-signed certificates.
If you want to generate self-signed certificates, complete the following steps: Generate a self-signed certificate for the broker. USD keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg "RSA" -keysize 2048 -dname "CN=ActiveMQ Broker, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ" -ext bc=ca:false -ext eku=cA Export the certificate from the broker keystore, so that it can be shared with clients. USD keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -alias client -exportcert -rfc > broker.crt On the client, import the broker certificate into the client truststore. USD keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file broker.crt -noprompt On the client, generate a self-signed certificate for the client. USD keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg "RSA" -keysize 2048 -dname "CN=ActiveMQ Client, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ" -ext bc=ca:false -ext eku=cA Export the client certificate from the client keystore to a file so that it can be added to the broker truststore. USD keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -alias client -exportcert -rfc > client.crt Import the client certificate into the broker truststore. USD keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file client.crt -noprompt Note On the broker machine, ensure that the keystore and truststore files are in a location that is accessible to the broker. In the <broker_instance_dir>/etc/bootstrap.xml file, update the web configuration to enable the HTTPS protocol and client authentication for the broker console. For example: ... <web path="web"> <binding uri="https://localhost:8161" keyStorePath="USD{artemis.instance}/etc/server-keystore.p12" keyStorePassword="password" clientAuth="true" trustStorePath="USD{artemis.instance}/etc/client-truststore.p12" trustStorePassword="password"> ... </binding> </web> ... binding uri Specify the https protocol to enable SSL and add a host name and port. keystorePath The path to the keystore where the broker certificate is installed. keystorePassword The password of the keystore where the broker certificate is installed. ClientAuth Set to true to configure the broker to require that each client presents a certificate when a client tries to connect to the broker console. trustStorePath If clients are using self-signed certificates, specify the path to the truststore where client certificates are installed. trustStorePassword If clients are using self-signed certificates, specify the password of the truststore where client certificates are installed . NOTE. You need to configure the trustStorePath and trustStorePassword properties only if clients are using self-signed certificates. Obtain the Subject Distinguished Names (DNs) from each client certificate so you can create a mapping between each client certificate and a broker user. Export each client certificate from the client's keystore file into a temporary file. For example: Print the contents of the exported certificate: The output is similar to that shown below: The Owner entry is the Subject DN. The format used to enter the Subject DN depends on your platform. 
The string above could also be represented as; Enable certificate-based authentication for the broker's console. Open the <broker_instance_dir> /etc/login.config configuration file. Add the certificate login module and reference the user and roles properties files. For example: activemq { org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule debug=true org.apache.activemq.jaas.textfiledn.user="artemis-users.properties" org.apache.activemq.jaas.textfiledn.role="artemis-roles.properties"; }; org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule The implementation class. org.apache.activemq.jaas.textfiledn.user Specifies the location of the user properties file relative to the directory that contains the login configuration file. org.apache.activemq.jaas.textfiledn.role Specifies the properties file that maps users to defined roles for the login module implementation. Note If you change the default name of the certificate login module configuration in the <broker_instance_dir> /etc/login.config file, you must update the value of the -dhawtio.realm argument in the <broker_instance_dir>/etc/artemis.profile file to match the new name. The default name is activemq . Open the <broker_instance_dir>/etc/artemis-users.properties file. Create a mapping between client certificates and broker users by adding the Subject DNS that you obtained from each client certificate to a broker user. For example: user1=CN=user1,O=Progress,C=US user2=CN=user2,O=Progress,C=US In this example, the user1 broker user is mapped to the client certificate that has a Subject Distinguished Name of CN=user1,O=Progress,C=US Subject DN. After you create a mapping between a client certificate and a broker user, the broker can authenticate the user by using the certificate. Open the <broker_instance_dir>/etc/artemis-roles.properties file. Grant users permission to log in to the console by adding them to the role that is specified for the HAWTIO_ROLE variable in the <broker_instance_dir>/etc/artemis.profile file. The default value of the HAWTIO_ROLE variable is amq . For example: amq=user1, user2 Configure the following recommended security properties for the HTTPS protocol. Open the <broker_instance_dir>/etc/artemis.profile file. Set the hawtio.http.strictTransportSecurity property to allow only HTTPS requests to the AMQ Management Console and to convert any HTTP requests to HTTPS. For example: hawtio.http.strictTransportSecurity = max-age=31536000; includeSubDomains; preload Set the hawtio.http.publicKeyPins property to instruct the web browser to associate a specific cryptographic public key with the AMQ Management Console to decrease the risk of "man-in-the-middle" attacks using forged certificates. For example: hawtio.http.publicKeyPins = pin-sha256="..."; max-age=5184000; includeSubDomains 4.4.5. Configuring AMQ Management Console to handle X-forwarded headers If requests to AMQ Management Console are routed through a proxy server, you can configure the AMQ Broker embedded web server, which hosts AMQ Management Console, to handle X-Forwarded headers. By handling X-Forwarded headers, AMQ Management Console can receive header information that is otherwise altered or lost when a proxy is involved in the path of a request. 
For example, the proxy can expose AMQ Management Console using HTTPS, and the AMQ Management Console, which uses HTTP, can identify from the X-Forwarded header that the connection between the browser and the proxy uses HTTPS and switch to HTTPS to serve browser requests. Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. In the <web> element, add the customizer attribute with a value of org.eclipse.jetty.server.ForwardedRequestCustomizer . For example: <web path="web" customizer="org.eclipse.jetty.server.ForwardedRequestCustomizer"> .. </web> Save the bootstrap.xml file. Start or restart the broker by entering the following command: On Linux: <broker_instance_dir> /bin/artemis run On Windows: <broker_instance_dir> \bin\artemis-service.exe start 4.5. Managing brokers using AMQ Management Console You can use AMQ Management Console to view information about a running broker and manage the following resources: Incoming network connections (acceptors) Addresses Queues 4.5.1. Viewing details about the broker To see how the broker is configured, in the left menu, click Artemis . In the folder tree, the local broker is selected by default. In the main pane, the following tabs are available: Status Displays information about the current status of the broker, such as uptime and cluster information. Also displays the amount of address memory that the broker is currently using. The graph shows this value as a proportion of the global-max-size configuration parameter. Figure 4.2. Status tab Connections Displays information about broker connections, including client, cluster, and bridge connections. Sessions Displays information about all sessions currently open on the broker. Consumers Displays information about all consumers currently open on the broker. Producers Displays information about producers currently open on the broker. Addresses Displays information about addresses on the broker. This includes internal addresses, such as store-and-forward addresses. Queues Displays information about queues on the broker. This includes internal queues, such as store-and-forward queues. Attributes Displays detailed information about attributes configured on the broker. Operations Displays JMX operations that you can execute on the broker from the console. When you click an operation, a dialog box opens that enables you to specify parameter values for the operation. Chart Displays real-time data for attributes configured on the broker. You can edit the chart to specify the attributes that are included in the chart. Broker diagram Displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker. 4.5.2. Viewing the broker diagram You can view a diagram of all AMQ Broker resources in your topology, including brokers (live and backup brokers), producers and consumers, addresses, and queues. Procedure In the left menu, click Artemis . In the main pane, click the Broker diagram tab. The console displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker, as shown in the figure. Figure 4.3. Broker diagram tab To change what items are displayed on the diagram, use the check boxes at the top of the diagram. Click Refresh . To show attributes for the local broker or an address or queue that is connected to it, click that node in the diagram. For example, the following figure shows a diagram that also includes attributes for the local broker. Figure 4.4. 
Broker diagram tab, including attributes 4.5.3. Viewing acceptors You can view details about the acceptors configured for the broker. Procedure In the left menu, click Artemis . In the folder tree, click acceptors . To view details about how an acceptor is configured, click the acceptor. The console shows the corresponding attributes on the Attributes tab, as shown in the figure. Figure 4.5. AMQP acceptor attributes To see complete details for an attribute, click the attribute. An additional window opens to show the details. 4.5.4. Managing addresses and queues An address represents a messaging endpoint. Within the configuration, a typical address is given a unique name. A queue is associated with an address. There can be multiple queues per address. Once an incoming message is matched to an address, the message is sent on to one or more of its queues, depending on the routing type configured. Queues can be configured to be automatically created and deleted. 4.5.4.1. Creating addresses A typical address is given a unique name, zero or more queues, and a routing type. A routing type determines how messages are sent to the queues associated with an address. Addresses can be configured with two different routing types. If you want your messages routed to... Use this routing type... A single queue within the matching address, in a point-to-point manner. Anycast Every queue within the matching address, in a publish-subscribe manner. Multicast You can create and configure addresses and queues, and then delete them when they are no longer in use. Procedure In the left menu, click Artemis . In the folder tree, click addresses . In the main pane, click the Create address tab. A page appears for you to create an address, as shown in the figure. Figure 4.6. Create Address page Complete the following fields: Address name The routing name of the address. Routing type Select one of the following options: Multicast : Messages sent to the address will be distributed to all subscribers in a publish-subscribe manner. Anycast : Messages sent to this address will be distributed to only one subscriber in a point-to-point manner. Both : Enables you to define more than one routing type per address. This typically results in an anti-pattern and is not recommended. Note If an address does use both routing types, and the client does not show a preference for either one, the broker defaults to the anycast routing type. The one exception is when the client uses the MQTT protocol. In that case, the default routing type is multicast . Click Create Address . 4.5.4.2. Sending messages to an address The following procedure shows how to use the console to send a message to an address. Procedure In the left menu, click Artemis . In the folder tree, select an address. On the navigation bar in the main pane, click More Send message . A page appears for you to create a message, as shown in the figure. Figure 4.7. Send Message page By default messages are sent using the credentials that you used to log in to AMQ Management Console. If you want to use different credentials, clear the Use current logon user checkbox and specify values in the Username and Password fields, which are displayed after you clear the checkbox. If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . 
The message is sent. To send additional messages, change any of the information you entered, and then click Send message . 4.5.4.3. Creating queues Queues provide a channel between a producer and a consumer. Prerequisites The address to which you want to bind the queue must exist. To learn how to use the console to create an address, see Section 4.5.4.1, "Creating addresses" . Procedure In the left menu, click Artemis . In the folder tree, select the address to which you want to bind the queue. In the main pane, click the Create queue tab. A page appears for you to create a queue, as shown in the figure. Figure 4.8. Create Queue page Complete the following fields: Queue name A unique name for the queue. Routing type Select one of the following options: Multicast : Messages sent to the parent address will be distributed to all queues bound to the address. Anycast : Only one queue bound to the parent address will receive a copy of the message. Messages will be distributed evenly among all of the queues bound to the address. Durable If you select this option, the queue and its messages will be persistent. Filter An optional filter expression. If you specify a filter, only messages that match the filter expression are routed to the queue. Max Consumers The maximum number of consumers that can access the queue at a given time. Purge when no consumers If selected, the queue will be purged when no consumers are connected. Click Create Queue . 4.5.4.4. Checking the status of a queue Charts provide a real-time view of the status of a queue on a broker. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Chart tab. The console displays a chart that shows real-time data for all of the queue attributes. Figure 4.9. Chart tab for a queue Note To view a chart for multiple queues on an address, select the anycast or multicast folder that contains the queues. If necessary, select different criteria for the chart: In the main pane, click Edit . On the Attributes list, select one or more attributes that you want to include in the chart. To select multiple attributes, press and hold the Ctrl key and select each attribute. Click the View Chart button. The chart is updated based on the attributes that you selected. 4.5.4.5. Browsing queues Browsing a queue displays all of the messages in the queue. You can also filter and sort the list to find specific messages. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. Queues are located within the addresses to which they are bound. On the navigation bar in the main pane, click More Browse queue . The messages in the queue are displayed. By default, the first 200 messages are displayed. Figure 4.10. Browse Queue page To browse for a specific message or group of messages, do one of the following: To... Do this... Filter the list of messages In the Filter... text field, enter filter criteria. Click the search (that is, magnifying glass) icon. Sort the list of messages In the list of messages, click a column header. To sort the messages in descending order, click the header a second time. To view the content of a message, click the Show button. You can view the message header, properties, and body. 4.5.4.6. Sending messages to a queue After creating a queue, you can send a message to it. The following procedure outlines the steps required to send a message to an existing queue. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Send message tab.
A page appears for you to compose the message. Figure 4.11. Send Message page for a queue By default, messages are sent using the credentials that you used to log in to AMQ Management Console. If you want to use different credentials, clear the Use current logon user checkbox and specify values in the Username and Password fields, which are displayed after you clear the checkbox. If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . The message is sent. To send additional messages, change any of the information you entered, and click Send message . 4.5.4.7. Resending messages to a queue You can resend previously sent messages. Procedure Browse for the message you want to resend . Click the check box next to the message that you want to resend. Click the Resend button. The message is displayed. Update the message header and body as needed, and then click Send message . 4.5.4.8. Moving messages to a different queue You can move one or more messages in a queue to a different queue. Procedure Browse for the messages you want to move . Click the check box next to each message that you want to move. In the navigation bar, click Move Messages . A confirmation dialog box appears. From the drop-down menu, select the name of the queue to which you want to move the messages. Click Move . 4.5.4.9. Deleting messages or queues You can delete a queue or purge all of the messages from a queue. Procedure Browse for the queue you want to delete or purge . Do one of the following: To... Do this... Delete a message from the queue Click the check box next to each message that you want to delete. Click the Delete button. Purge all messages from the queue On the navigation bar in the main pane, click Delete queue . Click the Purge Queue button. Delete the queue On the navigation bar in the main pane, click Delete queue . Click the Delete Queue button.
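The following command-line sketches complement the console procedures in this chapter. As a quick check of the remote access configured in Section 4.2 and used in Section 4.3, you can query the console's Jolokia endpoint with curl. The credentials, the 192.168.0.49 address, and port 8161 are placeholders taken from the earlier examples, and the Origin header is only required when <strict-checking/> is enabled in jolokia-access.xml.

# Verify from a remote host that the Jolokia endpoint behind AMQ Management Console responds.
curl -u admin:admin \
  -H "Origin: http://192.168.0.49" \
  "http://192.168.0.49:8161/console/jolokia/version"
# A JSON response that includes the Jolokia agent version indicates that remote access is working.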
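The hawtio.http.publicKeyPins property described in Section 4.4.4 expects a base64-encoded SHA-256 hash of the certificate's public key. One way to compute that value with OpenSSL is sketched below; it assumes the console certificate has been exported to a PEM file, such as the broker.crt file created earlier in this chapter.

# Compute the pin-sha256 value for hawtio.http.publicKeyPins from a PEM-encoded certificate.
openssl x509 -in broker.crt -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64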
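As a command-line alternative to the Send message tab described in Sections 4.5.4.2 and 4.5.4.6, the artemis CLI that is included in the broker installation can produce test messages. This is a hedged sketch: the queue name, credentials, and acceptor URL are placeholders, and option names can vary between versions, so check the output of <broker_instance_dir>/bin/artemis help producer for your installation.

# Send 10 test messages to an existing queue from the command line.
<broker_instance_dir>/bin/artemis producer \
  --url tcp://localhost:61616 \
  --user admin --password admin \
  --destination queue://my.queue \
  --message-count 10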
|
[
"<web path=\"web\"> <binding uri=\"http://localhost:8161\"> <app url=\"redhat-branding\" war=\"redhat-branding.war\"/> <app url=\"artemis-plugin\" war=\"artemis-plugin.war\"/> <app url=\"dispatch-hawtio-console\" war=\"dispatch-hawtio-console.war\"/> <app url=\"console\" war=\"console.war\"/> </binding> </web>",
"<web path=\"web\"> <binding uri=\"http://0.0.0.0:8161\">",
"<cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors>",
"-Dhawtio.disableProxy=false",
"-Dhawtio.proxyWhitelist=192.168.0.51",
"http://192.168.0.49/console/jolokia",
"https://broker.example.com:8161/console/*",
"console { org.keycloak.adapters.jaas.BearerTokenLoginModule required keycloak-config-file=\"USD{artemis.instance}/etc/keycloak-bearer-token.json\" role-principal-class=org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal ; };",
"{ \"realm\": \" <realm-name> \", \"resource\": \" <client-name> \", \"auth-server-url\": \" <RHSSO-URL> /auth\", \"principal-attribute\": \"preferred_username\", \"use-resource-role-mappings\": true, \"ssl-required\": \"external\", \"confidential-port\": 0 }",
"{ \"realm\": \"<realm-name>\", \"clientId\": \"<client-name>\", \"url\": \" <RHSSO-URL> /auth\" }",
"<security-setting match=\"Info\"> <permission roles=\"amq\" type=\"createDurableQueue\"/> <permission roles=\"amq\" type=\"deleteDurableQueue\"/> <permission roles=\"amq\" type=\"createNonDurableQueue\"/> <permission roles=\"amq\" type=\"deleteNonDurableQueue\"/> <permission roles=\"guest\" type=\"send\"/> <permission roles=\"amq\" type=\"consume\"/> </security-setting>",
"<web path=\"web\"> <binding uri=\"https://0.0.0.0:8161\" keyStorePath=\"<path_to_keystore>\" keyStorePassword=\"<password>\" clientAuth=\"<true/false>\" trustStorePath=\"<path_to_truststore>\" trustStorePassword=\"<password>\"> </binding> </web>",
"keyStorePath=\" <broker_instance_dir> /etc/keystore.jks\"",
"keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg \"RSA\" -keysize 2048 -dname \"CN=ActiveMQ Broker, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ\" -ext bc=ca:false -ext eku=cA",
"keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -alias client -exportcert -rfc > broker.crt",
"keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file broker.crt -noprompt",
"keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg \"RSA\" -keysize 2048 -dname \"CN=ActiveMQ Client, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ\" -ext bc=ca:false -ext eku=cA",
"keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -alias client -exportcert -rfc > client.crt",
"keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file client.crt -noprompt",
"<web path=\"web\"> <binding uri=\"https://localhost:8161\" keyStorePath=\"USD{artemis.instance}/etc/server-keystore.p12\" keyStorePassword=\"password\" clientAuth=\"true\" trustStorePath=\"USD{artemis.instance}/etc/client-truststore.p12\" trustStorePassword=\"password\"> </binding> </web>",
"keytool -export -file <file_name> -alias broker-localhost -keystore broker.ks -storepass <password>",
"keytool -printcert -file <file_name>",
"Owner: CN=AMQ Client, OU=Artemis, O=AMQ, L=AMQ, ST=AMQ, C=AMQ Issuer: CN=AMQ Client, OU=Artemis, O=AMQ, L=AMQ, ST=AMQ, C=AMQ Serial number: 51461f5d Valid from: Sun Apr 17 12:20:14 IST 2022 until: Sat Jul 16 12:20:14 IST 2022 Certificate fingerprints: SHA1: EC:94:13:16:04:93:57:4F:FD:CA:AD:D8:32:68:A4:13:CC:EA:7A:67 SHA256: 85:7F:D5:4A:69:80:3B:5B:86:27:99:A7:97:B8:E4:E8:7D:6F:D1:53:08:D8:7A:BA:A7:0A:7A:96:F3:6B:98:81",
"Owner: `CN=localhost,\\ OU=broker,\\ O=Unknown,\\ L=Unknown,\\ ST=Unknown,\\ C=Unknown`",
"activemq { org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule debug=true org.apache.activemq.jaas.textfiledn.user=\"artemis-users.properties\" org.apache.activemq.jaas.textfiledn.role=\"artemis-roles.properties\"; };",
"user1=CN=user1,O=Progress,C=US user2=CN=user2,O=Progress,C=US",
"amq=user1, user2",
"hawtio.http.strictTransportSecurity = max-age=31536000; includeSubDomains; preload",
"hawtio.http.publicKeyPins = pin-sha256=\"...\"; max-age=5184000; includeSubDomains",
"<web path=\"web\" customizer=\"org.eclipse.jetty.server.ForwardedRequestCustomizer\"> .. </web>"
] |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/managing_amq_broker/assembly-using-amq-console-managing
|
Chapter 22. DetectionService
|
Chapter 22. DetectionService 22.1. DetectBuildTime POST /v1/detect/build DetectBuildTime checks if any images violate build time policies. 22.1.1. Description 22.1.2. Parameters 22.1.2.1. Body Parameter Name Description Required Default Pattern body V1BuildDetectionRequest X 22.1.3. Return Type V1BuildDetectionResponse 22.1.4. Content Type application/json 22.1.5. Responses Table 22.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1BuildDetectionResponse 0 An unexpected error response. GooglerpcStatus 22.1.6. Samples 22.1.7. Common object reference 22.1.7.1. AlertDeploymentContainer Field Name Required Nullable Type Description Format image StorageContainerImage name String 22.1.7.2. AlertEnforcement Field Name Required Nullable Type Description Format action StorageEnforcementAction UNSET_ENFORCEMENT, SCALE_TO_ZERO_ENFORCEMENT, UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT, KILL_POD_ENFORCEMENT, FAIL_BUILD_ENFORCEMENT, FAIL_KUBE_REQUEST_ENFORCEMENT, FAIL_DEPLOYMENT_CREATE_ENFORCEMENT, FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT, message String 22.1.7.3. AlertEntityType Enum Values UNSET DEPLOYMENT CONTAINER_IMAGE RESOURCE 22.1.7.4. AlertProcessViolation Field Name Required Nullable Type Description Format message String processes List of StorageProcessIndicator 22.1.7.5. AlertResourceResourceType Enum Values UNKNOWN SECRETS CONFIGMAPS CLUSTER_ROLES CLUSTER_ROLE_BINDINGS NETWORK_POLICIES SECURITY_CONTEXT_CONSTRAINTS EGRESS_FIREWALLS 22.1.7.6. AlertViolation Field Name Required Nullable Type Description Format message String keyValueAttrs ViolationKeyValueAttrs networkFlowInfo ViolationNetworkFlowInfo type AlertViolationType GENERIC, K8S_EVENT, NETWORK_FLOW, NETWORK_POLICY, time Date Indicates violation time. This field differs from top-level field 'time' which represents last time the alert occurred in case of multiple occurrences of the policy alert. As of 55.0, this field is set only for kubernetes event violations, but may not be limited to it in future. date-time 22.1.7.7. AlertViolationType Enum Values GENERIC K8S_EVENT NETWORK_FLOW NETWORK_POLICY 22.1.7.8. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 22.1.7.9. KeyValueAttrsKeyValueAttr Field Name Required Nullable Type Description Format key String value String 22.1.7.10. NetworkFlowInfoEntity Field Name Required Nullable Type Description Format name String entityType StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, deploymentNamespace String deploymentType String port Integer int32 22.1.7.11. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 22.1.7.12. ProcessSignalLineageInfo Field Name Required Nullable Type Description Format parentUid Long int64 parentExecFilePath String 22.1.7.13. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 22.1.7.13.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 22.1.7.14. StorageAlert Field Name Required Nullable Type Description Format id String policy StoragePolicy lifecycleStage StorageLifecycleStage DEPLOY, BUILD, RUNTIME, clusterId String clusterName String namespace String namespaceId String deployment StorageAlertDeployment image StorageContainerImage resource StorageAlertResource violations List of AlertViolation For run-time phase alert, a maximum of 40 violations are retained. processViolation AlertProcessViolation enforcement AlertEnforcement time Date date-time firstOccurred Date date-time resolvedAt Date The time at which the alert was resolved. Only set if ViolationState is RESOLVED. date-time state StorageViolationState ACTIVE, SNOOZED, RESOLVED, ATTEMPTED, snoozeTill Date date-time platformComponent Boolean entityType AlertEntityType UNSET, DEPLOYMENT, CONTAINER_IMAGE, RESOURCE, 22.1.7.15. StorageAlertDeployment Field Name Required Nullable Type Description Format id String name String type String namespace String This field has to be duplicated in Alert for scope management and search. namespaceId String This field has to be duplicated in Alert for scope management and search. labels Map of string clusterId String This field has to be duplicated in Alert for scope management and search. 
clusterName String This field has to be duplicated in Alert for scope management and search. containers List of AlertDeploymentContainer annotations Map of string inactive Boolean 22.1.7.16. StorageAlertResource Field Name Required Nullable Type Description Format resourceType AlertResourceResourceType UNKNOWN, SECRETS, CONFIGMAPS, CLUSTER_ROLES, CLUSTER_ROLE_BINDINGS, NETWORK_POLICIES, SECURITY_CONTEXT_CONSTRAINTS, EGRESS_FIREWALLS, name String clusterId String This field has to be duplicated in Alert for scope management and search. clusterName String This field has to be duplicated in Alert for scope management and search. namespace String This field has to be duplicated in Alert for scope management and search. namespaceId String This field has to be duplicated in Alert for scope management and search. 22.1.7.17. StorageBooleanOperator Enum Values OR AND 22.1.7.18. StorageContainerImage Field Name Required Nullable Type Description Format id String name StorageImageName notPullable Boolean isClusterLocal Boolean 22.1.7.19. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 22.1.7.20. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 22.1.7.21. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 22.1.7.22. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 22.1.7.23. StorageExclusionImage Field Name Required Nullable Type Description Format name String 22.1.7.24. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 22.1.7.25. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 22.1.7.26. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 22.1.7.27. StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 22.1.7.28. StoragePolicy Field Name Required Nullable Type Description Format id String name String Name of the policy. Must be unique. description String Free-form text description of this policy. rationale String remediation String Describes how to remediate a violation of this policy. disabled Boolean Toggles whether or not this policy will be executing and actively firing alerts. categories List of string List of categories that this policy falls under. Category names must already exist in Central. lifecycleStages List of StorageLifecycleStage Describes which policy lifecylce stages this policy applies to. 
Choices are DEPLOY, BUILD, and RUNTIME. eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion Define deployments or images that should be excluded from this policy. scope List of StorageScope Defines clusters, namespaces, and deployments that should be included in this policy. No scopes defined includes everything. severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Lists the enforcement actions to take when a violation from this policy is identified. Possible value are UNSET_ENFORCEMENT, SCALE_TO_ZERO_ENFORCEMENT, UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT, KILL_POD_ENFORCEMENT, FAIL_BUILD_ENFORCEMENT, FAIL_KUBE_REQUEST_ENFORCEMENT, FAIL_DEPLOYMENT_CREATE_ENFORCEMENT, and. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT. notifiers List of string List of IDs of the notifiers that should be triggered when a violation from this policy is identified. IDs should be in the form of a UUID and are found through the Central API. lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection PolicySections define the violation criteria for this policy. mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. source StoragePolicySource IMPERATIVE, DECLARATIVE, 22.1.7.29. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String Defines which field on a deployment or image this PolicyGroup evaluates. See https://docs.openshift.com/acs/operating/manage-security-policies.html#policy-criteria_manage-security-policies for a complete list of possible values. booleanOperator StorageBooleanOperator OR, AND, negate Boolean Determines if the evaluation of this PolicyGroup is negated. Default to false. values List of StoragePolicyValue 22.1.7.30. StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup The set of policies groups that make up this section. Each group can be considered an individual criterion. 22.1.7.31. StoragePolicySource Enum Values IMPERATIVE DECLARATIVE 22.1.7.32. StoragePolicyValue Field Name Required Nullable Type Description Format value String 22.1.7.33. StorageProcessIndicator Field Name Required Nullable Type Description Format id String deploymentId String containerName String podId String podUid String signal StorageProcessSignal clusterId String namespace String containerStartTime Date date-time imageId String 22.1.7.34. 
StorageProcessSignal Field Name Required Nullable Type Description Format id String A unique UUID for identifying the message We have this here instead of at the top level because we want to have each message to be self contained. containerId String time Date date-time name String args String execFilePath String pid Long int64 uid Long int64 gid Long int64 lineage List of string scraped Boolean lineageInfo List of ProcessSignalLineageInfo 22.1.7.35. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 22.1.7.36. StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 22.1.7.37. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 22.1.7.38. StorageViolationState Enum Values ACTIVE SNOOZED RESOLVED ATTEMPTED 22.1.7.39. V1BuildDetectionRequest Field Name Required Nullable Type Description Format image StorageContainerImage imageName String noExternalMetadata Boolean sendNotifications Boolean force Boolean policyCategories List of string cluster String Cluster to delegate scan to, may be the cluster's name or ID. 22.1.7.40. V1BuildDetectionResponse Field Name Required Nullable Type Description Format alerts List of StorageAlert 22.1.7.41. ViolationKeyValueAttrs Field Name Required Nullable Type Description Format attrs List of KeyValueAttrsKeyValueAttr 22.1.7.42. ViolationNetworkFlowInfo Field Name Required Nullable Type Description Format protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, source NetworkFlowInfoEntity destination NetworkFlowInfoEntity 22.2. DetectDeployTime POST /v1/detect/deploy DetectDeployTime checks if any deployments violate deploy time policies. 22.2.1. Description 22.2.2. Parameters 22.2.2.1. Body Parameter Name Description Required Default Pattern body V1DeployDetectionRequest X 22.2.3. Return Type V1DeployDetectionResponse 22.2.4. Content Type application/json 22.2.5. Responses Table 22.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1DeployDetectionResponse 0 An unexpected error response. GooglerpcStatus 22.2.6. Samples 22.2.7. Common object reference 22.2.7.1. AlertDeploymentContainer Field Name Required Nullable Type Description Format image StorageContainerImage name String 22.2.7.2. AlertEnforcement Field Name Required Nullable Type Description Format action StorageEnforcementAction UNSET_ENFORCEMENT, SCALE_TO_ZERO_ENFORCEMENT, UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT, KILL_POD_ENFORCEMENT, FAIL_BUILD_ENFORCEMENT, FAIL_KUBE_REQUEST_ENFORCEMENT, FAIL_DEPLOYMENT_CREATE_ENFORCEMENT, FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT, message String 22.2.7.3. AlertEntityType Enum Values UNSET DEPLOYMENT CONTAINER_IMAGE RESOURCE 22.2.7.4. AlertProcessViolation Field Name Required Nullable Type Description Format message String processes List of StorageProcessIndicator 22.2.7.5. AlertResourceResourceType Enum Values UNKNOWN SECRETS CONFIGMAPS CLUSTER_ROLES CLUSTER_ROLE_BINDINGS NETWORK_POLICIES SECURITY_CONTEXT_CONSTRAINTS EGRESS_FIREWALLS 22.2.7.6. AlertViolation Field Name Required Nullable Type Description Format message String keyValueAttrs ViolationKeyValueAttrs networkFlowInfo ViolationNetworkFlowInfo type AlertViolationType GENERIC, K8S_EVENT, NETWORK_FLOW, NETWORK_POLICY, time Date Indicates violation time. 
This field differs from top-level field 'time' which represents last time the alert occurred in case of multiple occurrences of the policy alert. As of 55.0, this field is set only for kubernetes event violations, but may not be limited to it in future. date-time 22.2.7.7. AlertViolationType Enum Values GENERIC K8S_EVENT NETWORK_FLOW NETWORK_POLICY 22.2.7.8. ContainerConfigEnvironmentConfig Field Name Required Nullable Type Description Format key String value String envVarSource EnvironmentConfigEnvVarSource UNSET, RAW, SECRET_KEY, CONFIG_MAP_KEY, FIELD, RESOURCE_FIELD, UNKNOWN, 22.2.7.9. DeployDetectionResponseRun Field Name Required Nullable Type Description Format name String type String alerts List of StorageAlert 22.2.7.10. EnvironmentConfigEnvVarSource Enum Values UNSET RAW SECRET_KEY CONFIG_MAP_KEY FIELD RESOURCE_FIELD UNKNOWN 22.2.7.11. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 22.2.7.12. KeyValueAttrsKeyValueAttr Field Name Required Nullable Type Description Format key String value String 22.2.7.13. NetworkFlowInfoEntity Field Name Required Nullable Type Description Format name String entityType StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, deploymentNamespace String deploymentType String port Integer int32 22.2.7.14. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 22.2.7.15. PortConfigExposureInfo Field Name Required Nullable Type Description Format level PortConfigExposureLevel UNSET, EXTERNAL, NODE, INTERNAL, HOST, ROUTE, serviceName String serviceId String serviceClusterIp String servicePort Integer int32 nodePort Integer int32 externalIps List of string externalHostnames List of string 22.2.7.16. PortConfigExposureLevel Enum Values UNSET EXTERNAL NODE INTERNAL HOST ROUTE 22.2.7.17. ProcessSignalLineageInfo Field Name Required Nullable Type Description Format parentUid Long int64 parentExecFilePath String 22.2.7.18. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 22.2.7.18.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). 
The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 22.2.7.19. SeccompProfileProfileType Enum Values UNCONFINED RUNTIME_DEFAULT LOCALHOST 22.2.7.20. SecurityContextSELinux Field Name Required Nullable Type Description Format user String role String type String level String 22.2.7.21. SecurityContextSeccompProfile Field Name Required Nullable Type Description Format type SeccompProfileProfileType UNCONFINED, RUNTIME_DEFAULT, LOCALHOST, localhostProfile String 22.2.7.22. StorageAlert Field Name Required Nullable Type Description Format id String policy StoragePolicy lifecycleStage StorageLifecycleStage DEPLOY, BUILD, RUNTIME, clusterId String clusterName String namespace String namespaceId String deployment StorageAlertDeployment image StorageContainerImage resource StorageAlertResource violations List of AlertViolation For run-time phase alert, a maximum of 40 violations are retained. processViolation AlertProcessViolation enforcement AlertEnforcement time Date date-time firstOccurred Date date-time resolvedAt Date The time at which the alert was resolved. Only set if ViolationState is RESOLVED. date-time state StorageViolationState ACTIVE, SNOOZED, RESOLVED, ATTEMPTED, snoozeTill Date date-time platformComponent Boolean entityType AlertEntityType UNSET, DEPLOYMENT, CONTAINER_IMAGE, RESOURCE, 22.2.7.23. StorageAlertDeployment Field Name Required Nullable Type Description Format id String name String type String namespace String This field has to be duplicated in Alert for scope management and search. namespaceId String This field has to be duplicated in Alert for scope management and search. labels Map of string clusterId String This field has to be duplicated in Alert for scope management and search. clusterName String This field has to be duplicated in Alert for scope management and search. containers List of AlertDeploymentContainer annotations Map of string inactive Boolean 22.2.7.24. StorageAlertResource Field Name Required Nullable Type Description Format resourceType AlertResourceResourceType UNKNOWN, SECRETS, CONFIGMAPS, CLUSTER_ROLES, CLUSTER_ROLE_BINDINGS, NETWORK_POLICIES, SECURITY_CONTEXT_CONSTRAINTS, EGRESS_FIREWALLS, name String clusterId String This field has to be duplicated in Alert for scope management and search. clusterName String This field has to be duplicated in Alert for scope management and search. namespace String This field has to be duplicated in Alert for scope management and search. 
namespaceId String This field has to be duplicated in Alert for scope management and search. 22.2.7.25. StorageBooleanOperator Enum Values OR AND 22.2.7.26. StorageContainer Field Name Required Nullable Type Description Format id String config StorageContainerConfig image StorageContainerImage securityContext StorageSecurityContext volumes List of StorageVolume ports List of StoragePortConfig secrets List of StorageEmbeddedSecret resources StorageResources name String livenessProbe StorageLivenessProbe readinessProbe StorageReadinessProbe 22.2.7.27. StorageContainerConfig Field Name Required Nullable Type Description Format env List of ContainerConfigEnvironmentConfig command List of string args List of string directory String user String uid String int64 appArmorProfile String 22.2.7.28. StorageContainerImage Field Name Required Nullable Type Description Format id String name StorageImageName notPullable Boolean isClusterLocal Boolean 22.2.7.29. StorageDeployment Field Name Required Nullable Type Description Format id String name String hash String uint64 type String namespace String namespaceId String orchestratorComponent Boolean replicas String int64 labels Map of string podLabels Map of string labelSelector StorageLabelSelector created Date date-time clusterId String clusterName String containers List of StorageContainer annotations Map of string priority String int64 inactive Boolean imagePullSecrets List of string serviceAccount String serviceAccountPermissionLevel StoragePermissionLevel UNSET, NONE, DEFAULT, ELEVATED_IN_NAMESPACE, ELEVATED_CLUSTER_WIDE, CLUSTER_ADMIN, automountServiceAccountToken Boolean hostNetwork Boolean hostPid Boolean hostIpc Boolean runtimeClass String tolerations List of StorageToleration ports List of StoragePortConfig stateTimestamp String int64 riskScore Float float platformComponent Boolean 22.2.7.30. StorageEmbeddedSecret Field Name Required Nullable Type Description Format name String path String 22.2.7.31. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 22.2.7.32. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 22.2.7.33. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 22.2.7.34. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 22.2.7.35. StorageExclusionImage Field Name Required Nullable Type Description Format name String 22.2.7.36. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 22.2.7.37. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 22.2.7.38. 
StorageLabelSelector available tag: 3 Field Name Required Nullable Type Description Format matchLabels Map of string This is actually a oneof, but we can't make it one due to backwards compatibility constraints. requirements List of StorageLabelSelectorRequirement 22.2.7.39. StorageLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 22.2.7.40. StorageLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 22.2.7.41. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 22.2.7.42. StorageLivenessProbe Field Name Required Nullable Type Description Format defined Boolean 22.2.7.43. StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 22.2.7.44. StoragePermissionLevel Enum Values UNSET NONE DEFAULT ELEVATED_IN_NAMESPACE ELEVATED_CLUSTER_WIDE CLUSTER_ADMIN 22.2.7.45. StoragePolicy Field Name Required Nullable Type Description Format id String name String Name of the policy. Must be unique. description String Free-form text description of this policy. rationale String remediation String Describes how to remediate a violation of this policy. disabled Boolean Toggles whether or not this policy will be executing and actively firing alerts. categories List of string List of categories that this policy falls under. Category names must already exist in Central. lifecycleStages List of StorageLifecycleStage Describes which policy lifecylce stages this policy applies to. Choices are DEPLOY, BUILD, and RUNTIME. eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion Define deployments or images that should be excluded from this policy. scope List of StorageScope Defines clusters, namespaces, and deployments that should be included in this policy. No scopes defined includes everything. severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Lists the enforcement actions to take when a violation from this policy is identified. Possible value are UNSET_ENFORCEMENT, SCALE_TO_ZERO_ENFORCEMENT, UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT, KILL_POD_ENFORCEMENT, FAIL_BUILD_ENFORCEMENT, FAIL_KUBE_REQUEST_ENFORCEMENT, FAIL_DEPLOYMENT_CREATE_ENFORCEMENT, and. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT. notifiers List of string List of IDs of the notifiers that should be triggered when a violation from this policy is identified. IDs should be in the form of a UUID and are found through the Central API. lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection PolicySections define the violation criteria for this policy. 
mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. source StoragePolicySource IMPERATIVE, DECLARATIVE, 22.2.7.46. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String Defines which field on a deployment or image this PolicyGroup evaluates. See https://docs.openshift.com/acs/operating/manage-security-policies.html#policy-criteria_manage-security-policies for a complete list of possible values. booleanOperator StorageBooleanOperator OR, AND, negate Boolean Determines if the evaluation of this PolicyGroup is negated. Default to false. values List of StoragePolicyValue 22.2.7.47. StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup The set of policies groups that make up this section. Each group can be considered an individual criterion. 22.2.7.48. StoragePolicySource Enum Values IMPERATIVE DECLARATIVE 22.2.7.49. StoragePolicyValue Field Name Required Nullable Type Description Format value String 22.2.7.50. StoragePortConfig Field Name Required Nullable Type Description Format name String containerPort Integer int32 protocol String exposure PortConfigExposureLevel UNSET, EXTERNAL, NODE, INTERNAL, HOST, ROUTE, exposedPort Integer int32 exposureInfos List of PortConfigExposureInfo 22.2.7.51. StorageProcessIndicator Field Name Required Nullable Type Description Format id String deploymentId String containerName String podId String podUid String signal StorageProcessSignal clusterId String namespace String containerStartTime Date date-time imageId String 22.2.7.52. StorageProcessSignal Field Name Required Nullable Type Description Format id String A unique UUID for identifying the message We have this here instead of at the top level because we want to have each message to be self contained. containerId String time Date date-time name String args String execFilePath String pid Long int64 uid Long int64 gid Long int64 lineage List of string scraped Boolean lineageInfo List of ProcessSignalLineageInfo 22.2.7.53. StorageReadinessProbe Field Name Required Nullable Type Description Format defined Boolean 22.2.7.54. StorageResources Field Name Required Nullable Type Description Format cpuCoresRequest Float float cpuCoresLimit Float float memoryMbRequest Float float memoryMbLimit Float float 22.2.7.55. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 22.2.7.56. StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 22.2.7.57. StorageSecurityContext Field Name Required Nullable Type Description Format privileged Boolean selinux SecurityContextSELinux dropCapabilities List of string addCapabilities List of string readOnlyRootFilesystem Boolean seccompProfile SecurityContextSeccompProfile allowPrivilegeEscalation Boolean 22.2.7.58. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 22.2.7.59. StorageTaintEffect Enum Values UNKNOWN_TAINT_EFFECT NO_SCHEDULE_TAINT_EFFECT PREFER_NO_SCHEDULE_TAINT_EFFECT NO_EXECUTE_TAINT_EFFECT 22.2.7.60. 
StorageToleration Field Name Required Nullable Type Description Format key String operator StorageTolerationOperator TOLERATION_OPERATION_UNKNOWN, TOLERATION_OPERATOR_EXISTS, TOLERATION_OPERATOR_EQUAL, value String taintEffect StorageTaintEffect UNKNOWN_TAINT_EFFECT, NO_SCHEDULE_TAINT_EFFECT, PREFER_NO_SCHEDULE_TAINT_EFFECT, NO_EXECUTE_TAINT_EFFECT, 22.2.7.61. StorageTolerationOperator Enum Values TOLERATION_OPERATION_UNKNOWN TOLERATION_OPERATOR_EXISTS TOLERATION_OPERATOR_EQUAL 22.2.7.62. StorageViolationState Enum Values ACTIVE SNOOZED RESOLVED ATTEMPTED 22.2.7.63. StorageVolume Field Name Required Nullable Type Description Format name String source String destination String readOnly Boolean type String mountPropagation VolumeMountPropagation NONE, HOST_TO_CONTAINER, BIDIRECTIONAL, 22.2.7.64. V1DeployDetectionRemark Field Name Required Nullable Type Description Format name String permissionLevel String appliedNetworkPolicies List of string 22.2.7.65. V1DeployDetectionRequest Field Name Required Nullable Type Description Format deployment StorageDeployment noExternalMetadata Boolean enforcementOnly Boolean clusterId String 22.2.7.66. V1DeployDetectionResponse Field Name Required Nullable Type Description Format runs List of DeployDetectionResponseRun ignoredObjectRefs List of string The reference will be in the format: namespace/name[<group>/<version>, Kind=<kind>]. remarks List of V1DeployDetectionRemark 22.2.7.67. ViolationKeyValueAttrs Field Name Required Nullable Type Description Format attrs List of KeyValueAttrsKeyValueAttr 22.2.7.68. ViolationNetworkFlowInfo Field Name Required Nullable Type Description Format protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, source NetworkFlowInfoEntity destination NetworkFlowInfoEntity 22.2.7.69. VolumeMountPropagation Enum Values NONE HOST_TO_CONTAINER BIDIRECTIONAL 22.3. DetectDeployTimeFromYAML POST /v1/detect/deploy/yaml DetectDeployTimeFromYAML checks if the given deployment yaml violates any deploy time policies. 22.3.1. Description 22.3.2. Parameters 22.3.2.1. Body Parameter Name Description Required Default Pattern body V1DeployYAMLDetectionRequest X 22.3.3. Return Type V1DeployDetectionResponse 22.3.4. Content Type application/json 22.3.5. Responses Table 22.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1DeployDetectionResponse 0 An unexpected error response. GooglerpcStatus 22.3.6. Samples 22.3.7. Common object reference 22.3.7.1. AlertDeploymentContainer Field Name Required Nullable Type Description Format image StorageContainerImage name String 22.3.7.2. AlertEnforcement Field Name Required Nullable Type Description Format action StorageEnforcementAction UNSET_ENFORCEMENT, SCALE_TO_ZERO_ENFORCEMENT, UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT, KILL_POD_ENFORCEMENT, FAIL_BUILD_ENFORCEMENT, FAIL_KUBE_REQUEST_ENFORCEMENT, FAIL_DEPLOYMENT_CREATE_ENFORCEMENT, FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT, message String 22.3.7.3. AlertEntityType Enum Values UNSET DEPLOYMENT CONTAINER_IMAGE RESOURCE 22.3.7.4. AlertProcessViolation Field Name Required Nullable Type Description Format message String processes List of StorageProcessIndicator 22.3.7.5. AlertResourceResourceType Enum Values UNKNOWN SECRETS CONFIGMAPS CLUSTER_ROLES CLUSTER_ROLE_BINDINGS NETWORK_POLICIES SECURITY_CONTEXT_CONSTRAINTS EGRESS_FIREWALLS 22.3.7.6. 
AlertViolation Field Name Required Nullable Type Description Format message String keyValueAttrs ViolationKeyValueAttrs networkFlowInfo ViolationNetworkFlowInfo type AlertViolationType GENERIC, K8S_EVENT, NETWORK_FLOW, NETWORK_POLICY, time Date Indicates violation time. This field differs from top-level field 'time' which represents last time the alert occurred in case of multiple occurrences of the policy alert. As of 55.0, this field is set only for kubernetes event violations, but may not be limited to it in future. date-time 22.3.7.7. AlertViolationType Enum Values GENERIC K8S_EVENT NETWORK_FLOW NETWORK_POLICY 22.3.7.8. DeployDetectionResponseRun Field Name Required Nullable Type Description Format name String type String alerts List of StorageAlert 22.3.7.9. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 22.3.7.10. KeyValueAttrsKeyValueAttr Field Name Required Nullable Type Description Format key String value String 22.3.7.11. NetworkFlowInfoEntity Field Name Required Nullable Type Description Format name String entityType StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, deploymentNamespace String deploymentType String port Integer int32 22.3.7.12. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 22.3.7.13. ProcessSignalLineageInfo Field Name Required Nullable Type Description Format parentUid Long int64 parentExecFilePath String 22.3.7.14. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 22.3.7.14.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. 
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 22.3.7.15. StorageAlert Field Name Required Nullable Type Description Format id String policy StoragePolicy lifecycleStage StorageLifecycleStage DEPLOY, BUILD, RUNTIME, clusterId String clusterName String namespace String namespaceId String deployment StorageAlertDeployment image StorageContainerImage resource StorageAlertResource violations List of AlertViolation For run-time phase alert, a maximum of 40 violations are retained. processViolation AlertProcessViolation enforcement AlertEnforcement time Date date-time firstOccurred Date date-time resolvedAt Date The time at which the alert was resolved. Only set if ViolationState is RESOLVED. date-time state StorageViolationState ACTIVE, SNOOZED, RESOLVED, ATTEMPTED, snoozeTill Date date-time platformComponent Boolean entityType AlertEntityType UNSET, DEPLOYMENT, CONTAINER_IMAGE, RESOURCE, 22.3.7.16. StorageAlertDeployment Field Name Required Nullable Type Description Format id String name String type String namespace String This field has to be duplicated in Alert for scope management and search. namespaceId String This field has to be duplicated in Alert for scope management and search. labels Map of string clusterId String This field has to be duplicated in Alert for scope management and search. clusterName String This field has to be duplicated in Alert for scope management and search. containers List of AlertDeploymentContainer annotations Map of string inactive Boolean 22.3.7.17. StorageAlertResource Field Name Required Nullable Type Description Format resourceType AlertResourceResourceType UNKNOWN, SECRETS, CONFIGMAPS, CLUSTER_ROLES, CLUSTER_ROLE_BINDINGS, NETWORK_POLICIES, SECURITY_CONTEXT_CONSTRAINTS, EGRESS_FIREWALLS, name String clusterId String This field has to be duplicated in Alert for scope management and search. clusterName String This field has to be duplicated in Alert for scope management and search. namespace String This field has to be duplicated in Alert for scope management and search. namespaceId String This field has to be duplicated in Alert for scope management and search. 22.3.7.18. StorageBooleanOperator Enum Values OR AND 22.3.7.19. StorageContainerImage Field Name Required Nullable Type Description Format id String name StorageImageName notPullable Boolean isClusterLocal Boolean 22.3.7.20. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. 
Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 22.3.7.21. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 22.3.7.22. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 22.3.7.23. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 22.3.7.24. StorageExclusionImage Field Name Required Nullable Type Description Format name String 22.3.7.25. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 22.3.7.26. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 22.3.7.27. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 22.3.7.28. StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 22.3.7.29. StoragePolicy Field Name Required Nullable Type Description Format id String name String Name of the policy. Must be unique. description String Free-form text description of this policy. rationale String remediation String Describes how to remediate a violation of this policy. disabled Boolean Toggles whether or not this policy will be executing and actively firing alerts. categories List of string List of categories that this policy falls under. Category names must already exist in Central. lifecycleStages List of StorageLifecycleStage Describes which policy lifecylce stages this policy applies to. Choices are DEPLOY, BUILD, and RUNTIME. eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion Define deployments or images that should be excluded from this policy. scope List of StorageScope Defines clusters, namespaces, and deployments that should be included in this policy. No scopes defined includes everything. severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Lists the enforcement actions to take when a violation from this policy is identified. Possible value are UNSET_ENFORCEMENT, SCALE_TO_ZERO_ENFORCEMENT, UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT, KILL_POD_ENFORCEMENT, FAIL_BUILD_ENFORCEMENT, FAIL_KUBE_REQUEST_ENFORCEMENT, FAIL_DEPLOYMENT_CREATE_ENFORCEMENT, and. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT. notifiers List of string List of IDs of the notifiers that should be triggered when a violation from this policy is identified. IDs should be in the form of a UUID and are found through the Central API. lastUpdated Date date-time SORTName String For internal use only. 
SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection PolicySections define the violation criteria for this policy. mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. source StoragePolicySource IMPERATIVE, DECLARATIVE, 22.3.7.30. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String Defines which field on a deployment or image this PolicyGroup evaluates. See https://docs.openshift.com/acs/operating/manage-security-policies.html#policy-criteria_manage-security-policies for a complete list of possible values. booleanOperator StorageBooleanOperator OR, AND, negate Boolean Determines if the evaluation of this PolicyGroup is negated. Default to false. values List of StoragePolicyValue 22.3.7.31. StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup The set of policies groups that make up this section. Each group can be considered an individual criterion. 22.3.7.32. StoragePolicySource Enum Values IMPERATIVE DECLARATIVE 22.3.7.33. StoragePolicyValue Field Name Required Nullable Type Description Format value String 22.3.7.34. StorageProcessIndicator Field Name Required Nullable Type Description Format id String deploymentId String containerName String podId String podUid String signal StorageProcessSignal clusterId String namespace String containerStartTime Date date-time imageId String 22.3.7.35. StorageProcessSignal Field Name Required Nullable Type Description Format id String A unique UUID for identifying the message We have this here instead of at the top level because we want to have each message to be self contained. containerId String time Date date-time name String args String execFilePath String pid Long int64 uid Long int64 gid Long int64 lineage List of string scraped Boolean lineageInfo List of ProcessSignalLineageInfo 22.3.7.36. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 22.3.7.37. StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 22.3.7.38. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 22.3.7.39. StorageViolationState Enum Values ACTIVE SNOOZED RESOLVED ATTEMPTED 22.3.7.40. V1DeployDetectionRemark Field Name Required Nullable Type Description Format name String permissionLevel String appliedNetworkPolicies List of string 22.3.7.41. V1DeployDetectionResponse Field Name Required Nullable Type Description Format runs List of DeployDetectionResponseRun ignoredObjectRefs List of string The reference will be in the format: namespace/name[<group>/<version>, Kind=<kind>]. remarks List of V1DeployDetectionRemark 22.3.7.42. V1DeployYAMLDetectionRequest Field Name Required Nullable Type Description Format yaml String noExternalMetadata Boolean enforcementOnly Boolean force Boolean policyCategories List of string cluster String Cluster to delegate scan to, may be the cluster's name or ID. namespace String 22.3.7.43. 
ViolationKeyValueAttrs Field Name Required Nullable Type Description Format attrs List of KeyValueAttrsKeyValueAttr 22.3.7.44. ViolationNetworkFlowInfo Field Name Required Nullable Type Description Format protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, source NetworkFlowInfoEntity destination NetworkFlowInfoEntity
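The schemas above can be exercised directly against Central. The following is a minimal sketch of a DetectDeployTimeFromYAML call with curl; the Central address, API token, and deployment manifest are placeholders, and the Bearer-token Authorization header is an assumption based on how the Central API is typically authenticated:

USD curl -k -X POST \
  -H "Authorization: Bearer <api_token>" \
  -H "Content-Type: application/json" \
  -d '{"yaml": "<deployment_manifest_yaml>", "noExternalMetadata": true}' \
  https://<central_address>/v1/detect/deploy/yaml

A successful response is a V1DeployDetectionResponse; its runs list carries any StorageAlert objects raised by deploy-time policies.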
|
[
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 24",
"Represents an alert on a kubernetes resource other than a deployment (configmaps, secrets, etc.)",
"Next tag: 12",
"Next tag: 28",
"Next available tag: 13",
"For any update to EnvVarSource, please also update 'ui/src/messages/common.js'",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 24",
"Represents an alert on a kubernetes resource other than a deployment (configmaps, secrets, etc.)",
"Next tag: 12",
"Next available tag: 36",
"Label selector components are joined with logical AND, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/",
"Next available tag: 4",
"For any update to PermissionLevel, also update: - pkg/searchbasedpolicies/builders/k8s_rbac.go - ui/src/messages/common.js",
"Next tag: 28",
"Next Available Tag: 6",
"Next available tag: 13",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 24",
"Represents an alert on a kubernetes resource other than a deployment (configmaps, secrets, etc.)",
"Next tag: 12",
"Next tag: 28",
"Next available tag: 13"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/detectionservice
|
Chapter 17. Accessing odf-console with the ovs-multitenant plugin by manually enabling global pod networking
|
Chapter 17. Accessing odf-console with the ovs-multitenant plugin by manually enabling global pod networking In OpenShift Container Platform, when the ovs-multitenant plugin is used for software-defined networking (SDN), pods from different projects cannot send packets to or receive packets from pods and services of a different project. By default, pods cannot communicate between namespaces or projects because a project's pod networking is not global. To access odf-console, the OpenShift console pod in the openshift-console namespace needs to connect with the OpenShift Data Foundation odf-console in the openshift-storage namespace. This is possible only when you manually enable global pod networking. Issue When the `ovs-multitenant` plugin is used in OpenShift Container Platform, the odf-console plugin fails with the following message: Resolution Make the pod networking for the OpenShift Data Foundation project global:
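The command this step refers to is shown below, together with an optional check. The netnamespace check is an assumption based on how the ovs-multitenant plugin tracks network IDs (a NETID of 0 indicates a global project):

USD oc adm pod-network make-projects-global openshift-storage
USD oc get netnamespace openshift-storage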
|
[
"GET request for \"odf-console\" plugin failed: Get \"https://odf-console-service.openshift-storage.svc.cluster.local:9001/locales/en/plugin__odf-console.json\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)",
"oc adm pod-network make-projects-global openshift-storage"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/troubleshooting_openshift_data_foundation/accessing-odf-console-with-ovs-multitenant-plugin-by-manually-enabling-global-pod-networking_rhodf
|
Chapter 6. Red Hat Quay repository overview
|
Chapter 6. Red Hat Quay repository overview A repository provides a central location for storing a related set of container images. These images can be used to build applications along with their dependencies in a standardized format. Repositories are organized by namespaces. Each namespace can have multiple repositories. For example, you might have a namespace for your personal projects, one for your company, or one for a specific team within your organization. With a paid plan, Quay.io provides users with access controls for their repositories. Users can make a repository public, meaning that anyone can pull, or download, the images from it, or users can make it private, restricting access to authorized users or teams. Note The free tier of Quay.io does not allow for private repositories. You must upgrade to a paid tier of Quay.io to create a private repository. For more information, see "Information about Quay.io pricing". There are two ways to create a repository in Quay.io: by pushing an image with the relevant podman command, or by using the Quay.io UI. You can also use the UI to delete a repository. If you push an image through the command-line interface (CLI) without first creating a repository on the UI, the created repository is set to Private , regardless of the plan you have. Note It is recommended that you create a repository on the Quay.io UI before pushing an image. Quay.io checks the plan status and does not allow creation of a private repository if a plan is not active. 6.1. Creating a repository by using the UI Use the following procedure to create a repository using the Quay.io v2 UI. Procedure Click Repositories on the navigation pane. Click Create Repository . Select a namespace, for example, quayadmin , and then enter a Repository name , for example, testrepo . Important Do not use the following words in your repository name: * build * trigger * tag * notification When these words are used for repository names, users are unable to access the repository and are unable to permanently delete it. Attempting to delete these repositories returns the following error: Failed to delete repository <repository_name>, HTTP404 - Not Found. Click Create . Now, your example repository should populate under the Repositories page. Optional. Click Settings Repository visibility Make private to set the repository to private. 6.2. Creating a repository by using Podman With the proper credentials, you can use Podman to push an image to a repository that does not yet exist in your Quay.io instance. Pushing an image refers to the process of uploading a container image from your local system or development environment to a container registry like Quay.io. After pushing an image to your registry, a repository is created. If you push an image through the command-line interface (CLI) without first creating a repository on the UI, the created repository is set to Private , regardless of the plan you have. Note It is recommended that you create a repository on the Quay.io UI before pushing an image. Quay.io checks the plan status and does not allow creation of a private repository if a plan is not active. Use the following procedure to create an image repository by pushing an image. Prerequisites You have downloaded and installed the podman CLI. 
You have logged into your registry. You have pulled an image, for example, busybox. Procedure Pull a sample image from an example registry. For example: USD podman pull busybox Example output Trying to pull docker.io/library/busybox... Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9 Tag the image on your local system with the new repository and image name. For example: USD podman tag docker.io/library/busybox quay.io/quayadmin/busybox:test Push the image to the registry. Following this step, you can use your browser to see the tagged image in your repository. USD podman push --tls-verify=false quay.io/quayadmin/busybox:test Example output Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 6.3. Deleting a repository by using the UI You can delete a repository directly on the UI. Prerequisites You have created a repository. Procedure On the Repositories page of the v2 UI, check the box of the repository that you want to delete, for example, quayadmin/busybox . Click the Actions drop-down menu. Click Delete . Type confirm in the box, and then click Delete . After deletion, you are returned to the Repositories page. 6.4. User settings The User Settings page provides users a way to set their email address, password, account type, set up desktop notifications, select an avatar, delete an account, adjust the time machine setting, and view billing information. 6.4.1. Navigating to the User Settings page Use the following procedure to navigate to the User Settings page. Procedure On Quay.io, click your username in the header. Select Account Settings . You are redirected to the User Settings page. 6.4.2. Adjusting user settings Use the following procedure to adjust user settings. Procedure To change your email address, select the current email address for Email Address . In the pop-up window, enter a new email address, then click Change Email . A verification email will be sent before the change is applied. To change your password, click Change password . Enter the new password in both boxes, then click Change Password . Change the account type by clicking Individual Account , or the option next to Account Type . In some cases, you might have to leave an organization prior to changing the account type. Adjust your desktop notifications by clicking the option next to Desktop Notifications . Users can either enable or disable this feature. You can delete an account by clicking Begin deletion . You cannot delete an account if you have an active plan, or if you are a member of an organization where you are the only administrator. You must confirm deletion by entering the namespace. Important Deleting an account is not reversible and will delete all of the account's data including repositories, created build triggers, and notifications. You can set the time machine feature by clicking the drop-down box next to Time Machine . This feature dictates the amount of time that a deleted tag remains accessible in the time machine before being garbage collected. After selecting a time, click Save Expiration Time . 6.4.3. Billing information You can view billing information on the User Settings page. In this section, the following information is available: Current Plan . This section denotes the current Quay.io plan that you are signed up for. 
It also shows the number of private repositories you have. Invoices . If you are on a paid plan, you can click View Invoices to view a list of invoices. Receipts . If you are on a paid plan, you can select whether to have receipts for payment emailed to you, another user, or to opt out of receipts altogether.
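The procedure in Section 6.2 assumes that you are already logged in to the registry. A minimal sketch of that step is shown below; the username is a placeholder and Podman prompts for the password:

USD podman login -u <username> quay.io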
|
[
"podman pull busybox",
"Trying to pull docker.io/library/busybox Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9",
"podman tag docker.io/library/busybox quay.io/quayadmin/busybox:test",
"podman push --tls-verify=false quay.io/quayadmin/busybox:test",
"Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/about_quay_io/use-quay-create-repo
|
Chapter 3. API index
|
Chapter 3. API index API API group APIService apiregistration.k8s.io/v1 Binding v1 CertificateSigningRequest certificates.k8s.io/v1 ClusterRole rbac.authorization.k8s.io/v1 ClusterRoleBinding rbac.authorization.k8s.io/v1 ComponentStatus v1 ConfigMap v1 ControllerRevision apps/v1 CronJob batch/v1 CSIDriver storage.k8s.io/v1 CSINode storage.k8s.io/v1 CSIStorageCapacity storage.k8s.io/v1 CustomResourceDefinition apiextensions.k8s.io/v1 DaemonSet apps/v1 Deployment apps/v1 Endpoints v1 EndpointSlice discovery.k8s.io/v1 Event v1 Event events.k8s.io/v1 Eviction policy/v1 FlowSchema flowcontrol.apiserver.k8s.io/v1beta3 HorizontalPodAutoscaler autoscaling/v2 Ingress networking.k8s.io/v1 IngressClass networking.k8s.io/v1 Job batch/v1 Lease coordination.k8s.io/v1 LimitRange v1 LocalSubjectAccessReview authorization.k8s.io/v1 LogicalVolume topolvm.io/v1 MutatingWebhookConfiguration admissionregistration.k8s.io/v1 Namespace v1 NetworkPolicy networking.k8s.io/v1 Node v1 PersistentVolume v1 PersistentVolumeClaim v1 Pod v1 PodDisruptionBudget policy/v1 PodTemplate v1 PriorityClass scheduling.k8s.io/v1 PriorityLevelConfiguration flowcontrol.apiserver.k8s.io/v1beta3 RangeAllocation security.internal.openshift.io/v1 ReplicaSet apps/v1 ReplicationController v1 ResourceQuota v1 Role rbac.authorization.k8s.io/v1 RoleBinding rbac.authorization.k8s.io/v1 Route route.openshift.io/v1 RuntimeClass node.k8s.io/v1 Scale autoscaling/v1 Secret v1 SecurityContextConstraints security.openshift.io/v1 SelfSubjectAccessReview authorization.k8s.io/v1 SelfSubjectRulesReview authorization.k8s.io/v1 Service v1 ServiceAccount v1 StatefulSet apps/v1 StorageClass storage.k8s.io/v1 StorageVersionMigration migration.k8s.io/v1alpha1 SubjectAccessReview authorization.k8s.io/v1 TokenRequest authentication.k8s.io/v1 TokenReview authentication.k8s.io/v1 ValidatingWebhookConfiguration admissionregistration.k8s.io/v1 VolumeAttachment storage.k8s.io/v1 VolumeSnapshot snapshot.storage.k8s.io/v1 VolumeSnapshotClass snapshot.storage.k8s.io/v1 VolumeSnapshotContent snapshot.storage.k8s.io/v1
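The index above is reference material only. As a convenience, the following standard oc subcommands can be used to explore any of the listed APIs on a running cluster; the route.openshift.io group is taken from the table and is only an example:

USD oc api-resources --api-group=route.openshift.io
USD oc explain route.spec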
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/api-index
|
Chapter 135. YAML DSL
|
Chapter 135. YAML DSL Since Camel 3.9 The YAML DSL provides the capability to define your Camel routes, route templates & REST DSL configuration in YAML. 135.1. Defining a route A route is a collection of elements defined as follows: - from: 1 uri: "direct:start" steps: 2 - filter: expression: simple: "USD{in.header.continue} == true" steps: - to: uri: "log:filtered" - to: uri: "log:original" Where, 1 Route entry point, by default from and rest are supported. 2 Processing steps Note Each step represents a YAML map that has a single entry where the field name is the EIP name. As a general rule, each step provides all the parameters the related definition declares, but there are some minor differences/enhancements: Output Aware Steps Some steps, such as filter and split , have their own pipeline when an exchange matches the filter expression or for the items generated by the split expression. You can define these pipelines in the steps field: filter: expression: simple: "USD{in.header.continue} == true" steps: - to: uri: "log:filtered" Expression Aware Steps Some EIPs, such as filter and split , support the definition of an expression through the expression field: Explicit Expression field filter: expression: simple: "USD{in.header.continue} == true" To make the DSL less verbose, you can omit the expression field. Implicit Expression field filter: simple: "USD{in.header.continue} == true" In general, expressions can be defined inline, such as within the examples above, but if you need to provide more information, you can 'unroll' the expression definition and configure any single parameter the expression defines. Full Expression definition filter: tokenize: token: "<" end-token: ">" Data Format Aware Steps The EIPs marshal and unmarshal support the definition of data formats: marshal: json: library: Gson Note In case you want to use the data-format's default settings, you need to place an empty block as data format parameters, like json: {} 135.2. Defining endpoints To define an endpoint with the YAML DSL you have two options: Using a classic Camel URI: - from: uri: "timer:tick?period=1s" steps: - to: uri: "telegram:bots?authorizationToken=XXX" Using URI and parameters: - from: uri: "timer://tick" parameters: period: "1s" steps: - to: uri: "telegram:bots" parameters: authorizationToken: "XXX" 135.3. Defining beans In addition to the general support for creating beans provided by Camel Main , the YAML DSL provides a convenient syntax to define and configure them: - beans: - name: beanFromMap 1 type: com.acme.MyBean 2 properties: 3 foo: bar Where, 1 The name of the bean, which will bind the instance to the Camel Registry. 2 The fully qualified class name of the bean 3 The properties of the bean to be set The properties of the bean can be defined using either a map or properties style, as shown in the example below: - beans: # map style - name: beanFromMap type: com.acme.MyBean properties: field1: 'f1' field2: 'f2' nested: field1: 'nf1' field2: 'nf2' # properties style - name: beanFromProps type: com.acme.MyBean properties: field1: 'f1_p' field2: 'f2_p' nested.field1: 'nf1_p' nested.field2: 'nf2_p' Note The beans element is only used as a root element. 135.4. Configuring Options Camel components are configured on two levels: Component level Endpoint level 135.4.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. 
For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 135.4.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 135.5. Configuring Options on languages Some languages have additional configurations that you may need to use. For example, the JSONPath can be configured to ignore JSON parsing errors. This is intended when you use a Content Based Router and want to route the message to different endpoints. The JSON payload of the message can be in different forms, meaning that the JSonPath expressions in some cases would fail with an exception, and other times not. In this situation you must set suppress-exception to true, as shown below: - from: uri: "direct:start" steps: - choice: when: - jsonpath: expression: "person.middlename" suppress-exceptions: true steps: - to: "mock:middle" - jsonpath: expression: "person.lastname" suppress-exceptions: true steps: - to: "mock:last" otherwise: steps: - to: "mock:other" In the route above, the following message would have failed the JSonPath expression person.middlename because the JSON payload does not have a middlename field. To remedy this, we have suppressed the exception. { "person": { "firstname": "John", "lastname": "Doe" } } 135.6. External examples You can find a set of examples using main-yaml in Camel examples that demonstrate how to create the Camel Routes with YAML. You can also refer to Camel Kamelets where each Kamelet is defined using YAML.
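As a concrete illustration of the component-level options and property-placeholder advice above, the sketch below shows how this might look for Camel on Spring Boot. The camel.component.kafka.brokers key follows the starter's camel.component.<name>.<option> naming convention, timer.period is an arbitrary application property resolved through the {{ }} placeholder syntax, and all names and values are illustrative:

# application.properties
camel.component.kafka.brokers=localhost:9092
timer.period=5s

# route defined in YAML, resolving the placeholder at the endpoint level
- from:
    uri: "timer:tick"
    parameters:
      period: "{{timer.period}}"
    steps:
      - to:
          uri: "kafka:ticks"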
|
[
"- from: 1 uri: \"direct:start\" steps: 2 - filter: expression: simple: \"USD{in.header.continue} == true\" steps: - to: uri: \"log:filtered\" - to: uri: \"log:original\"",
"filter: expression: simple: \"USD{in.header.continue} == true\" steps: - to: uri: \"log:filtered\"",
"filter: expression: simple: \"USD{in.header.continue} == true\"",
"filter: simple: \"USD{in.header.continue} == true\"",
"filter: tokenize: token: \"<\" end-token: \">\"",
"marshal: json: library: Gson",
"- from: uri: \"timer:tick?period=1s\" steps: - to: uri: \"telegram:bots?authorizationToken=XXX\"",
"- from: uri: \"timer://tick\" parameters: period: \"1s\" steps: - to: uri: \"telegram:bots\" parameters: authorizationToken: \"XXX\"",
"- beans: - name: beanFromMap 1 type: com.acme.MyBean 2 properties: 3 foo: bar",
"- beans: # map style - name: beanFromMap type: com.acme.MyBean properties: field1: 'f1' field2: 'f2' nested: field1: 'nf1' field2: 'nf2' # properties style - name: beanFromProps type: com.acme.MyBean properties: field1: 'f1_p' field2: 'f2_p' nested.field1: 'nf1_p' nested.field2: 'nf2_p'",
"- from: uri: \"direct:start\" steps: - choice: when: - jsonpath: expression: \"person.middlename\" suppress-exceptions: true steps: - to: \"mock:middle\" - jsonpath: expression: \"person.lastname\" suppress-exceptions: true steps: - to: \"mock:last\" otherwise: steps: - to: \"mock:other\"",
"{ \"person\": { \"firstname\": \"John\", \"lastname\": \"Doe\" } }"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-yaml-dsl-component-starter
|
Chapter 3. Starting JDK Flight Recorder
|
Chapter 3. Starting JDK Flight Recorder 3.1. Starting JDK Flight Recorder when JVM starts You can start the JDK Flight Recorder (JFR) when a Java process starts. You can modify the behavior of the JFR by adding optional parameters. Procedure Run the java command using the -XX:StartFlightRecording option. USD java -XX:StartFlightRecording Demo where Demo is the name of the Java application. The JFR starts with the Java application. Example The following command starts a Java process ( Demo ) and with it initiates an hour-long flight recording which is saved to a file called demorecording.jfr : USD java -XX:StartFlightRecording=duration=1h,filename=demorecording.jfr Demo Additional resources For a detailed list of JFR options, see Java tools reference . 3.2. Starting JDK Flight Recorder on a running JVM You can use the jcmd utility to send diagnostic command requests to a running JVM. jcmd includes commands for interacting with JFR, with the most basic commands being start , dump , and stop . To interact with a JVM, jcmd requires the process id (pid) of the JVM. You can retrieve the pid by using the jcmd -l command, which displays a list of the running JVM process ids, as well as other information such as the main class and command-line arguments that were used to launch the processes. The jcmd utility is located under USDJAVA_HOME/bin . Procedure Start a flight recording using the following command: USD jcmd <pid> JFR.start <options> For example, the following command starts a recording named demorecording , which keeps data from the last four hours, and has a size limit of 400 MB: USD jcmd <pid> JFR.start name=demorecording maxage=4h maxsize=400MB Additional resources For a detailed list of jcmd options, see jcmd Tools Reference . 3.3. Starting the JDK Flight Recorder on JVM by using the JDK Mission Control application The JDK Mission Control (JMC) application has a Flight Recording Wizard that allows for a streamlined experience of starting and configuring flight recordings. Procedure Open the JVM Browser. USD JAVA_HOME/bin/jmc Right-click a JVM in JVM Browser view and select Start Flight Recording . The Flight Recording Wizard opens. Figure 3.1. JMC JFR Wizard The JDK Flight Recording Wizard has three pages: The first page of the wizard contains general settings for the flight recording including: Name of the recording Path and filename to which the recording is saved Whether the recording is a fixed-time or continuous recording, and which event template will be used Description of the recording The second page contains event options for the flight recording. You can configure the level of detail that Garbage Collection, Memory Profiling, Method Sampling, and other events record. The third page contains settings for the event details. You can turn events on or off, enable the recording of stack traces, and alter the time threshold required to record an event. Edit the settings for the recording. Click Finish . The wizard exits and the flight recording starts. 3.4. Defining and using the custom event API The JDK Flight Recorder (JFR) is an event recorder that includes the custom event API. The custom event API, stored in the jdk.jfr module, is the software interface that enables your application to communicate with the JFR. The JFR API includes classes that you can use to manage recordings and create custom events for your Java application, JVM, or operating system. Before you use the custom event API to monitor an event, you must define a name and metadata for your custom event type.
You can define a JFR base event, such as a Duration , Instant , Requestable , or Time event , by extending the Event class. Specifically, you can add fields, such as duration values, to the class that match the data types defined by the application payload attributes. After you define an Event class, you can create event objects. This procedure demonstrates how to use a custom event type with JFR and JDK Mission Control (JMC) to analyze the runtime performance of a simple example program. Procedure In your custom event type, in the Event class, use the @Name annotation to name the custom event. This name displays in the JMC graphical user interface (GUI). Example of defining a custom event type name in the Event class @Name("SampleCustomEvent") public class SampleCustomEvent extends Event {...} Define the metadata for your Event class and its attributes, such as name, category, and labels. Labels display event types for a client, such as JMC. Note Large recording files might cause performance issues, and this might affect how you would like to interact with the files. Make sure you correctly define the number of event recording annotations you need. Defining unnecessary annotations might increase the size of your recording files. Example of defining annotations for a sample Event class @Name("SampleCustomEvent") 1 @Label("Sample Custom Event") @Category("Sample events") @Description("Custom Event to demonstrate the Custom Events API") @StackTrace(false) 2 public class SampleCustomEvent extends Event { @Label("Method") 3 public String method; @Label("Generated Number") public int number; @Label("Size") @DataAmount 4 public int size; } 1 Annotations, such as @Name , that define metadata for how the custom event is displayed in the JMC GUI. 2 Recording stack traces increases the size of a flight recording. The @StackTrace(false) annotation prevents the JFR from including the stack trace of the location where the event was created. 3 The @Label annotations define human-readable labels for the event fields, such as the method that emitted the event. 4 The @DataAmount annotation indicates that the field holds a data amount in bytes. JMC automatically renders the data amount in other units, such as megabytes (MB). Define contextual information in your Event class. This information sets the request handling behavior of your custom event type, so that you configure an event type to collect specific JFR data. Example of defining a simple main class and an event loop In the preceding example, the simple main class registers events, and the event loop populates the event fields and then emits the custom events. Examine an event type in the application of your choice, such as the JMC or the JFR tool. Figure 3.2. Example of examining an event type in JMC A JFR recording can include different event types. You can examine each event type in your application. Additional resources For more information about JMC, see Introduction to JDK Mission Control .
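As a complement to the JFR.start command shown in section 3.2 above, the following sketch shows how a recording that was started with jcmd is typically dumped to disk and then stopped; <pid> is a placeholder for the JVM process id, and demorecording must match the name used when the recording was started:

jcmd <pid> JFR.dump name=demorecording filename=demorecording.jfr
jcmd <pid> JFR.stop name=demorecording

The dump command writes the data collected so far without ending the recording, while stop ends the recording.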
|
[
"@Name(\"SampleCustomEvent\") public class SampleCustomEvent extends Event {...}",
"@Name(\"SampleCustomEvent\") 1 @Label(\"Sample Custom Event\") @Category(\"Sample events\") @Description(\"Custom Event to demonstrate the Custom Events API\") @StackTrace(false) 2 public class SampleCustomEvent extends Event { @Label(\"Method\") 3 public String method; @Label(\"Generated Number\") public int number; @Label(\"Size\") @DataAmount 4 public int size; }",
"public class Main { private static int requestsSent; public static void main(String[] args) { // Register the custom event FlightRecorder.register(SampleCustomEvent.class); // Do some work to generate the events while (requestsSent <= 1000) { try { eventLoopBody(); Thread.sleep(100); } catch (Exception e) { e.printStackTrace(); } } } private static void eventLoopBody() { // Create and begin the event SampleCustomEvent event = new SampleCustomEvent(); event.begin(); // Generate some data for the event Random r = new Random(); int someData = r.nextInt(1000000); // Set the event fields event.method = \"eventLoopBody\"; event.number = someData; event.size = 4; // End the event event.end(); event.commit(); requestsSent++; }"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_jdk_flight_recorder_with_red_hat_build_of_openjdk/starting-jdk-flight-recorder
|
8.100. libvirt-snmp
|
8.100. libvirt-snmp 8.100.1. RHBA-2013:1666 - libvirt-snmp bug fix update Updated libvirt-snmp packages that fix one bug are now available for Red Hat Enterprise Linux 6. The libvirt-snmp packages allow users to control and monitor the libvirt virtualization management tool through Simple Network Management Protocol (SNMP). Bug Fix BZ# 736258 Previously, closing the libvirtMib_subagent using the Ctrl+C key combination led to a memory leak. In some cases, the libvirtd daemon could also be terminated. A patch has been applied to address this issue, and the memory leak no longer occurs in this scenario. Users of libvirt-snmp are advised to upgrade to these updated packages, which fix this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/libvirt-snmp
|
14.9.2. Shutting Down Red Hat Enterprise Linux 6 Guests on a Red Hat Enterprise Linux 7 Host
|
14.9.2. Shutting Down Red Hat Enterprise Linux 6 Guests on a Red Hat Enterprise Linux 7 Host Installing Red Hat Enterprise Linux 6 guest virtual machines with the Minimal installation option does not install the acpid package. Red Hat Enterprise Linux 7 no longer requires this package, as it has been taken over by systemd . However, Red Hat Enterprise Linux 6 guest virtual machines running on a Red Hat Enterprise Linux 7 host still require it. Without the acpid package, the Red Hat Enterprise Linux 6 guest virtual machine does not shut down when the virsh shutdown command is executed. The virsh shutdown command is designed to gracefully shut down guest virtual machines. Using virsh shutdown is easier and safer for system administration. Without a graceful shutdown with the virsh shutdown command, a system administrator must log into a guest virtual machine manually or send the Ctrl - Alt - Del key combination to each guest virtual machine. Note Other virtualized operating systems may be affected by this issue. The virsh shutdown command requires that the guest virtual machine operating system is configured to handle ACPI shut down requests. Many operating systems require additional configuration on the guest virtual machine operating system to accept ACPI shut down requests. Procedure 14.4. Workaround for Red Hat Enterprise Linux 6 guests Install the acpid package The acpid service listens for and processes ACPI requests. Log into the guest virtual machine and install the acpid package on the guest virtual machine: Enable the acpid service Set the acpid service to start during the guest virtual machine boot sequence and start the service: Prepare the guest domain XML Edit the domain XML file to include the following element. Replace the virtio serial port with org.qemu.guest_agent.0 and use your guest's name instead of USDguestname <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/{USDguestname}.agent'/> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel> Figure 14.2. Guest XML replacement Install the QEMU guest agent Install the QEMU guest agent (QEMU-GA) and start the service as directed in Chapter 10, QEMU-img and QEMU Guest Agent . If you are running a Windows guest, there are instructions in that chapter for that as well. Shut down the guest Run the following commands: Shut down the guest virtual machine. Wait a few seconds for the guest virtual machine to shut down. Start the domain named rhel6 , with the XML file you edited. Shut down the rhel6 guest virtual machine using ACPI mode. List all the domains again, rhel6 should still be on the list, and it should indicate it is shut off. Start the domain named rhel6 , with the XML file you edited. Shut down the rhel6 guest virtual machine using the guest agent. List the domains. rhel6 should still be on the list, and it should indicate it is shut off. The guest virtual machine will shut down using the virsh shutdown command for consecutive shutdowns, without using the workaround described above. In addition to the method described above, a guest can be automatically shut down by stopping the libvirt-guests service. Refer to Section 14.9.3, "Manipulating the libvirt-guests Configuration Settings" for more information on this method.
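As an optional check before relying on the agent-based shutdown in this procedure, you can confirm that the QEMU guest agent channel is reachable from the host. The following sketch assumes the guest is named rhel6 and that the qemu-guest-agent service is running inside it:

virsh qemu-agent-command rhel6 '{"execute":"guest-ping"}'

If the agent is reachable, the command is expected to return an empty result such as {"return":{}}; an error would indicate that the channel or the agent is not set up correctly.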
|
[
"yum install acpid",
"chkconfig acpid on service acpid start",
"<channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/{USDguestname}.agent'/> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>",
"virsh list --all - this command lists all of the known domains Id Name State ---------------------------------- rhel6 running",
"virsh shutdown rhel6 Domain rhel6 is being shutdown",
"virsh list --all Id Name State ---------------------------------- . rhel6 shut off",
"virsh start rhel6",
"virsh shutdown --mode acpi rhel6",
"virsh list --all Id Name State ---------------------------------- rhel6 shut off",
"virsh start rhel6",
"virsh shutdown --mode agent rhel6",
"virsh list --all Id Name State ---------------------------------- rhel6 shut off"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-shutting_down_rednbsphat_enterprisenbsplinuxnbsp6_guests_on_a_rednbsphat_enterprisenbsplinuxnbsp7_host
|
Chapter 4. Installing a cluster
|
Chapter 4. Installing a cluster 4.1. Cleaning up installations In case of an earlier failed deployment, remove the artifacts from the failed attempt before trying to deploy OpenShift Container Platform again. Procedure Power off all bare-metal nodes before installing the OpenShift Container Platform cluster by using the following command: USD ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off Remove all old bootstrap resources, if any remain from an earlier deployment attempt, by using the following script: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done Delete the artifacts that the earlier installation generated by using the following command: USD cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json \ .openshift_install.log .openshift_install_state.json Re-create the OpenShift Container Platform manifests by using the following command: USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests 4.2. Deploying the cluster via the OpenShift Container Platform installer Run the OpenShift Container Platform installer: USD ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster 4.3. Following the progress of the installation During the deployment process, you can check the installation's overall status by running the tail command on the .openshift_install.log log file in the installation directory: USD tail -f /path/to/install-dir/.openshift_install.log 4.4. Verifying static IP address configuration If the DHCP reservation for a cluster node specifies an infinite lease, after the installer successfully provisions the node, the dispatcher script checks the node's network configuration. If the script determines that the network configuration contains an infinite DHCP lease, it creates a new connection using the IP address of the DHCP lease as a static IP address. Note The dispatcher script might run on successfully provisioned nodes while the provisioning of other nodes in the cluster is ongoing. Verify that the network configuration is working properly. Procedure Check the network interface configuration on the node. Turn off the DHCP server, reboot the OpenShift Container Platform node, and ensure that the network configuration works properly. 4.5. Additional resources Understanding update channels and releases
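As a sketch of the node network check described in section 4.4 above, you can inspect the active connections over SSH. The commands below assume SSH access as the core user and a reachable node address; the node address is a placeholder:

ssh core@<node_address> sudo nmcli connection show
ssh core@<node_address> ip addr show

A connection whose IPv4 method is manual rather than auto would indicate that the dispatcher script converted the infinite DHCP lease into a static IP address as described above.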
|
[
"ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json .openshift_install.log .openshift_install_state.json",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster",
"tail -f /path/to/install-dir/.openshift_install.log"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installing-a-cluster
|
Chapter 23. dns
|
Chapter 23. dns This chapter describes the commands under the dns command. 23.1. dns quota list List quotas Usage: Table 23.1. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --project-id PROJECT_ID Project id default: current project Table 23.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 23.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 23.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 23.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 23.2. dns quota reset Reset quotas Usage: Table 23.6. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --project-id PROJECT_ID Project id 23.3. dns quota set Set quotas Usage: Table 23.7. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --project-id PROJECT_ID Project id --api-export-size <api-export-size> New value for the api-export-size quota --recordset-records <recordset-records> New value for the recordset-records quota --zone-records <zone-records> New value for the zone-records quota --zone-recordsets <zone-recordsets> New value for the zone-recordsets quota --zones <zones> New value for the zones quota Table 23.8. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 23.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 23.10. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 23.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 23.4. dns service list List service statuses Usage: Table 23.12. Command arguments Value Summary -h, --help Show this help message and exit --hostname HOSTNAME Hostname --service_name SERVICE_NAME Service name --status STATUS Status --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. 
default: None Table 23.13. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 23.14. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 23.15. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 23.16. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 23.5. dns service show Show service status details Usage: Table 23.17. Positional arguments Value Summary id Service status id Table 23.18. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 23.19. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 23.20. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 23.21. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 23.22. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
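As a brief illustration of the quota commands documented in this chapter, the following sketch lists and then raises quotas for a project; the project ID is a placeholder and the chosen limits are arbitrary:

openstack dns quota list --project-id <project_id>
openstack dns quota set --project-id <project_id> --zones 20 --zone-recordsets 500

Both commands accept the output formatter options (-f, -c, and so on) described in the tables above.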
|
[
"openstack dns quota list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--project-id PROJECT_ID]",
"openstack dns quota reset [-h] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--project-id PROJECT_ID]",
"openstack dns quota set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--project-id PROJECT_ID] [--api-export-size <api-export-size>] [--recordset-records <recordset-records>] [--zone-records <zone-records>] [--zone-recordsets <zone-recordsets>] [--zones <zones>]",
"openstack dns service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--hostname HOSTNAME] [--service_name SERVICE_NAME] [--status STATUS] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack dns service show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/dns
|
Chapter 31. Delegating permissions to user groups to manage users using Ansible playbooks
|
Chapter 31. Delegating permissions to user groups to manage users using Ansible playbooks Delegation is one of the access control methods in IdM, along with self-service rules and role-based access control (RBAC). You can use delegation to assign permissions to one group of users to manage entries for another group of users. This section covers the following topics: Delegation rules Creating the Ansible inventory file for IdM Using Ansible to ensure that a delegation rule is present Using Ansible to ensure that a delegation rule is absent Using Ansible to ensure that a delegation rule has specific attributes Using Ansible to ensure that a delegation rule does not have specific attributes 31.1. Delegation rules You can delegate permissions to user groups to manage users by creating delegation rules . Delegation rules allow a specific user group to perform write (edit) operations on specific attributes for users in another user group. This form of access control rule is limited to editing the values of a subset of attributes you specify in a delegation rule; it does not grant the ability to add or remove whole entries or control over unspecified attributes. Delegation rules grant permissions to existing user groups in IdM. You can use delegation to, for example, allow the managers user group to manage selected attributes of users in the employees user group. 31.2. Creating an Ansible inventory file for IdM When working with Ansible, it is good practice to create, in your home directory, a subdirectory dedicated to Ansible playbooks that you copy and adapt from the /usr/share/doc/ansible-freeipa/* and /usr/share/doc/rhel-system-roles/* subdirectories. This practice has the following advantages: You can find all your playbooks in one place. You can run your playbooks without invoking root privileges. Procedure Create a directory for your Ansible configuration and playbooks in your home directory: Change into the ~/MyPlaybooks/ directory: Create the ~/MyPlaybooks/ansible.cfg file with the following content: Create the ~/MyPlaybooks/inventory file with the following content: This configuration defines two host groups, eu and us , for hosts in these locations. Additionally, this configuration defines the ipaserver host group, which contains all hosts from the eu and us groups. 31.3. Using Ansible to ensure that a delegation rule is present The following procedure describes how to use an Ansible playbook to define privileges for a new IdM delegation rule and ensure its presence. In the example, the new basic manager attributes delegation rule grants the managers group the ability to read and write the following attributes for members of the employees group: businesscategory departmentnumber employeenumber employeetype Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the delegation-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/delegation/ directory: Open the delegation-present-copy.yml Ansible playbook file for editing. 
Adapt the file by setting the following variables in the ipadelegation task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the new delegation rule. Set the permission variable to a comma-separated list of permissions to grant: read and write . Set the attribute variable to a list of attributes the delegated user group can manage: businesscategory , departmentnumber , employeenumber , and employeetype . Set the group variable to the name of the group that is being given access to view or modify attributes. Set the membergroup variable to the name of the group whose attributes can be viewed or modified. This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Delegation rules The README-delegation.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/ipadelegation directory 31.4. Using Ansible to ensure that a delegation rule is absent The following procedure describes how to use an Ansible playbook to ensure a specified delegation rule is absent from your IdM configuration. The example below describes how to make sure the custom basic manager attributes delegation rule does not exist in IdM. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the delegation-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/delegation/ directory: Open the delegation-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipadelegation task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the delegation rule. Set the state variable to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Delegation rules The README-delegation.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/ipadelegation directory 31.5. Using Ansible to ensure that a delegation rule has specific attributes The following procedure describes how to use an Ansible playbook to ensure that a delegation rule has specific settings. You can use this playbook to modify a delegation role you have previously created. In the example, you ensure the basic manager attributes delegation rule only has the departmentnumber member attribute. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The basic manager attributes delegation rule exists in IdM. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the delegation-member-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/delegation/ directory: Open the delegation-member-present-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipadelegation task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the delegation rule to modify. Set the attribute variable to departmentnumber . Set the action variable to member . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Delegation rules The README-delegation.md file in the /usr/share/doc/ansible-freeipa/ directory The sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/ipadelegation directory 31.6. Using Ansible to ensure that a delegation rule does not have specific attributes The following procedure describes how to use an Ansible playbook to ensure that a delegation rule does not have specific settings. You can use this playbook to make sure a delegation role does not grant undesired access. In the example, you ensure the basic manager attributes delegation rule does not have the employeenumber and employeetype member attributes. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The basic manager attributes delegation rule exists in IdM. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the delegation-member-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/delegation/ directory: Open the delegation-member-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipadelegation task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the delegation rule to modify. Set the attribute variable to employeenumber and employeetype . Set the action variable to member . Set the state variable to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Delegation rules The README-delegation.md file in the /usr/share/doc/ansible-freeipa/ directory. 
Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/ipadelegation directory
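After running any of the playbooks in this chapter, you can optionally verify the resulting state directly on an IdM server. The following sketch assumes you have obtained a Kerberos ticket as an administrative user; the rule name matches the examples in this chapter:

kinit admin
ipa delegation-show "basic manager attributes"

The output lists the permissions, attributes, user group, and member user group currently associated with the delegation rule, or reports that the rule was not found if it is absent.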
|
[
"mkdir ~/MyPlaybooks/",
"cd ~/MyPlaybooks",
"[defaults] inventory = /home/ <username> /MyPlaybooks/inventory [privilege_escalation] become=True",
"[eu] server.idm.example.com [us] replica.idm.example.com [ipaserver:children] eu us",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/delegation/delegation-present.yml delegation-present-copy.yml",
"--- - name: Playbook to manage a delegation rule hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure delegation \"basic manager attributes\" is present ipadelegation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"basic manager attributes\" permission: read, write attribute: - businesscategory - departmentnumber - employeenumber - employeetype group: managers membergroup: employees",
"ansible-playbook --vault-password-file=password_file -v -i ~/ MyPlaybooks /inventory delegation-present-copy.yml",
"cd ~/ MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/delegation/delegation-present.yml delegation-absent-copy.yml",
"--- - name: Delegation absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure delegation \"basic manager attributes\" is absent ipadelegation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"basic manager attributes\" state: absent",
"ansible-playbook --vault-password-file=password_file -v -i ~/ MyPlaybooks /inventory delegation-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/delegation/delegation-member-present.yml delegation-member-present-copy.yml",
"--- - name: Delegation member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure delegation \"basic manager attributes\" member attribute departmentnumber is present ipadelegation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"basic manager attributes\" attribute: - departmentnumber action: member",
"ansible-playbook --vault-password-file=password_file -v -i ~/ MyPlaybooks /inventory delegation-member-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/delegation/delegation-member-absent.yml delegation-member-absent-copy.yml",
"--- - name: Delegation member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure delegation \"basic manager attributes\" member attributes employeenumber and employeetype are absent ipadelegation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"basic manager attributes\" attribute: - employeenumber - employeetype action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i ~/ MyPlaybooks /inventory delegation-member-absent-copy.yml"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/assembly_delegating-permissions-to-user-groups-to-manage-users-using-ansible-playbooks_managing-users-groups-hosts
|
Chapter 24. Clustering
|
Chapter 24. Clustering Pacemaker Remote may shut down, even if its connection to the cluster is unmanaged Previously, if a Pacemaker Remote connection was unmanaged, the Pacemaker Remote daemon would never receive a shutdown acknowledgment from the cluster. As a result, Pacemaker Remote would be unable to shut down. With this fix, if a Pacemaker Remote connection is unmanaged, the cluster now immediately sends a shutdown acknowledgement to Pacemaker Remote nodes that request shutdown, rather than waiting for resources to stop. As a result, Pacemaker Remote may shut down, even if its connection to the cluster is unmanaged. (BZ#1388489) pcs now validates the name and the host of a remote and guest node Previously, the pcs command did not validate whether the name or the host of a remote or guest node conflicted with a resource ID or with a cluster node, a situation that would cause the cluster not to work correctly. With this fix, validation has been added to the relevant commands and pcs does not allow a user to configure a cluster with a conflicting name or conflicting host of a remote or guest node. (BZ# 1386114 ) New syntax of master option in pcs resource create command allows correct creation of meta attributes Previously, when a pcs resource creation command included the --master flag, all options after the keyword meta were interpreted as master meta attributes. This made it impossible to create meta attributes for the primitive when the --master flag was specified. This fix provides a new syntax for specifying a resource as a master/slave clone by using the following format for the command: This allows you to specify meta options as follows: Additionally, with this fix, you specify a clone resource with the clone option rather than the --clone flag, as in previous releases. The new format for specifying a clone resource is as follows: (BZ# 1378107 )
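For illustration, the new syntax described above could be used as in the following sketch; the resource ID my-stateful is arbitrary and the ocf:pacemaker:Stateful agent is used here only as a convenient demonstration agent:

pcs resource create my-stateful ocf:pacemaker:Stateful meta resource-stickiness=100 master master-max=1

Here resource-stickiness=100 is applied to the primitive resource, while master-max=1 is interpreted as a master meta attribute, which was not possible with the previous --master flag.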
|
[
"pcs resource create resource_id standard:provider:type|type [resource options] master [master_options...]",
"pcs resource create resource_id standard:provider:type|type [resource_options] meta meta_options... master [master_options...]",
"pcs resource create resource_id standard:provider:type|type [resource_options] clone"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/bug_fixes_clustering
|
Chapter 345. Test Component
|
Chapter 345. Test Component Available as of Camel version 1.3 Testing of distributed and asynchronous processing is notoriously difficult. The Mock , Test and DataSet endpoints work great with the Camel Testing Framework to simplify your unit and integration testing using Enterprise Integration Patterns and Camel's large range of Components together with the powerful Bean Integration. The test component extends the Mock component to support pulling messages from another endpoint on startup to set the expected message bodies on the underlying Mock endpoint. That is, you use the test endpoint in a route and messages arriving on it will be implicitly compared to some expected messages extracted from some other location. So you can use, for example, an expected set of message bodies as files. This will then set up a properly configured Mock endpoint, which is only valid if the received messages match the number of expected messages and their message payloads are equal. Maven users will need to add the following dependency to their pom.xml for this component when using Camel 2.8 or older: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> From Camel 2.9 onwards the Test component is provided directly in the camel-core. 345.1. URI format Where expectedMessagesEndpointUri refers to some other Component URI that the expected message bodies are pulled from before starting the test. 345.2. URI Options The Test component has no options. The Test endpoint is configured using URI syntax: with the following path and query parameters: 345.2.1. Path Parameters (1 parameters): Name Description Default Type name Required Name of endpoint to lookup in the registry to use for polling messages used for testing String 345.2.2. Query Parameters (14 parameters): Name Description Default Type anyOrder (producer) Whether the expected messages should arrive in the same order or can be in any order. false boolean assertPeriod (producer) Sets a grace period after which the mock endpoint will re-assert to ensure the preliminary assertion is still valid. This is used for example to assert that exactly a number of messages arrives. For example if expectedMessageCount(int) was set to 5, then the assertion is satisfied when 5 or more message arrives. To ensure that exactly 5 messages arrives, then you would need to wait a little period to ensure no further message arrives. This is what you can use this setAssertPeriod(long) method for. By default this period is disabled. 0 long delimiter (producer) The split delimiter to use when split is enabled. By default the delimiter is new line based. The delimiter can be a regular expression. String expectedCount (producer) Specifies the expected number of message exchanges that should be received by this endpoint. Beware: If you want to expect that 0 messages, then take extra care, as 0 matches when the tests starts, so you need to set a assert period time to let the test run for a while to make sure there are still no messages arrived; for that use setAssertPeriod(long). An alternative is to use NotifyBuilder, and use the notifier to know when Camel is done routing some messages, before you call the assertIsSatisfied() method on the mocks. This allows you to not use a fixed assert period, to speedup testing times. 
If you want to assert that exactly n'th message arrives to this mock endpoint, then see also the setAssertPeriod(long) method for further details. -1 int reportGroup (producer) A number that is used to turn on throughput logging based on groups of the size. int resultMinimumWaitTime (producer) Sets the minimum expected amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied 0 long resultWaitTime (producer) Sets the maximum amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied 0 long retainFirst (producer) Specifies to only retain the first n'th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the first 10 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the first 10 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object... ) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. -1 int retainLast (producer) Specifies to only retain the last n'th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the last 20 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the last 20 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object... ) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. -1 int sleepForEmptyTest (producer) Allows a sleep to be specified to wait to check that this endpoint really is empty when expectedMessageCount(int) is called with zero 0 long split (producer) If enabled the messages loaded from the test endpoint will be split using new line delimiters so each line is an expected message. For example to use a file endpoint to load a file where each line is an expected message. false boolean timeout (producer) The timeout to use when polling for message bodies from the URI 2000 long copyOnExchange (producer) Sets whether to make a deep copy of the incoming Exchange when received at this mock endpoint. Is by default true. true boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 345.3. Example For example, you could write a test case as follows: from("seda:someEndpoint"). 
to("test:file://data/expectedOutput?noop=true"); If your test then invokes the MockEndpoint.assertIsSatisfied(camelContext) method , your test case will perform the necessary assertions. To see how you can set other expectations on the test endpoint, see the Mock component. 345.4. See Also Spring Testing
|
[
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"test:expectedMessagesEndpointUri",
"test:name",
"from(\"seda:someEndpoint\"). to(\"test:file://data/expectedOutput?noop=true\");"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/test-component
|
Chapter 9. Ingress Operator in OpenShift Container Platform
|
Chapter 9. Ingress Operator in OpenShift Container Platform 9.1. OpenShift Container Platform Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services. The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints. 9.2. The Ingress configuration asset The installation program generates an asset with an Ingress resource in the config.openshift.io API group, cluster-ingress-02-config.yml . YAML Definition of the Ingress resource apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com The installation program stores this asset in the cluster-ingress-02-config.yml file in the manifests/ directory. This Ingress resource defines the cluster-wide configuration for Ingress. This Ingress configuration is used as follows: The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller. The OpenShift API Server Operator uses the domain from the cluster Ingress configuration. This domain is also used when generating a default host for a Route resource that does not specify an explicit host. 9.3. Ingress Controller configuration parameters The IngressController custom resource (CR) includes optional configuration parameters that you can configure to meet specific needs for your organization. Parameter Description domain domain is a DNS name serviced by the Ingress Controller and is used to configure multiple features: For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy . When using a generated default certificate, the certificate is valid for domain and its subdomains . See defaultCertificate . The value is published to individual Route statuses so that users know where to target external DNS records. The domain value must be unique among all Ingress Controllers and cannot be updated. If empty, the default value is ingress.config.openshift.io/cluster .spec.domain . replicas replicas is the number of Ingress Controller replicas. If not set, the default value is 2 . endpointPublishingStrategy endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems. For cloud environments, use the loadBalancer field to configure the endpoint publishing strategy for your Ingress Controller. 
On GCP, AWS, and Azure you can configure the following endpointPublishingStrategy fields: loadBalancer.scope loadBalancer.allowedSourceRanges If not set, the default value is based on infrastructure.config.openshift.io/cluster .status.platform : Azure: LoadBalancerService (with External scope) Google Cloud Platform (GCP): LoadBalancerService (with External scope) For most platforms, the endpointPublishingStrategy value can be updated. On GCP, you can configure the following endpointPublishingStrategy fields: loadBalancer.scope loadbalancer.providerParameters.gcp.clientAccess For non-cloud environments, such as a bare-metal platform, use the NodePortService , HostNetwork , or Private fields to configure the endpoint publishing strategy for your Ingress Controller. If you do not set a value in one of these fields, the default value is based on binding ports specified in the .status.platform value in the IngressController CR. If you need to update the endpointPublishingStrategy value after your cluster is deployed, you can configure the following endpointPublishingStrategy fields: hostNetwork.protocol nodePort.protocol private.protocol defaultCertificate The defaultCertificate value is a reference to a secret that contains the default certificate that is served by the Ingress Controller. When Routes do not specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: * tls.crt : certificate file contents * tls.key : key file contents If not set, a wildcard certificate is automatically generated and used. The certificate is valid for the Ingress Controller domain and subdomains , and the generated certificate's CA is automatically integrated with the cluster's trust store. The in-use certificate, whether generated or user-specified, is automatically integrated with OpenShift Container Platform built-in OAuth server. namespaceSelector namespaceSelector is used to filter the set of namespaces serviced by the Ingress Controller. This is useful for implementing shards. routeSelector routeSelector is used to filter the set of Routes serviced by the Ingress Controller. This is useful for implementing shards. nodePlacement nodePlacement enables explicit control over the scheduling of the Ingress Controller. If not set, the defaults values are used. Note The nodePlacement parameter includes two parts, nodeSelector and tolerations . For example: nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists tlsSecurityProfile tlsSecurityProfile specifies settings for TLS connections for Ingress Controllers. If not set, the default value is based on the apiservers.config.openshift.io/cluster resource. When using the Old , Intermediate , and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z , an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the Ingress Controller, resulting in a rollout. The minimum TLS version for Ingress Controllers is 1.1 , and the maximum TLS version is 1.3 . Note Ciphers and the minimum TLS version of the configured security profile are reflected in the TLSProfile status. Important The Ingress Operator converts the TLS 1.0 of an Old or Custom profile to 1.1 . clientTLS clientTLS authenticates client access to the cluster and services; as a result, mutual TLS authentication is enabled. 
If not set, then client TLS is not enabled. clientTLS has the required subfields, spec.clientTLS.clientCertificatePolicy and spec.clientTLS.ClientCA . The ClientCertificatePolicy subfield accepts one of the two values: Required or Optional . The ClientCA subfield specifies a config map that is in the openshift-config namespace. The config map should contain a CA certificate bundle. The AllowedSubjectPatterns is an optional value that specifies a list of regular expressions, which are matched against the distinguished name on a valid client certificate to filter requests. The regular expressions must use PCRE syntax. At least one pattern must match a client certificate's distinguished name; otherwise, the Ingress Controller rejects the certificate and denies the connection. If not specified, the Ingress Controller does not reject certificates based on the distinguished name. routeAdmission routeAdmission defines a policy for handling new route claims, such as allowing or denying claims across namespaces. namespaceOwnership describes how hostname claims across namespaces should be handled. The default is Strict . Strict : does not allow routes to claim the same hostname across namespaces. InterNamespaceAllowed : allows routes to claim different paths of the same hostname across namespaces. wildcardPolicy describes how routes with wildcard policies are handled by the Ingress Controller. WildcardsAllowed : Indicates routes with any wildcard policy are admitted by the Ingress Controller. WildcardsDisallowed : Indicates only routes with a wildcard policy of None are admitted by the Ingress Controller. Updating wildcardPolicy from WildcardsAllowed to WildcardsDisallowed causes admitted routes with a wildcard policy of Subdomain to stop working. These routes must be recreated to a wildcard policy of None to be readmitted by the Ingress Controller. WildcardsDisallowed is the default setting. IngressControllerLogging logging defines parameters for what is logged where. If this field is empty, operational logs are enabled but access logs are disabled. access describes how client requests are logged. If this field is empty, access logging is disabled. destination describes a destination for log messages. type is the type of destination for logs: Container specifies that logs should go to a sidecar container. The Ingress Operator configures the container, named logs , on the Ingress Controller pod and configures the Ingress Controller to write logs to the container. The expectation is that the administrator configures a custom logging solution that reads logs from this container. Using container logs means that logs may be dropped if the rate of logs exceeds the container runtime capacity or the custom logging solution capacity. Syslog specifies that logs are sent to a Syslog endpoint. The administrator must specify an endpoint that can receive Syslog messages. The expectation is that the administrator has configured a custom Syslog instance. container describes parameters for the Container logging destination type. Currently there are no parameters for container logging, so this field must be empty. syslog describes parameters for the Syslog logging destination type: address is the IP address of the syslog endpoint that receives log messages. port is the UDP port number of the syslog endpoint that receives log messages. maxLength is the maximum length of the syslog message. It must be between 480 and 4096 bytes. If this field is empty, the maximum length is set to the default value of 1024 bytes. 
facility specifies the syslog facility of log messages. If this field is empty, the facility is local1 . Otherwise, it must specify a valid syslog facility: kern , user , mail , daemon , auth , syslog , lpr , news , uucp , cron , auth2 , ftp , ntp , audit , alert , cron2 , local0 , local1 , local2 , local3 , local4 , local5 , local6 , or local7 . httpLogFormat specifies the format of the log message for an HTTP request. If this field is empty, log messages use the implementation's default HTTP log format. For HAProxy's default HTTP log format, see the HAProxy documentation . httpHeaders httpHeaders defines the policy for HTTP headers. By setting the forwardedHeaderPolicy for the IngressControllerHTTPHeaders , you specify when and how the Ingress Controller sets the Forwarded , X-Forwarded-For , X-Forwarded-Host , X-Forwarded-Port , X-Forwarded-Proto , and X-Forwarded-Proto-Version HTTP headers. By default, the policy is set to Append . Append specifies that the Ingress Controller appends the headers, preserving any existing headers. Replace specifies that the Ingress Controller sets the headers, removing any existing headers. IfNone specifies that the Ingress Controller sets the headers if they are not already set. Never specifies that the Ingress Controller never sets the headers, preserving any existing headers. By setting headerNameCaseAdjustments , you can specify case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying X-Forwarded-For indicates that the x-forwarded-for HTTP header should be adjusted to have the specified capitalization. These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1. For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. If this field is empty, no request headers are adjusted. actions specifies options for performing certain actions on headers. Headers cannot be set or deleted for TLS passthrough connections. The actions field has additional subfields spec.httpHeader.actions.response and spec.httpHeader.actions.request : The response subfield specifies a list of HTTP response headers to set or delete. The request subfield specifies a list of HTTP request headers to set or delete. httpCompression httpCompression defines the policy for HTTP traffic compression. mimeTypes defines a list of MIME types to which compression should be applied. For example, text/css; charset=utf-8 , text/html , text/* , image/svg+xml , application/octet-stream , X-custom/customsub , using the format pattern type/subtype[;attribute=value] . The types are: application, image, message, multipart, text, video, or a custom type prefaced by X- . To see the full notation for MIME types and subtypes, see RFC1341 . httpErrorCodePages httpErrorCodePages specifies custom HTTP error code response pages. By default, an IngressController uses error pages built into the IngressController image. httpCaptureCookies httpCaptureCookies specifies HTTP cookies that you want to capture in access logs. If the httpCaptureCookies field is empty, the access logs do not capture the cookies. For any cookie that you want to capture, the following parameters must be in your IngressController configuration: name specifies the name of the cookie.
maxLength specifies the maximum length of the cookie. matchType specifies if the field name of the cookie exactly matches the capture cookie setting or is a prefix of the capture cookie setting. The matchType field uses the Exact and Prefix parameters. For example: httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE httpCaptureHeaders httpCaptureHeaders specifies the HTTP headers that you want to capture in the access logs. If the httpCaptureHeaders field is empty, the access logs do not capture the headers. httpCaptureHeaders contains two lists of headers to capture in the access logs. The two lists of header fields are request and response . In both lists, the name field must specify the header name and the maxLength field must specify the maximum length of the header. For example: httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length tuningOptions tuningOptions specifies options for tuning the performance of Ingress Controller pods. clientFinTimeout specifies how long a connection is held open while waiting for the client response to the server closing the connection. The default timeout is 1s . clientTimeout specifies how long a connection is held open while waiting for a client response. The default timeout is 30s . headerBufferBytes specifies how much memory is reserved, in bytes, for Ingress Controller connection sessions. This value must be at least 16384 if HTTP/2 is enabled for the Ingress Controller. If not set, the default value is 32768 bytes. Setting this field is not recommended because headerBufferBytes values that are too small can break the Ingress Controller, and headerBufferBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary. headerBufferMaxRewriteBytes specifies how much memory should be reserved, in bytes, from headerBufferBytes for HTTP header rewriting and appending for Ingress Controller connection sessions. The minimum value for headerBufferMaxRewriteBytes is 4096 . headerBufferBytes must be greater than headerBufferMaxRewriteBytes for incoming HTTP requests. If not set, the default value is 8192 bytes. Setting this field is not recommended because headerBufferMaxRewriteBytes values that are too small can break the Ingress Controller and headerBufferMaxRewriteBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary. healthCheckInterval specifies how long the router waits between health checks. The default is 5s . serverFinTimeout specifies how long a connection is held open while waiting for the server response to the client that is closing the connection. The default timeout is 1s . serverTimeout specifies how long a connection is held open while waiting for a server response. The default timeout is 30s . threadCount specifies the number of threads to create per HAProxy process. Creating more threads allows each Ingress Controller pod to handle more connections, at the cost of more system resources being used. HAProxy supports up to 64 threads. If this field is empty, the Ingress Controller uses the default value of 4 threads. The default value can change in future releases. Setting this field is not recommended because increasing the number of HAProxy threads allows Ingress Controller pods to use more CPU time under load, and prevents other pods from receiving the CPU resources they need to perform.
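As an illustration of how the tuning fields described above fit together, an IngressController spec might include a stanza like the following sketch; the values shown simply restate the documented defaults and are not tuning recommendations:
spec:
  tuningOptions:
    clientTimeout: 30s        # close a connection after waiting 30s for a client response
    serverTimeout: 30s        # close a connection after waiting 30s for a server response
    healthCheckInterval: 5s   # wait 5s between back-end health checks
    threadCount: 4            # number of HAProxy threads per process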
Reducing the number of threads can cause the Ingress Controller to perform poorly. tlsInspectDelay specifies how long the router can hold data to find a matching route. Setting this value too short can cause the router to fall back to the default certificate for edge-terminated, re-encrypted, or passthrough routes, even when using a better matched certificate. The default inspect delay is 5s . tunnelTimeout specifies how long a tunnel connection, including websockets, remains open while the tunnel is idle. The default timeout is 1h . maxConnections specifies the maximum number of simultaneous connections that can be established per HAProxy process. Increasing this value allows each Ingress Controller pod to handle more connections at the cost of additional system resources. Permitted values are 0 , -1 , any value in the range of 2000 to 2000000 , or the field can be left empty. If this field is left empty or has the value 0 , the Ingress Controller will use the default value of 50000 . This value is subject to change in future releases. If the field has the value of -1 , then HAProxy will dynamically compute a maximum value based on the available ulimits in the running container. This process results in a large computed value that will incur significant memory usage compared to the current default value of 50000 . If the field has a value that is greater than the current operating system limit, the HAProxy process will not start. If you choose a discrete value and the router pod is migrated to a new node, it is possible the new node does not have an identical ulimit configured. In such cases, the pod fails to start. If you have nodes with different ulimits configured, and you choose a discrete value, it is recommended to use the value of -1 for this field so that the maximum number of connections is calculated at runtime. logEmptyRequests logEmptyRequests specifies whether connections for which no request is received are logged. These empty requests come from load balancer health probes or web browser speculative connections (preconnect) and logging these requests can be undesirable. However, these requests can be caused by network errors, in which case logging empty requests can be useful for diagnosing the errors. These requests can be caused by port scans, and logging empty requests can aid in detecting intrusion attempts. Allowed values for this field are Log and Ignore . The default value is Log . The LoggingPolicy type accepts one of two values: Log : Setting this value to Log indicates that an event should be logged. Ignore : Setting this value to Ignore sets the dontlognull option in the HAProxy configuration. HTTPEmptyRequestsPolicy HTTPEmptyRequestsPolicy describes how HTTP connections are handled if the connection times out before a request is received. Allowed values for this field are Respond and Ignore . The default value is Respond . The HTTPEmptyRequestsPolicy type accepts one of two values: Respond : If the field is set to Respond , the Ingress Controller sends an HTTP 400 or 408 response, logs the connection if access logging is enabled, and counts the connection in the appropriate metrics. Ignore : Setting this option to Ignore adds the http-ignore-probes parameter in the HAProxy configuration. If the field is set to Ignore , the Ingress Controller closes the connection without sending a response, logging the connection, or incrementing metrics.
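For illustration, an administrator who wants to respond to empty requests but not log them might combine these fields as in the following sketch; the spec field corresponding to HTTPEmptyRequestsPolicy is assumed here to be httpEmptyRequestsPolicy:
spec:
  logEmptyRequests: Ignore           # sets dontlognull; connections that carry no request are not logged
  httpEmptyRequestsPolicy: Respond   # still return an HTTP 400 or 408 response when the connection times out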
These connections come from load balancer health probes or web browser speculative connections (preconnect) and can be safely ignored. However, these requests can be caused by network errors, so setting this field to Ignore can impede detection and diagnosis of problems. These requests can be caused by port scans, in which case logging empty requests can aid in detecting intrusion attempts. 9.3.1. Ingress Controller TLS security profiles TLS security profiles provide a way for servers to regulate which ciphers a connecting client can use when connecting to the server. 9.3.1.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 9.1. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 9.3.1.2. Configuring the TLS security profile for the Ingress Controller To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server. Sample IngressController CR that configures the Old TLS security profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers. You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters. Note The HAProxy Ingress Controller image supports TLS 1.3 and the Modern profile. 
The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.tlsSecurityProfile field: Sample IngressController CR for a Custom profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 ... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the profile is set in the IngressController CR: USD oc describe IngressController default -n openshift-ingress-operator Example output Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController ... Spec: ... Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... 9.3.1.3. Configuring mutual TLS authentication You can configure the Ingress Controller to enable mutual TLS (mTLS) authentication by setting a spec.clientTLS value. The clientTLS value configures the Ingress Controller to verify client certificates. This configuration includes setting a clientCA value, which is a reference to a config map. The config map contains the PEM-encoded CA certificate bundle that is used to verify a client's certificate. Optionally, you can also configure a list of certificate subject filters. If the clientCA value specifies an X509v3 certificate revocation list (CRL) distribution point, the Ingress Operator downloads and manages a CRL config map based on the HTTP URI X509v3 CRL Distribution Point specified in each provided certificate. The Ingress Controller uses this config map during mTLS/TLS negotiation. Requests that do not provide valid certificates are rejected. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a PEM-encoded CA certificate bundle. If your CA bundle references a CRL distribution point, you must have also included the end-entity or leaf certificate to the client CA bundle. This certificate must have included an HTTP URI under CRL Distribution Points , as described in RFC 5280. For example: Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1 Subject: SOME SIGNED CERT X509v3 CRL Distribution Points: Full Name: URI:http://crl.example.com/example.crl Procedure In the openshift-config namespace, create a config map from your CA bundle: USD oc create configmap \ router-ca-certs-default \ --from-file=ca-bundle.pem=client-ca.crt \ 1 -n openshift-config 1 The config map data key must be ca-bundle.pem , and the data value must be a CA certificate in PEM format. 
Edit the IngressController resource in the openshift-ingress-operator project: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.clientTLS field and subfields to configure mutual TLS: Sample IngressController CR for a clientTLS profile that specifies filtering patterns apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: clientTLS: clientCertificatePolicy: Required clientCA: name: router-ca-certs-default allowedSubjectPatterns: - "^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShiftUSD" Optional: You can get the Distinguished Name (DN) to use in allowedSubjectPatterns from the subject of a valid client certificate. 9.4. View the default Ingress Controller The Ingress Operator is a core feature of OpenShift Container Platform and is enabled out of the box. Every new OpenShift Container Platform installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute. Procedure View the default Ingress Controller: USD oc describe --namespace=openshift-ingress-operator ingresscontroller/default 9.5. View Ingress Operator status You can view and inspect the status of your Ingress Operator. Procedure View your Ingress Operator status: USD oc describe clusteroperators/ingress 9.6. View Ingress Controller logs You can view your Ingress Controller logs. Procedure View your Ingress Controller logs: USD oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name> 9.7. View Ingress Controller status You can view the status of a particular Ingress Controller. Procedure View the status of an Ingress Controller: USD oc describe --namespace=openshift-ingress-operator ingresscontroller/<name> 9.8. Creating a custom Ingress Controller As a cluster administrator, you can create a new custom Ingress Controller. Because the default Ingress Controller might change during OpenShift Container Platform updates, creating a custom Ingress Controller can be helpful when manually maintaining a configuration that persists across cluster updates. This example provides a minimal spec for a custom Ingress Controller. To further customize your custom Ingress Controller, see "Configuring the Ingress Controller". Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a YAML file that defines the custom IngressController object: Example custom-ingress-controller.yaml file apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <custom_name> 1 namespace: openshift-ingress-operator spec: defaultCertificate: name: <custom-ingress-custom-certs> 2 replicas: 1 3 domain: <custom_domain> 4 1 Specify a custom name for the IngressController object. 2 Specify the name of the secret with the custom wildcard certificate. 3 The minimum number of replicas is 1. 4 Set the domain to your domain name. The domain specified on the IngressController object and the domain used for the certificate must match. For example, if the domain value is "custom_domain.mycompany.com", then the certificate must have SAN *.custom_domain.mycompany.com (with the *. added to the domain). Create the object by running the following command: USD oc create -f custom-ingress-controller.yaml 9.9. Configuring the Ingress Controller 9.9.1.
Setting a custom default certificate As an administrator, you can configure an Ingress Controller to use a custom certificate by creating a Secret resource and editing the IngressController custom resource (CR). Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is signed by a trusted certificate authority or by a private trusted certificate authority that you configured in a custom PKI. Your certificate meets the following requirements: The certificate is valid for the ingress domain. The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com . You must have an IngressController CR. You may use the default one: USD oc --namespace openshift-ingress-operator get ingresscontrollers Example output NAME AGE default 10m Note If you have intermediate certificates, they must be included in the tls.crt file of the secret containing a custom default certificate. Order matters when specifying a certificate; list your intermediate certificate(s) after any server certificate(s). Procedure The following assumes that the custom certificate and key pair are in the tls.crt and tls.key files in the current working directory. Substitute the actual path names for tls.crt and tls.key . You also may substitute another name for custom-certs-default when creating the Secret resource and referencing it in the IngressController CR. Note This action will cause the Ingress Controller to be redeployed, using a rolling deployment strategy. Create a Secret resource containing the custom certificate in the openshift-ingress namespace using the tls.crt and tls.key files. USD oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key Update the IngressController CR to reference the new certificate secret: USD oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default \ --patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}' Verify the update was effective: USD echo Q |\ openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null |\ openssl x509 -noout -subject -issuer -enddate where: <domain> Specifies the base domain name for your cluster. Example output subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM Tip You can alternatively apply the following YAML to set a custom default certificate: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: defaultCertificate: name: custom-certs-default The certificate secret name should match the value used to update the CR. Once the IngressController CR has been modified, the Ingress Operator updates the Ingress Controller's deployment to use the custom certificate. 9.9.2. Removing a custom default certificate As an administrator, you can remove a custom certificate that you configured an Ingress Controller to use. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You previously configured a custom default certificate for the Ingress Controller. 
Procedure To remove the custom certificate and restore the certificate that ships with OpenShift Container Platform, enter the following command: USD oc patch -n openshift-ingress-operator ingresscontrollers/default \ --type json -p USD'- op: remove\n path: /spec/defaultCertificate' There can be a delay while the cluster reconciles the new certificate configuration. Verification To confirm that the original cluster certificate is restored, enter the following command: USD echo Q | \ openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | \ openssl x509 -noout -subject -issuer -enddate where: <domain> Specifies the base domain name for your cluster. Example output subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT 9.9.3. Autoscaling an Ingress Controller You can automatically scale an Ingress Controller to dynamically meet routing performance or availability requirements, such as the requirement to increase throughput. The following procedure provides an example for scaling up the default Ingress Controller. Prerequisites You have the OpenShift CLI ( oc ) installed. You have access to an OpenShift Container Platform cluster as a user with the cluster-admin role. You installed the Custom Metrics Autoscaler Operator and an associated KEDA Controller. You can install the Operator by using OperatorHub on the web console. After you install the Operator, you can create an instance of KedaController . Procedure Create a service account to authenticate with Thanos by running the following command: USD oc create -n openshift-ingress-operator serviceaccount thanos && oc describe -n openshift-ingress-operator serviceaccount thanos Example output Name: thanos Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-kfvf2 Mountable secrets: thanos-dockercfg-kfvf2 Tokens: thanos-token-c422q Events: <none> Optional: Manually create the service account secret token with the following command: Important If you disable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, the image pull secret is not generated for each service account. In this situation, you must perform this step. USD oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: thanos-token namespace: openshift-ingress-operator annotations: kubernetes.io/service-account.name: thanos type: kubernetes.io/service-account-token EOF Define a TriggerAuthentication object within the openshift-ingress-operator namespace by using the service account's token. 
Define the secret variable that contains the secret by running the following command: USD secret=USD(oc get secret -n openshift-ingress-operator | grep thanos-token | head -n 1 | awk '{ print USD1 }') Create the TriggerAuthentication object and pass the value of the secret variable to the TOKEN parameter: USD oc process TOKEN="USDsecret" -f - <<EOF | oc apply -n openshift-ingress-operator -f - apiVersion: template.openshift.io/v1 kind: Template parameters: - name: TOKEN objects: - apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: - parameter: bearerToken name: \USD{TOKEN} key: token - parameter: ca name: \USD{TOKEN} key: ca.crt EOF Create and apply a role for reading metrics from Thanos: Create a new role, thanos-metrics-reader.yaml , that reads metrics from pods and nodes: thanos-metrics-reader.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader namespace: openshift-ingress-operator rules: - apiGroups: - "" resources: - pods - nodes verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch - apiGroups: - "" resources: - namespaces verbs: - get Apply the new role by running the following command: USD oc apply -f thanos-metrics-reader.yaml Add the new role to the service account by entering the following commands: USD oc adm policy -n openshift-ingress-operator add-role-to-user thanos-metrics-reader -z thanos --role-namespace=openshift-ingress-operator USD oc adm policy -n openshift-ingress-operator add-cluster-role-to-user cluster-monitoring-view -z thanos Note The argument add-cluster-role-to-user is only required if you use cross-namespace queries. The following step uses a query from the kube-metrics namespace which requires this argument. Create a new ScaledObject YAML file, ingress-autoscaler.yaml , that targets the default Ingress Controller deployment: Example ScaledObject definition apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: ingress-scaler namespace: openshift-ingress-operator spec: scaleTargetRef: 1 apiVersion: operator.openshift.io/v1 kind: IngressController name: default envSourceContainerName: ingress-operator minReplicaCount: 1 maxReplicaCount: 20 2 cooldownPeriod: 1 pollingInterval: 1 triggers: - type: prometheus metricType: AverageValue metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 3 namespace: openshift-ingress-operator 4 metricName: 'kube-node-role' threshold: '1' query: 'sum(kube_node_role{role="worker",service="kube-state-metrics"})' 5 authModes: "bearer" authenticationRef: name: keda-trigger-auth-prometheus 1 The custom resource that you are targeting. In this case, the Ingress Controller. 2 Optional: The maximum number of replicas. If you omit this field, the default maximum is set to 100 replicas. 3 The Thanos service endpoint in the openshift-monitoring namespace. 4 The Ingress Operator namespace. 5 This expression evaluates to however many worker nodes are present in the deployed cluster. Important If you are using cross-namespace queries, you must target port 9091 and not port 9092 in the serverAddress field. You also must have elevated privileges to read metrics from this port. 
Apply the custom resource definition by running the following command: USD oc apply -f ingress-autoscaler.yaml Verification Verify that the default Ingress Controller is scaled out to match the value returned by the kube-state-metrics query by running the following commands: Use the grep command to search the Ingress Controller YAML file for replicas: USD oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas: Example output replicas: 3 Get the pods in the openshift-ingress project: USD oc get pods -n openshift-ingress Example output NAME READY STATUS RESTARTS AGE router-default-7b5df44ff-l9pmm 2/2 Running 0 17h router-default-7b5df44ff-s5sl5 2/2 Running 0 3d22h router-default-7b5df44ff-wwsth 2/2 Running 0 66s Additional resources Installing the custom metrics autoscaler Enabling monitoring for user-defined projects Understanding custom metrics autoscaler trigger authentications Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring Understanding how to add custom metrics autoscalers 9.9.4. Scaling an Ingress Controller Manually scale an Ingress Controller to meet routing performance or availability requirements, such as the requirement to increase throughput. oc commands are used to scale the IngressController resource. The following procedure provides an example for scaling up the default IngressController . Note Scaling is not an immediate action, as it takes time to create the desired number of replicas. Procedure View the current number of available replicas for the default IngressController : USD oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}' Example output 2 Scale the default IngressController to the desired number of replicas using the oc patch command. The following example scales the default IngressController to 3 replicas: USD oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge Example output ingresscontroller.operator.openshift.io/default patched Verify that the default IngressController scaled to the number of replicas that you specified: USD oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}' Example output 3 Tip You can alternatively apply the following YAML to scale an Ingress Controller to three replicas: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 3 1 1 If you need a different number of replicas, change the replicas value. 9.9.5. Configuring Ingress access logging You can configure the Ingress Controller to enable access logs. If you have clusters that do not receive much traffic, then you can log to a sidecar. If you have high traffic clusters, to avoid exceeding the capacity of the logging stack or to integrate with a logging infrastructure outside of OpenShift Container Platform, you can forward logs to a custom syslog endpoint. You can also specify the format for access logs. Container logging is useful to enable access logs on low-traffic clusters when there is no existing Syslog logging infrastructure, or for short-term use while diagnosing problems with the Ingress Controller. Syslog is needed for high-traffic clusters where access logs could exceed the OpenShift Logging stack's capacity, or for environments where any logging solution needs to integrate with an existing Syslog logging infrastructure.
The Syslog use-cases can overlap. Prerequisites Log in as a user with cluster-admin privileges. Procedure Configure Ingress access logging to a sidecar. To configure Ingress access logging, you must specify a destination using spec.logging.access.destination . To specify logging to a sidecar container, you must specify Container spec.logging.access.destination.type . The following example is an Ingress Controller definition that logs to a Container destination: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container When you configure the Ingress Controller to log to a sidecar, the operator creates a container named logs inside the Ingress Controller Pod: USD oc -n openshift-ingress logs deployment.apps/router-default -c logs Example output 2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 "GET / HTTP/1.1" Configure Ingress access logging to a Syslog endpoint. To configure Ingress access logging, you must specify a destination using spec.logging.access.destination . To specify logging to a Syslog endpoint destination, you must specify Syslog for spec.logging.access.destination.type . If the destination type is Syslog , you must also specify a destination endpoint using spec.logging.access.destination.syslog.endpoint and you can specify a facility using spec.logging.access.destination.syslog.facility . The following example is an Ingress Controller definition that logs to a Syslog destination: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 Note The syslog destination port must be UDP. Configure Ingress access logging with a specific log format. You can specify spec.logging.access.httpLogFormat to customize the log format. The following example is an Ingress Controller definition that logs to a syslog endpoint with IP address 1.2.3.4 and port 10514: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV' Disable Ingress access logging. To disable Ingress access logging, leave spec.logging or spec.logging.access empty: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null Allow the Ingress Controller to modify the HAProxy log length when using a sidecar. Use spec.logging.access.destination.syslog.maxLength if you are using spec.logging.access.destination.type: Syslog . apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 maxLength: 4096 port: 10514 Use spec.logging.access.destination.container.maxLength if you are using spec.logging.access.destination.type: Container . 
apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container container: maxLength: 8192 9.9.6. Setting Ingress Controller thread count A cluster administrator can set the thread count to increase the amount of incoming connections a cluster can handle. You can patch an existing Ingress Controller to increase the amount of threads. Prerequisites The following assumes that you already created an Ingress Controller. Procedure Update the Ingress Controller to increase the number of threads: USD oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"threadCount": 8}}}' Note If you have a node that is capable of running large amounts of resources, you can configure spec.nodePlacement.nodeSelector with labels that match the capacity of the intended node, and configure spec.tuningOptions.threadCount to an appropriately high value. 9.9.7. Configuring an Ingress Controller to use an internal load balancer When creating an Ingress Controller on cloud platforms, the Ingress Controller is published by a public cloud load balancer by default. As an administrator, you can create an Ingress Controller that uses an internal cloud load balancer. Warning If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet. Important If you want to change the scope for an IngressController , you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created. Figure 9.1. Diagram of LoadBalancer The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress LoadBalancerService endpoint publishing strategy: You can load balance externally, using the cloud provider load balancer, or internally, using the OpenShift Ingress Controller Load Balancer. You can use the single IP address of the load balancer and more familiar ports, such as 8080 and 4200 as shown on the cluster depicted in the graphic. Traffic from the external load balancer is directed at the pods, and managed by the load balancer, as depicted in the instance of a down node. See the Kubernetes Services documentation for implementation details. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IngressController custom resource (CR) in a file named <name>-ingress-controller.yaml , such as in the following example: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3 1 Replace <name> with a name for the IngressController object. 2 Specify the domain for the application published by the controller. 3 Specify a value of Internal to use an internal load balancer. Create the Ingress Controller defined in the step by running the following command: USD oc create -f <name>-ingress-controller.yaml 1 1 Replace <name> with the name of the IngressController object. Optional: Confirm that the Ingress Controller was created by running the following command: USD oc --all-namespaces=true get ingresscontrollers 9.9.8. 
Configuring global access for an Ingress Controller on GCP An Ingress Controller created on GCP with an internal load balancer generates an internal IP address for the service. A cluster administrator can specify the global access option, which enables clients in any region within the same VPC network and compute region as the load balancer, to reach the workloads running on your cluster. For more information, see the GCP documentation for global access . Prerequisites You deployed an OpenShift Container Platform cluster on GCP infrastructure. You configured an Ingress Controller to use an internal load balancer. You installed the OpenShift CLI ( oc ). Procedure Configure the Ingress Controller resource to allow global access. Note You can also create an Ingress Controller and specify the global access option. Configure the Ingress Controller resource: USD oc -n openshift-ingress-operator edit ingresscontroller/default Edit the YAML file: Sample clientAccess configuration to Global spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal type: LoadBalancerService 1 Set gcp.clientAccess to Global . Save the file to apply the changes. Run the following command to verify that the service allows global access: USD oc -n openshift-ingress edit svc/router-default -o yaml The output shows that global access is enabled for GCP with the annotation, networking.gke.io/internal-load-balancer-allow-global-access . 9.9.9. Setting the Ingress Controller health check interval A cluster administrator can set the health check interval to define how long the router waits between two consecutive health checks. This value is applied globally as a default for all routes. The default value is 5 seconds. Prerequisites The following assumes that you already created an Ingress Controller. Procedure Update the Ingress Controller to change the interval between back end health checks: USD oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"healthCheckInterval": "8s"}}}' Note To override the healthCheckInterval for a single route, use the route annotation router.openshift.io/haproxy.health.check.interval 9.9.10. Configuring the default Ingress Controller for your cluster to be internal You can configure the default Ingress Controller for your cluster to be internal by deleting and recreating it. Warning If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet. Important If you want to change the scope for an IngressController , you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Configure the default Ingress Controller for your cluster to be internal by deleting and recreating it. USD oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF 9.9.11. Configuring the route admission policy Administrators and application developers can run applications in multiple namespaces with the same domain name. 
This is for organizations where multiple teams develop microservices that are exposed on the same hostname. Warning Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces. Prerequisites Cluster administrator privileges. Procedure Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge Sample Ingress Controller configuration spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed ... Tip You can alternatively apply the following YAML to configure the route admission policy: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed 9.9.12. Using wildcard routes The HAProxy Ingress Controller has support for wildcard routes. The Ingress Operator uses wildcardPolicy to configure the ROUTER_ALLOW_WILDCARD_ROUTES environment variable of the Ingress Controller. The default behavior of the Ingress Controller is to admit routes with a wildcard policy of None , which is backwards compatible with existing IngressController resources. Procedure Configure the wildcard policy. Use the following command to edit the IngressController resource: USD oc edit IngressController Under spec , set the wildcardPolicy field to WildcardsDisallowed or WildcardsAllowed : spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed 9.9.13. HTTP header configuration OpenShift Container Platform provides different methods for working with HTTP headers. When setting or deleting headers, you can use specific fields in the Ingress Controller or an individual route to modify request and response headers. You can also set certain headers by using route annotations. The various ways of configuring headers can present challenges when working together. Note You can only set or delete headers within an IngressController or Route CR, you cannot append them. If an HTTP header is set with a value, that value must be complete and not require appending in the future. In situations where it makes sense to append a header, such as the X-Forwarded-For header, use the spec.httpHeaders.forwardedHeaderPolicy field, instead of spec.httpHeaders.actions . 9.9.13.1. Order of precedence When the same HTTP header is modified both in the Ingress Controller and in a route, HAProxy prioritizes the actions in certain ways depending on whether it is a request or response header. For HTTP response headers, actions specified in the Ingress Controller are executed after the actions specified in a route. This means that the actions specified in the Ingress Controller take precedence. For HTTP request headers, actions specified in a route are executed after the actions specified in the Ingress Controller. This means that the actions specified in the route take precedence. For example, a cluster administrator sets the X-Frame-Options response header with the value DENY in the Ingress Controller using the following configuration: Example IngressController spec apiVersion: operator.openshift.io/v1 kind: IngressController # ... 
spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: DENY A route owner sets the same response header that the cluster administrator set in the Ingress Controller, but with the value SAMEORIGIN using the following configuration: Example Route spec apiVersion: route.openshift.io/v1 kind: Route # ... spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: SAMEORIGIN When both the IngressController spec and Route spec configure the X-Frame-Options header, the value set for this header at the global level in the Ingress Controller will take precedence, even if a specific route allows frames. This prioritization occurs because the haproxy.config file uses the following logic, where the Ingress Controller is considered the front end and individual routes are considered the back end. The header value DENY applied to the front end configurations overrides the same header with the value SAMEORIGIN that is set in the back end: frontend public http-response set-header X-Frame-Options 'DENY' frontend fe_sni http-response set-header X-Frame-Options 'DENY' frontend fe_no_sni http-response set-header X-Frame-Options 'DENY' backend be_secure:openshift-monitoring:alertmanager-main http-response set-header X-Frame-Options 'SAMEORIGIN' Additionally, any actions defined in either the Ingress Controller or a route override values set using route annotations. 9.9.13.2. Special case headers The following headers are either prevented entirely from being set or deleted, or allowed under specific circumstances: Table 9.2. Special case header configuration options Header name Configurable using IngressController spec Configurable using Route spec Reason for disallowment Configurable using another method proxy No No The proxy HTTP request header can be used to exploit vulnerable CGI applications by injecting the header value into the HTTP_PROXY environment variable. The proxy HTTP request header is also non-standard and prone to error during configuration. No host No Yes When the host HTTP request header is set using the IngressController CR, HAProxy can fail when looking up the correct route. No strict-transport-security No No The strict-transport-security HTTP response header is already handled using route annotations and does not need a separate implementation. Yes: the haproxy.router.openshift.io/hsts_header route annotation cookie and set-cookie No No The cookies that HAProxy sets are used for session tracking to map client connections to particular back-end servers. Allowing these headers to be set could interfere with HAProxy's session affinity and restrict HAProxy's ownership of a cookie. Yes: the haproxy.router.openshift.io/disable_cookie route annotation the haproxy.router.openshift.io/cookie_name route annotation 9.9.14. Setting or deleting HTTP request and response headers in an Ingress Controller You can set or delete certain HTTP request and response headers for compliance purposes or other reasons. You can set or delete these headers either for all routes served by an Ingress Controller or for specific routes. For example, you might want to migrate an application running on your cluster to use mutual TLS, which requires that your application checks for an X-Forwarded-Client-Cert request header, but the OpenShift Container Platform default Ingress Controller provides an X-SSL-Client-Der request header.
The following procedure modifies the Ingress Controller to set the X-Forwarded-Client-Cert request header, and delete the X-SSL-Client-Der request header. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to an OpenShift Container Platform cluster as a user with the cluster-admin role. Procedure Edit the Ingress Controller resource: USD oc -n openshift-ingress-operator edit ingresscontroller/default Replace the X-SSL-Client-Der HTTP request header with the X-Forwarded-Client-Cert HTTP request header: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: actions: 1 request: 2 - name: X-Forwarded-Client-Cert 3 action: type: Set 4 set: value: "%{+Q}[ssl_c_der,base64]" 5 - name: X-SSL-Client-Der action: type: Delete 1 The list of actions you want to perform on the HTTP headers. 2 The type of header you want to change. In this case, a request header. 3 The name of the header you want to change. For a list of available headers you can set or delete, see HTTP header configuration . 4 The type of action being taken on the header. This field can have the value Set or Delete . 5 When setting HTTP headers, you must provide a value . The value can be a string from a list of available directives for that header, for example DENY , or it can be a dynamic value that will be interpreted using HAProxy's dynamic value syntax. In this case, a dynamic value is added. Note For setting dynamic header values for HTTP responses, allowed sample fetchers are res.hdr and ssl_c_der . For setting dynamic header values for HTTP requests, allowed sample fetchers are req.hdr and ssl_c_der . Both request and response dynamic values can use the lower and base64 converters. Save the file to apply the changes. 9.9.15. Using X-Forwarded headers You configure the HAProxy Ingress Controller to specify a policy for how to handle HTTP headers including Forwarded and X-Forwarded-For . The Ingress Operator uses the HTTPHeaders field to configure the ROUTER_SET_FORWARDED_HEADERS environment variable of the Ingress Controller. Procedure Configure the HTTPHeaders field for the Ingress Controller. Use the following command to edit the IngressController resource: USD oc edit IngressController Under spec , set the HTTPHeaders policy field to Append , Replace , IfNone , or Never : apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append Example use cases As a cluster administrator, you can: Configure an external proxy that injects the X-Forwarded-For header into each request before forwarding it to an Ingress Controller. To configure the Ingress Controller to pass the header through unmodified, you specify the never policy. The Ingress Controller then never sets the headers, and applications receive only the headers that the external proxy provides. Configure the Ingress Controller to pass the X-Forwarded-For header that your external proxy sets on external cluster requests through unmodified. To configure the Ingress Controller to set the X-Forwarded-For header on internal cluster requests, which do not go through the external proxy, specify the if-none policy. If an HTTP request already has the header set through the external proxy, then the Ingress Controller preserves it. If the header is absent because the request did not come through the proxy, then the Ingress Controller adds the header. 
As an application developer, you can: Configure an application-specific external proxy that injects the X-Forwarded-For header. To configure an Ingress Controller to pass the header through unmodified for an application's Route, without affecting the policy for other Routes, add an annotation haproxy.router.openshift.io/set-forwarded-headers: if-none or haproxy.router.openshift.io/set-forwarded-headers: never on the Route for the application. Note You can set the haproxy.router.openshift.io/set-forwarded-headers annotation on a per route basis, independent from the globally set value for the Ingress Controller. 9.9.16. Enabling and disabling HTTP/2 on Ingress Controllers You can enable or disable transparent end-to-end HTTP/2 connectivity in HAProxy. This allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more. You can enable or disable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster. To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate. The connection from HAProxy to the application pod can use HTTP/2 only for re-encrypt routes and not for edge-terminated or insecure routes. This restriction is because HAProxy uses Application-Level Protocol Negotiation (ALPN), which is a TLS extension, to negotiate the use of HTTP/2 with the back-end. The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with edge-terminated routes. Note You can use HTTP/2 with an insecure route whether the Ingress Controller has HTTP/2 enabled or disabled. Important For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. This means a client may connect to the Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller might then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection using the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket to HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol. 9.9.16.1. Enabling HTTP/2 You can enable HTTP/2 on a specific Ingress Controller, or you can enable HTTP/2 for the entire cluster. Procedure To enable HTTP/2 on a specific Ingress Controller, enter the oc annotate command: USD oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true 1 1 Replace <ingresscontroller_name> with the name of an Ingress Controller to enable HTTP/2. 
To enable HTTP/2 for the entire cluster, enter the oc annotate command: USD oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true Tip Alternatively, you can apply the following YAML code to enable HTTP/2: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: "true" 9.9.16.2. Disabling HTTP/2 You can disable HTTP/2 on a specific Ingress Controller, or you can disable HTTP/2 for the entire cluster. Procedure To disable HTTP/2 on a specific Ingress Controller, enter the oc annotate command: USD oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=false 1 1 Replace <ingresscontroller_name> with the name of an Ingress Controller to disable HTTP/2. To disable HTTP/2 for the entire cluster, enter the oc annotate command: USD oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=false Tip Alternatively, you can apply the following YAML code to disable HTTP/2: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: "false" 9.9.17. Configuring the PROXY protocol for an Ingress Controller A cluster administrator can configure the PROXY protocol when an Ingress Controller uses either the HostNetwork , NodePortService , or Private endpoint publishing strategy types. The PROXY protocol enables the load balancer to preserve the original client addresses for connections that the Ingress Controller receives. The original client addresses are useful for logging, filtering, and injecting HTTP headers. In the default configuration, the connections that the Ingress Controller receives contain only the source IP address that is associated with the load balancer. Warning The default Ingress Controller with installer-provisioned clusters on non-cloud platforms that use a Keepalived Ingress Virtual IP (VIP) does not support the PROXY protocol. Important For a passthrough route configuration, servers in OpenShift Container Platform clusters cannot observe the original client source IP address. If you need to know the original client source IP address, configure Ingress access logging for your Ingress Controller so that you can view the client source IP addresses. For re-encrypt and edge routes, the OpenShift Container Platform router sets the Forwarded and X-Forwarded-For headers so that application workloads check the client source IP address. For more information about Ingress access logging, see "Configuring Ingress access logging". Configuring the PROXY protocol for an Ingress Controller is not supported when using the LoadBalancerService endpoint publishing strategy type. This restriction is because when OpenShift Container Platform runs in a cloud platform, and an Ingress Controller specifies that a service load balancer should be used, the Ingress Operator configures the load balancer service and enables the PROXY protocol based on the platform requirement for preserving source addresses.
Important You must configure both OpenShift Container Platform and the external load balancer to either use the PROXY protocol or to use Transmission Control Protocol (TCP). This feature is not supported in cloud deployments. Prerequisites You created an Ingress Controller. Procedure Edit the Ingress Controller resource by entering the following command in your CLI: USD oc -n openshift-ingress-operator edit ingresscontroller/default Set the PROXY configuration: If your Ingress Controller uses the HostNetwork endpoint publishing strategy type, set the spec.endpointPublishingStrategy.hostNetwork.protocol subfield to PROXY : Sample hostNetwork configuration to PROXY # ... spec: endpointPublishingStrategy: hostNetwork: protocol: PROXY type: HostNetwork # ... If your Ingress Controller uses the NodePortService endpoint publishing strategy type, set the spec.endpointPublishingStrategy.nodePort.protocol subfield to PROXY : Sample nodePort configuration to PROXY # ... spec: endpointPublishingStrategy: nodePort: protocol: PROXY type: NodePortService # ... If your Ingress Controller uses the Private endpoint publishing strategy type, set the spec.endpointPublishingStrategy.private.protocol subfield to PROXY : Sample private configuration to PROXY # ... spec: endpointPublishingStrategy: private: protocol: PROXY type: Private # ... Additional resources Configuring Ingress access logging 9.9.18. Specifying an alternative cluster domain using the appsDomain option As a cluster administrator, you can specify an alternative to the default cluster domain for user-created routes by configuring the appsDomain field. The appsDomain field is an optional domain for OpenShift Container Platform to use instead of the default, which is specified in the domain field. If you specify an alternative domain, it overrides the default cluster domain for the purpose of determining the default host for a new route. For example, you can use the DNS domain for your company as the default domain for routes and ingresses for applications running on your cluster. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc command line interface. Procedure Configure the appsDomain field by specifying an alternative default domain for user-created routes. Edit the ingress cluster resource: USD oc edit ingresses.config/cluster -o yaml Edit the YAML file: Sample appsDomain configuration to test.example.com apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2 1 Specifies the default domain. You cannot modify the default domain after installation. 2 Optional: Domain for OpenShift Container Platform infrastructure to use for application routes. Instead of the default prefix, apps , you can use an alternative prefix like test . Verify that an existing route contains the domain name specified in the appsDomain field by exposing the route and verifying the route domain change: Note Wait for the openshift-apiserver to finish rolling updates before exposing the route.
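One way to check that the openshift-apiserver rollout has finished, offered here as an optional sketch rather than a required step, is to watch its cluster Operator until it is no longer progressing:
oc get clusteroperator openshift-apiserver
oc wait --for=condition=Progressing=False clusteroperator/openshift-apiserver --timeout=300s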
Expose the route: USD oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed Example output: USD oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None 9.9.19. Converting HTTP header case HAProxy lowercases HTTP header names by default; for example, changing Host: xyz.com to host: xyz.com . If legacy applications are sensitive to the capitalization of HTTP header names, use the Ingress Controller spec.httpHeaders.headerNameCaseAdjustments API field for a solution to accommodate legacy applications until they can be fixed. Important OpenShift Container Platform includes HAProxy 2.6. If you want to update to this version of the web-based load balancer, ensure that you add the spec.httpHeaders.headerNameCaseAdjustments section to your cluster's configuration file. As a cluster administrator, you can convert the HTTP header case by entering the oc patch command or by setting the HeaderNameCaseAdjustments field in the Ingress Controller YAML file. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Procedure Capitalize an HTTP header by using the oc patch command. Change the HTTP header from host to Host by running the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"httpHeaders":{"headerNameCaseAdjustments":["Host"]}}}' Create a Route resource YAML file so that the annotation can be applied to the application. Example of a route named my-application apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: <application_name> namespace: <application_name> # ... 1 Set haproxy.router.openshift.io/h1-adjust-case so that the Ingress Controller can adjust the host request header as specified. Specify adjustments by configuring the HeaderNameCaseAdjustments field in the Ingress Controller YAML configuration file. The following example Ingress Controller YAML file adjusts the host header to Host for HTTP/1 requests to appropriately annotated routes: Example Ingress Controller YAML apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: headerNameCaseAdjustments: - Host The following example route enables HTTP response header name case adjustments by using the haproxy.router.openshift.io/h1-adjust-case annotation: Example route YAML apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: my-application namespace: my-application spec: to: kind: Service name: my-application 1 Set haproxy.router.openshift.io/h1-adjust-case to true. 9.9.20. Using router compression You configure the HAProxy Ingress Controller to specify router compression globally for specific MIME types. You can use the mimeTypes variable to define the formats of MIME types to which compression is applied. The types are: application, image, message, multipart, text, video, or a custom type prefaced by "X-". To see the full notation for MIME types and subtypes, see RFC1341 . Note Memory allocated for compression can affect the max connections. Additionally, compression of large buffers can cause latency, like heavy regex or long lists of regex. 
Not all MIME types benefit from compression, but HAProxy still uses resources to try to compress if instructed to. Generally, text formats, such as html, css, and js, benefit from compression, but formats that are already compressed, such as image, audio, and video, benefit little in exchange for the time and resources spent on compression. Procedure Configure the httpCompression field for the Ingress Controller. Use the following command to edit the IngressController resource: USD oc edit -n openshift-ingress-operator ingresscontrollers/default Under spec , set the httpCompression policy field to mimeTypes and specify a list of MIME types that should have compression applied: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpCompression: mimeTypes: - "text/html" - "text/css; charset=utf-8" - "application/json" ... 9.9.21. Exposing router metrics The HAProxy router metrics are exposed by default in Prometheus format on the default stats port, 1936. External metrics collection and aggregation systems, such as Prometheus, can access the HAProxy router metrics. You can view the HAProxy router metrics in a browser in HTML and comma-separated values (CSV) formats. Prerequisites You configured your firewall to access the default stats port, 1936. Procedure Get the router pod name by running the following command: USD oc get pods -n openshift-ingress Example output NAME READY STATUS RESTARTS AGE router-default-76bfffb66c-46qwp 1/1 Running 0 11h Get the router's username and password, which the router pod stores in the /var/lib/haproxy/conf/metrics-auth/statsUsername and /var/lib/haproxy/conf/metrics-auth/statsPassword files: Get the username by running the following command: USD oc rsh <router_pod_name> cat metrics-auth/statsUsername Get the password by running the following command: USD oc rsh <router_pod_name> cat metrics-auth/statsPassword Get the router IP and metrics certificates by running the following command: USD oc describe pod <router_pod> Get the raw statistics in Prometheus format by running the following command: USD curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics Access the metrics securely by running the following command: USD curl -u user:password https://<router_IP>:<stats_port>/metrics -k Access the default stats port, 1936, by running the following command: USD curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics Example 9.1. Example output ... # HELP haproxy_backend_connections_total Total number of connections. # TYPE haproxy_backend_connections_total gauge haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route"} 0 haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route-alt"} 0 haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route01"} 0 ... # HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value. # TYPE haproxy_exporter_server_threshold gauge haproxy_exporter_server_threshold{type="current"} 11 haproxy_exporter_server_threshold{type="limit"} 500 ... # HELP haproxy_frontend_bytes_in_total Current total of incoming bytes. # TYPE haproxy_frontend_bytes_in_total gauge haproxy_frontend_bytes_in_total{frontend="fe_no_sni"} 0 haproxy_frontend_bytes_in_total{frontend="fe_sni"} 0 haproxy_frontend_bytes_in_total{frontend="public"} 119070 ... # HELP haproxy_server_bytes_in_total Current total of incoming bytes.
# TYPE haproxy_server_bytes_in_total gauge haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_no_sni",service=""} 0 haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_sni",service=""} 0 haproxy_server_bytes_in_total{namespace="default",pod="docker-registry-5-nk5fz",route="docker-registry",server="10.130.0.89:5000",service="docker-registry"} 0 haproxy_server_bytes_in_total{namespace="default",pod="hello-rc-vkjqx",route="hello-route",server="10.130.0.90:8080",service="hello-svc-1"} 0 ... Launch the stats window by entering the following URL in a browser: http://<user>:<password>@<router_IP>:<stats_port> Optional: Get the stats in CSV format by entering the following URL in a browser: http://<user>:<password>@<router_ip>:1936/metrics;csv 9.9.22. Customizing HAProxy error code response pages As a cluster administrator, you can specify a custom error code response page for either 503, 404, or both error pages. The HAProxy router serves a 503 error page when the application pod is not running or a 404 error page when the requested URL does not exist. For example, if you customize the 503 error code response page, then the page is served when the application pod is not running, and the default 404 error code HTTP response page is served by the HAProxy router for an incorrect route or a non-existing route. Custom error code response pages are specified in a config map then patched to the Ingress Controller. The config map keys have two available file names as follows: error-page-503.http and error-page-404.http . Custom HTTP error code response pages must follow the HAProxy HTTP error page configuration guidelines . Here is an example of the default OpenShift Container Platform HAProxy router http 503 error code response page . You can use the default content as a template for creating your own custom page. By default, the HAProxy router serves only a 503 error page when the application is not running or when the route is incorrect or non-existent. This default behavior is the same as the behavior on OpenShift Container Platform 4.8 and earlier. If a config map for the customization of an HTTP error code response is not provided, and you are using a custom HTTP error code response page, the router serves a default 404 or 503 error code response page. Note If you use the OpenShift Container Platform default 503 error code page as a template for your customizations, the headers in the file require an editor that can use CRLF line endings. Procedure Create a config map named my-custom-error-code-pages in the openshift-config namespace: USD oc -n openshift-config create configmap my-custom-error-code-pages \ --from-file=error-page-503.http \ --from-file=error-page-404.http Important If you do not specify the correct format for the custom error code response page, a router pod outage occurs. To resolve this outage, you must delete or correct the config map and delete the affected router pods so they can be recreated with the correct information. Patch the Ingress Controller to reference the my-custom-error-code-pages config map by name: USD oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"httpErrorCodePages":{"name":"my-custom-error-code-pages"}}}' --type=merge The Ingress Operator copies the my-custom-error-code-pages config map from the openshift-config namespace to the openshift-ingress namespace. 
The Operator names the config map according to the pattern, <your_ingresscontroller_name>-errorpages , in the openshift-ingress namespace. Display the copy: USD oc get cm default-errorpages -n openshift-ingress Example output 1 The example config map name is default-errorpages because the default Ingress Controller custom resource (CR) was patched. Confirm that the config map containing the custom error response page mounts on the router volume where the config map key is the filename that has the custom HTTP error code response: For 503 custom HTTP custom error code response: USD oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http For 404 custom HTTP custom error code response: USD oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http Verification Verify your custom error code HTTP response: Create a test project and application: USD oc new-project test-ingress USD oc new-app django-psql-example For 503 custom http error code response: Stop all the pods for the application. Run the following curl command or visit the route hostname in the browser: USD curl -vk <route_hostname> For 404 custom http error code response: Visit a non-existent route or an incorrect route. Run the following curl command or visit the route hostname in the browser: USD curl -vk <route_hostname> Check if the errorfile attribute is properly in the haproxy.config file: USD oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile 9.9.23. Setting the Ingress Controller maximum connections A cluster administrator can set the maximum number of simultaneous connections for OpenShift router deployments. You can patch an existing Ingress Controller to increase the maximum number of connections. Prerequisites The following assumes that you already created an Ingress Controller Procedure Update the Ingress Controller to change the maximum number of connections for HAProxy: USD oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"maxConnections": 7500}}}' Warning If you set the spec.tuningOptions.maxConnections value greater than the current operating system limit, the HAProxy process will not start. See the table in the "Ingress Controller configuration parameters" section for more information about this parameter. 9.10. Additional resources Configuring a custom PKI
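After patching spec.tuningOptions.maxConnections, you can optionally confirm the new limit in the rendered HAProxy configuration by reusing the rsh pattern shown earlier in this section; the router pod name is a placeholder:
oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/haproxy.config | grep maxconn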
|
[
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com",
"nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists",
"httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE",
"httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe IngressController default -n openshift-ingress-operator",
"Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1 Subject: SOME SIGNED CERT X509v3 CRL Distribution Points: Full Name: URI:http://crl.example.com/example.crl",
"oc create configmap router-ca-certs-default --from-file=ca-bundle.pem=client-ca.crt \\ 1 -n openshift-config",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: clientTLS: clientCertificatePolicy: Required clientCA: name: router-ca-certs-default allowedSubjectPatterns: - \"^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShiftUSD\"",
"openssl x509 -in custom-cert.pem -noout -subject subject= /CN=example.com/ST=NC/C=US/O=Security/OU=OpenShift",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/default",
"oc describe clusteroperators/ingress",
"oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <custom_name> 1 namespace: openshift-ingress-operator spec: defaultCertificate: name: <custom-ingress-custom-certs> 2 replicas: 1 3 domain: <custom_domain> 4",
"oc create -f custom-ingress-controller.yaml",
"oc --namespace openshift-ingress-operator get ingresscontrollers",
"NAME AGE default 10m",
"oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key",
"oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default --patch '{\"spec\":{\"defaultCertificate\":{\"name\":\"custom-certs-default\"}}}'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: defaultCertificate: name: custom-certs-default",
"oc patch -n openshift-ingress-operator ingresscontrollers/default --type json -p USD'- op: remove\\n path: /spec/defaultCertificate'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT",
"oc create -n openshift-ingress-operator serviceaccount thanos && oc describe -n openshift-ingress-operator serviceaccount thanos",
"Name: thanos Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-kfvf2 Mountable secrets: thanos-dockercfg-kfvf2 Tokens: thanos-token-c422q Events: <none>",
"oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: thanos-token namespace: openshift-ingress-operator annotations: kubernetes.io/service-account.name: thanos type: kubernetes.io/service-account-token EOF",
"secret=USD(oc get secret -n openshift-ingress-operator | grep thanos-token | head -n 1 | awk '{ print USD1 }')",
"oc process TOKEN=\"USDsecret\" -f - <<EOF | oc apply -n openshift-ingress-operator -f - apiVersion: template.openshift.io/v1 kind: Template parameters: - name: TOKEN objects: - apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: - parameter: bearerToken name: \\USD{TOKEN} key: token - parameter: ca name: \\USD{TOKEN} key: ca.crt EOF",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader namespace: openshift-ingress-operator rules: - apiGroups: - \"\" resources: - pods - nodes verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch - apiGroups: - \"\" resources: - namespaces verbs: - get",
"oc apply -f thanos-metrics-reader.yaml",
"oc adm policy -n openshift-ingress-operator add-role-to-user thanos-metrics-reader -z thanos --role-namespace=openshift-ingress-operator",
"oc adm policy -n openshift-ingress-operator add-cluster-role-to-user cluster-monitoring-view -z thanos",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: ingress-scaler namespace: openshift-ingress-operator spec: scaleTargetRef: 1 apiVersion: operator.openshift.io/v1 kind: IngressController name: default envSourceContainerName: ingress-operator minReplicaCount: 1 maxReplicaCount: 20 2 cooldownPeriod: 1 pollingInterval: 1 triggers: - type: prometheus metricType: AverageValue metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 3 namespace: openshift-ingress-operator 4 metricName: 'kube-node-role' threshold: '1' query: 'sum(kube_node_role{role=\"worker\",service=\"kube-state-metrics\"})' 5 authModes: \"bearer\" authenticationRef: name: keda-trigger-auth-prometheus",
"oc apply -f ingress-autoscaler.yaml",
"oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas:",
"replicas: 3",
"oc get pods -n openshift-ingress",
"NAME READY STATUS RESTARTS AGE router-default-7b5df44ff-l9pmm 2/2 Running 0 17h router-default-7b5df44ff-s5sl5 2/2 Running 0 3d22h router-default-7b5df44ff-wwsth 2/2 Running 0 66s",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"2",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"replicas\": 3}}' --type=merge",
"ingresscontroller.operator.openshift.io/default patched",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"3",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 3 1",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container",
"oc -n openshift-ingress logs deployment.apps/router-default -c logs",
"2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 \"GET / HTTP/1.1\"",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 maxLength: 4096 port: 10514",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container container: maxLength: 8192",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"threadCount\": 8}}}'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3",
"oc create -f <name>-ingress-controller.yaml 1",
"oc --all-namespaces=true get ingresscontrollers",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal type: LoadBalancerService",
"oc -n openshift-ingress edit svc/router-default -o yaml",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"healthCheckInterval\": \"8s\"}}}'",
"oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"oc edit IngressController",
"spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: DENY",
"apiVersion: route.openshift.io/v1 kind: Route spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: SAMEORIGIN",
"frontend public http-response set-header X-Frame-Options 'DENY' frontend fe_sni http-response set-header X-Frame-Options 'DENY' frontend fe_no_sni http-response set-header X-Frame-Options 'DENY' backend be_secure:openshift-monitoring:alertmanager-main http-response set-header X-Frame-Options 'SAMEORIGIN'",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: actions: 1 request: 2 - name: X-Forwarded-Client-Cert 3 action: type: Set 4 set: value: \"%{+Q}[ssl_c_der,base64]\" 5 - name: X-SSL-Client-Der action: type: Delete",
"oc edit IngressController",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append",
"oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true 1",
"oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: \"true\"",
"oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=false 1",
"oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=false",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: \"false\"",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"spec: endpointPublishingStrategy: hostNetwork: protocol: PROXY type: HostNetwork",
"spec: endpointPublishingStrategy: nodePort: protocol: PROXY type: NodePortService",
"spec: endpointPublishingStrategy: private: protocol: PROXY type: Private",
"oc edit ingresses.config/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2",
"oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed",
"oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None",
"oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"httpHeaders\":{\"headerNameCaseAdjustments\":[\"Host\"]}}}'",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: <application_name> namespace: <application_name>",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: headerNameCaseAdjustments: - Host",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: my-application namespace: my-application spec: to: kind: Service name: my-application",
"oc edit -n openshift-ingress-operator ingresscontrollers/default",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpCompression: mimeTypes: - \"text/html\" - \"text/css; charset=utf-8\" - \"application/json\"",
"oc get pods -n openshift-ingress",
"NAME READY STATUS RESTARTS AGE router-default-76bfffb66c-46qwp 1/1 Running 0 11h",
"oc rsh <router_pod_name> cat metrics-auth/statsUsername",
"oc rsh <router_pod_name> cat metrics-auth/statsPassword",
"oc describe pod <router_pod>",
"curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics",
"curl -u user:password https://<router_IP>:<stats_port>/metrics -k",
"curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics",
"HELP haproxy_backend_connections_total Total number of connections. TYPE haproxy_backend_connections_total gauge haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route\"} 0 haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route-alt\"} 0 haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route01\"} 0 HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value. TYPE haproxy_exporter_server_threshold gauge haproxy_exporter_server_threshold{type=\"current\"} 11 haproxy_exporter_server_threshold{type=\"limit\"} 500 HELP haproxy_frontend_bytes_in_total Current total of incoming bytes. TYPE haproxy_frontend_bytes_in_total gauge haproxy_frontend_bytes_in_total{frontend=\"fe_no_sni\"} 0 haproxy_frontend_bytes_in_total{frontend=\"fe_sni\"} 0 haproxy_frontend_bytes_in_total{frontend=\"public\"} 119070 HELP haproxy_server_bytes_in_total Current total of incoming bytes. TYPE haproxy_server_bytes_in_total gauge haproxy_server_bytes_in_total{namespace=\"\",pod=\"\",route=\"\",server=\"fe_no_sni\",service=\"\"} 0 haproxy_server_bytes_in_total{namespace=\"\",pod=\"\",route=\"\",server=\"fe_sni\",service=\"\"} 0 haproxy_server_bytes_in_total{namespace=\"default\",pod=\"docker-registry-5-nk5fz\",route=\"docker-registry\",server=\"10.130.0.89:5000\",service=\"docker-registry\"} 0 haproxy_server_bytes_in_total{namespace=\"default\",pod=\"hello-rc-vkjqx\",route=\"hello-route\",server=\"10.130.0.90:8080\",service=\"hello-svc-1\"} 0",
"http://<user>:<password>@<router_IP>:<stats_port>",
"http://<user>:<password>@<router_ip>:1936/metrics;csv",
"oc -n openshift-config create configmap my-custom-error-code-pages --from-file=error-page-503.http --from-file=error-page-404.http",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"httpErrorCodePages\":{\"name\":\"my-custom-error-code-pages\"}}}' --type=merge",
"oc get cm default-errorpages -n openshift-ingress",
"NAME DATA AGE default-errorpages 2 25s 1",
"oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http",
"oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http",
"oc new-project test-ingress",
"oc new-app django-psql-example",
"curl -vk <route_hostname>",
"curl -vk <route_hostname>",
"oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"maxConnections\": 7500}}}'"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/configuring-ingress
|
Chapter 1. Security APIs
|
Chapter 1. Security APIs 1.1. CertificateSigningRequest [certificates.k8s.io/v1] Description CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued. Kubelets use this API to obtain: 1. client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client-kubelet" signerName). 2. serving certificates for TLS endpoints kube-apiserver can connect to securely (with the "kubernetes.io/kubelet-serving" signerName). This API can be used to request client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client" signerName), or to obtain certificates from custom non-Kubernetes signers. Type object 1.2. CredentialsRequest [cloudcredential.openshift.io/v1] Description CredentialsRequest is the Schema for the credentialsrequests API Type object 1.3. PodSecurityPolicyReview [security.openshift.io/v1] Description PodSecurityPolicyReview checks which service accounts (not users, since that would be cluster-wide) can create the PodTemplateSpec in question. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.4. PodSecurityPolicySelfSubjectReview [security.openshift.io/v1] Description PodSecurityPolicySelfSubjectReview checks whether this user/SA tuple can create the PodTemplateSpec Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.5. PodSecurityPolicySubjectReview [security.openshift.io/v1] Description PodSecurityPolicySubjectReview checks whether a particular user/SA tuple can create the PodTemplateSpec. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.6. RangeAllocation [security.openshift.io/v1] Description RangeAllocation is used so we can easily expose a RangeAllocation typed for security group Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object 1.7. Secret [v1] Description Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes. Type object 1.8. SecurityContextConstraints [security.openshift.io/v1] Description SecurityContextConstraints governs the ability to make requests that affect the SecurityContext that will be applied to a container. For historical reasons SCC was exposed under the core Kubernetes API group. That exposure is deprecated and will be removed in a future release - users should instead use the security.openshift.io group to manage SecurityContextConstraints. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.9. ServiceAccount [v1] Description ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets Type object
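As an illustration of the CertificateSigningRequest flow described above, an administrator typically lists pending requests and approves them by name; the CSR name below is a placeholder:
oc get csr
oc adm certificate approve <csr_name>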
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/security_apis/security-apis
|
9.2. pNFS
|
9.2. pNFS Support for Parallel NFS (pNFS) as part of the NFS v4.1 standard is available starting with Red Hat Enterprise Linux 6.4. The pNFS architecture improves the scalability of NFS, with possible improvements to performance. That is, when a server also implements pNFS, a client can access data through multiple sources concurrently. To enable this functionality, use the -o v4.1 mount option on mounts from a pNFS-enabled server. After the server is pNFS-enabled, the nfs_layout_nfsv41_files kernel module is automatically loaded on the first mount. If the module is successfully loaded, the following message is logged in the /var/log/messages file: To verify a successful NFSv4.1 mount, run: Important Once the server and client negotiate NFS v4.1 or higher, they automatically take advantage of pNFS if available. Both the client and server need to support the same "layout type". Possible layout types include files , blocks , objects , flexfiles , and SCSI . Starting with Red Hat Enterprise Linux 6.4, the client only supports the files layout type and uses pNFS only when the server also supports the files layout type. Red Hat recommends using the files profiles only with Red Hat Enterprise Linux 6.6 and later. For more information on pNFS, see http://www.pnfs.com .
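For example, assuming a pNFS-enabled server exporting /export (the server name and mount point are placeholders), the mount and a check that the layout driver module loaded might look like the following:
mount -t nfs -o v4.1 server.example.com:/export /mnt/pnfs
lsmod | grep nfs_layout_nfsv41_files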
|
[
"kernel: nfs4filelayout_init: NFSv4 File Layout Driver Registering",
"mount | grep /proc/mounts"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch09s02
|
Chapter 70. Kubernetes Nodes
|
Chapter 70. Kubernetes Nodes Since Camel 2.17 Both producer and consumer are supported The Kubernetes Nodes component is one of the Kubernetes Components which provides a producer to execute Kubernetes Node operations and a consumer to consume events related to Node objects. 70.1. Dependencies When using kubernetes-nodes with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 70.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 70.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 70.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 70.3. Component Options The Kubernetes Nodes component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 70.4. Endpoint Options The Kubernetes Nodes endpoint is configured using URI syntax: with the following path and query parameters: 70.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 70.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 70.5. Message Headers The Kubernetes Nodes component supports 6 message header(s), which are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNodesLabels (producer) Constant: KUBERNETES_NODES_LABELS The node labels. Map CamelKubernetesNodeName (producer) Constant: KUBERNETES_NODE_NAME The node name. String CamelKubernetesNodeSpec (producer) Constant: KUBERNETES_NODE_SPEC The spec for a node. NodeSpec CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 70.6. Supported producer operation listNodes listNodesByLabels getNode createNode updateNode deleteNode 70.7. Kubernetes Nodes Producer Examples listNodes: this operation lists the nodes on a Kubernetes cluster. from("direct:list"). toF("kubernetes-nodes:///?kubernetesClient=#kubernetesClient&operation=listNodes"). to("mock:result"); This operation returns a List of Nodes from your cluster. listNodesByLabels: this operation lists the nodes by labels on a Kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NODES_LABELS, labels); } }). toF("kubernetes-nodes:///?kubernetesClient=#kubernetesClient&operation=listNodesByLabels"). to("mock:result"); This operation returns a List of Nodes from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 70.8. Kubernetes Nodes Consumer Example fromF("kubernetes-nodes://%s?oauthToken=%s&resourceName=test", host, authToken).process(new KubernetesProcessor()).to("mock:result"); public class KubernetesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Node node = exchange.getIn().getBody(Node.class); log.info("Got event with node name: " + node.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a list of events for the node test. 70.9. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes.
Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. 
The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. 
The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
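To make the options above concrete, the following application.properties sketch combines a few of them to enable Kubernetes-based leader election and tune the kubernetes-nodes component. The property names are taken from the tables above; the concrete values (ConfigMap name, labels, timings) are illustrative assumptions rather than recommendations, and the bracketed map syntax for the labels assumes Spring Boot's relaxed binding:

# Enable the Kubernetes cluster service used for leader election (disabled by default)
camel.cluster.kubernetes.enabled=true
# ConfigMap used for optimistic locking ('leaders' is the documented default)
camel.cluster.kubernetes.config-map-name=leaders
# Labels identifying the pods that form the cluster (example values)
camel.cluster.kubernetes.cluster-labels[app]=my-camel-app
# Example lease/renew/retry timings; keep renew-deadline-millis below lease-duration-millis
camel.cluster.kubernetes.lease-duration-millis=15000
camel.cluster.kubernetes.renew-deadline-millis=10000
camel.cluster.kubernetes.retry-period-millis=2000

# Component-level settings for kubernetes-nodes (shown with their default values)
camel.component.kubernetes-nodes.enabled=true
camel.component.kubernetes-nodes.lazy-start-producer=false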
Maven dependency (Camel Spring Boot starter):

<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-kubernetes-starter</artifactId>
</dependency>

Endpoint URI format:

kubernetes-nodes:masterUrl

Producer example, listing all nodes:

from("direct:list")
    .toF("kubernetes-nodes:///?kubernetesClient=#kubernetesClient&operation=listNodes")
    .to("mock:result");

Producer example, listing nodes by labels:

from("direct:listByLabels")
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            Map<String, String> labels = new HashMap<>();
            labels.put("key1", "value1");
            labels.put("key2", "value2");
            exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NODES_LABELS, labels);
        }
    })
    .toF("kubernetes-nodes:///?kubernetesClient=#kubernetesClient&operation=listNodesByLabels")
    .to("mock:result");

Consumer example, watching node events:

fromF("kubernetes-nodes://%s?oauthToken=%s&resourceName=test", host, authToken)
    .process(new KubernetesProcessor())
    .to("mock:result");

public class KubernetesProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        Message in = exchange.getIn();
        Node node = exchange.getIn().getBody(Node.class);
        log.info("Got event with node name: " + node.getMetadata().getName()
                + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION));
    }
}
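The routes above look up a client bean through kubernetesClient=#kubernetesClient. A minimal sketch of how such a bean might be registered in a Spring Boot application follows; it assumes the Fabric8 kubernetes-client API (ConfigBuilder, KubernetesClientBuilder) that camel-kubernetes builds on, and the bean name kubernetesClient is only a convention matching the endpoint URIs above:

import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KubernetesClientConfiguration {

    // Registers a bean named "kubernetesClient" so endpoint URIs that reference
    // #kubernetesClient (or component autowiring) can resolve it from the registry.
    @Bean(name = "kubernetesClient")
    public KubernetesClient kubernetesClient() {
        // Default auto-configuration: in-cluster service account or local kubeconfig.
        // Call withMasterUrl(...) on the ConfigBuilder to target a specific API server.
        Config config = new ConfigBuilder().build();
        return new KubernetesClientBuilder().withConfig(config).build();
    }
}

With such a bean in place, the camel.component.kubernetes-*.kubernetes-client options listed above can also be satisfied through autowiring instead of an explicit URI parameter.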
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-nodes-component-starter
Chapter 5. ironic The following chapter contains information about the configuration options in the ironic service. 5.1. ironic.conf This section contains options for the /etc/ironic/ironic.conf file. 5.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/ironic/ironic.conf file. . Configuration option = Default value Type Description auth_strategy = keystone string value Authentication strategy used by ironic-api. "noauth" should not be used in a production environment because all authentication will be disabled. backdoor_port = None string value Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file. backdoor_socket = None string value Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. bindir = USDpybasedir/bin string value Directory where ironic binaries are installed. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. debug_tracebacks_in_api = False boolean value Return server tracebacks in the API response for any error responses. WARNING: this is insecure and should not be used in a production environment. default_bios_interface = None string value Default bios interface to be used for nodes that do not have bios_interface field set. A complete list of bios interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.bios" entrypoint. default_boot_interface = None string value Default boot interface to be used for nodes that do not have boot_interface field set. A complete list of boot interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.boot" entrypoint. default_console_interface = None string value Default console interface to be used for nodes that do not have console_interface field set. A complete list of console interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.console" entrypoint. default_deploy_interface = None string value Default deploy interface to be used for nodes that do not have deploy_interface field set. A complete list of deploy interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.deploy" entrypoint. default_inspect_interface = None string value Default inspect interface to be used for nodes that do not have inspect_interface field set. A complete list of inspect interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.inspect" entrypoint. 
default_log_levels = ['amqp=WARNING', 'amqplib=WARNING', 'qpid.messaging=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'sqlalchemy=WARNING', 'stevedore=INFO', 'eventlet.wsgi.server=INFO', 'iso8601=WARNING', 'requests=WARNING', 'glanceclient=WARNING', 'urllib3.connectionpool=WARNING', 'keystonemiddleware.auth_token=INFO', 'keystoneauth.session=INFO', 'openstack=WARNING', 'oslo_policy=WARNING', 'oslo_concurrency.lockutils=WARNING'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_management_interface = None string value Default management interface to be used for nodes that do not have management_interface field set. A complete list of management interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.management" entrypoint. default_network_interface = None string value Default network interface to be used for nodes that do not have network_interface field set. A complete list of network interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.network" entrypoint. default_portgroup_mode = active-backup string value Default mode for portgroups. Allowed values can be found in the linux kernel documentation on bonding: https://www.kernel.org/doc/Documentation/networking/bonding.txt . default_power_interface = None string value Default power interface to be used for nodes that do not have power_interface field set. A complete list of power interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.power" entrypoint. default_raid_interface = None string value Default raid interface to be used for nodes that do not have raid_interface field set. A complete list of raid interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.raid" entrypoint. default_rescue_interface = None string value Default rescue interface to be used for nodes that do not have rescue_interface field set. A complete list of rescue interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.rescue" entrypoint. default_resource_class = None string value Resource class to use for new nodes when no resource class is provided in the creation request. default_storage_interface = noop string value Default storage interface to be used for nodes that do not have storage_interface field set. A complete list of storage interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.storage" entrypoint. default_vendor_interface = None string value Default vendor interface to be used for nodes that do not have vendor_interface field set. A complete list of vendor interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.vendor" entrypoint. enabled_bios_interfaces = ['no-bios', 'redfish'] list value Specify the list of bios interfaces to load during service initialization. Missing bios interfaces, or bios interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one bios interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented bios interfaces. A complete list of bios interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.bios" entrypoint. 
When setting this value, please make sure that every enabled hardware type will have the same set of enabled bios interfaces on every ironic-conductor service. enabled_boot_interfaces = ['ipxe', 'pxe', 'redfish-virtual-media'] list value Specify the list of boot interfaces to load during service initialization. Missing boot interfaces, or boot interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one boot interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented boot interfaces. A complete list of boot interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.boot" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled boot interfaces on every ironic-conductor service. enabled_console_interfaces = ['no-console'] list value Specify the list of console interfaces to load during service initialization. Missing console interfaces, or console interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one console interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented console interfaces. A complete list of console interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.console" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled console interfaces on every ironic-conductor service. enabled_deploy_interfaces = ['direct', 'ramdisk'] list value Specify the list of deploy interfaces to load during service initialization. Missing deploy interfaces, or deploy interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one deploy interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented deploy interfaces. A complete list of deploy interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.deploy" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled deploy interfaces on every ironic-conductor service. enabled_hardware_types = ['ipmi', 'redfish'] list value Specify the list of hardware types to load during service initialization. Missing hardware types, or hardware types which fail to initialize, will prevent the conductor service from starting. This option defaults to a recommended set of production-oriented hardware types. A complete list of hardware types present on your system may be found by enumerating the "ironic.hardware.types" entrypoint. enabled_inspect_interfaces = ['no-inspect', 'redfish'] list value Specify the list of inspect interfaces to load during service initialization. Missing inspect interfaces, or inspect interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one inspect interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. 
The default value is a recommended set of production-oriented inspect interfaces. A complete list of inspect interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.inspect" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled inspect interfaces on every ironic-conductor service. enabled_management_interfaces = None list value Specify the list of management interfaces to load during service initialization. Missing management interfaces, or management interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one management interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented management interfaces. A complete list of management interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.management" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled management interfaces on every ironic-conductor service. enabled_network_interfaces = ['flat', 'noop'] list value Specify the list of network interfaces to load during service initialization. Missing network interfaces, or network interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one network interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented network interfaces. A complete list of network interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.network" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled network interfaces on every ironic-conductor service. enabled_power_interfaces = None list value Specify the list of power interfaces to load during service initialization. Missing power interfaces, or power interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one power interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented power interfaces. A complete list of power interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.power" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled power interfaces on every ironic-conductor service. enabled_raid_interfaces = ['agent', 'no-raid', 'redfish'] list value Specify the list of raid interfaces to load during service initialization. Missing raid interfaces, or raid interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one raid interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented raid interfaces. A complete list of raid interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.raid" entrypoint. 
When setting this value, please make sure that every enabled hardware type will have the same set of enabled raid interfaces on every ironic-conductor service. enabled_rescue_interfaces = ['no-rescue'] list value Specify the list of rescue interfaces to load during service initialization. Missing rescue interfaces, or rescue interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one rescue interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented rescue interfaces. A complete list of rescue interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.rescue" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled rescue interfaces on every ironic-conductor service. enabled_storage_interfaces = ['cinder', 'noop'] list value Specify the list of storage interfaces to load during service initialization. Missing storage interfaces, or storage interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one storage interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented storage interfaces. A complete list of storage interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.storage" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled storage interfaces on every ironic-conductor service. enabled_vendor_interfaces = ['ipmitool', 'redfish', 'no-vendor'] list value Specify the list of vendor interfaces to load during service initialization. Missing vendor interfaces, or vendor interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one vendor interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented vendor interfaces. A complete list of vendor interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.vendor" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled vendor interfaces on every ironic-conductor service. esp_image = None string value Path to EFI System Partition image file. This file is recommended for creating UEFI bootable ISO images efficiently. ESP image should contain a FAT12/16/32-formatted file system holding EFI boot loaders (e.g. GRUB2) for each hardware architecture ironic needs to boot. This option is only used when neither ESP nor ISO deploy image is configured to the node being deployed in which case ironic will attempt to fetch ESP image from the configured location or extract ESP image from UEFI-bootable deploy ISO image. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. force_raw_images = True boolean value If True, convert backing images to "raw" disk image format. 
graceful_shutdown_timeout = 60 integer value Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait. grub_config_path = EFI/BOOT/grub.cfg string value GRUB2 configuration file location on the UEFI ISO images produced by ironic. The default value is usually incorrect and should not be relied on. If you use a GRUB2 image from a certain distribution, use a distribution-specific path here, e.g. EFI/ubuntu/grub.cfg grub_config_template = USDpybasedir/common/grub_conf.template string value Template file for grub configuration file. hash_partition_exponent = 5 integer value Exponent to determine number of hash partitions to use when distributing load across conductors. Larger values will result in more even distribution of load and less load when rebalancing the ring, but more memory usage. Number of partitions per conductor is (2^hash_partition_exponent). This determines the granularity of rebalancing: given 10 hosts, and an exponent of the 2, there are 40 partitions in the ring.A few thousand partitions should make rebalancing smooth in most cases. The default is suitable for up to a few hundred conductors. Configuring for too many partitions has a negative impact on CPU usage. hash_ring_algorithm = md5 string value Hash function to use when building the hash ring. If running on a FIPS system, do not use md5. WARNING: all ironic services in a cluster MUST use the same algorithm at all times. Changing the algorithm requires an offline update. hash_ring_reset_interval = 15 integer value Time (in seconds) after which the hash ring is considered outdated and is refreshed on the access. host = <based on operating system> string value Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. However, the node name must be valid within an AMQP key, and if using ZeroMQ (will be removed in the Stein release), a valid hostname, FQDN, or IP address. http_basic_auth_user_file = /etc/ironic/htpasswd string value Path to Apache format user authentication file used when auth_strategy=http_basic image_download_concurrency = 20 integer value How many image downloads and raw format conversions to run in parallel. Only affects image caches. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. isolinux_bin = /usr/lib/syslinux/isolinux.bin string value Path to isolinux binary file. isolinux_config_template = USDpybasedir/common/isolinux_config.template string value Template file for isolinux configuration file. ldlinux_c32 = None string value Path to ldlinux.c32 file. This file is required for syslinux 5.0 or later. If not specified, the file is looked for in "/usr/lib/syslinux/modules/bios/ldlinux.c32" and "/usr/share/syslinux/ldlinux.c32". log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . 
This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_in_db_max_size = 4096 integer value Max number of characters of any node last_error/maintenance_reason pushed to database. log_options = True boolean value Enables or disables logging values of all registered options when starting a service (at DEBUG level). log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". minimum_memory_wait_retries = 6 integer value Number of retries to hold onto the worker before failing or returning the thread to the pool if the conductor can automatically retry. minimum_memory_wait_time = 15 integer value Seconds to wait between retries for free memory before launching the process. This, combined with memory_wait_retries allows the conductor to determine how long we should attempt to directly retry. minimum_memory_warning_only = False boolean value Setting to govern if Ironic should only warn instead of attempting to hold back the request in order to prevent the exhaustion of system memory. minimum_required_memory = 1024 integer value Minimum memory in MiB for the system to have available prior to starting a memory intensive process on the conductor. my_ip = <based on operating system> string value IPv4 address of this host. If unset, will determine the IP programmatically. If unable to do so, will use "127.0.0.1". 
NOTE: This field does accept an IPv6 address as an override for templates and URLs; however, it is recommended that [DEFAULT]my_ipv6 is used along with DNS names for service URLs for dual-stack environments. my_ipv6 = None string value IP address of this host using IPv6. This value must be supplied via the configuration and cannot be adequately programmatically determined like the [DEFAULT]my_ip parameter for IPv4. notification_level = None string value Specifies the minimum level for which to send notifications. If not set, no notifications will be sent. The default is for this option to be unset. parallel_image_downloads = True boolean value Run image downloads and raw format conversions in parallel. pecan_debug = False boolean value Enable pecan debug mode. WARNING: this is insecure and should not be used in a production environment. pin_release_version = None string value Used for rolling upgrades. Setting this option downgrades (or pins) the Bare Metal API, the internal ironic RPC communication, and the database objects to their respective versions, so they are compatible with older services. When doing a rolling upgrade from version N to version N+1, set (to pin) this to N. To unpin (default), leave it unset and the latest versions will be used. publish_errors = False boolean value Enables or disables publication of error events. pybasedir = /usr/lib/python3.9/site-packages/ironic string value Directory where the ironic Python module is installed. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with a level greater than or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, in seconds, of log rate limiting. raw_image_growth_factor = 2.0 floating point value The scale factor used for estimating the size of a raw image converted from compact image formats such as QCOW2. Default is 2.0, must be greater than 1.0. rbac_service_project_name = service string value The project name utilized for Role Based Access Control checks for the reserved service project. This project is utilized for services to have accounts for cross-service communication. Often these accounts require higher levels of access, and effectively this permits accounts from the service to not be restricted to project scoping of responses, i.e. the service project user with a service role will be able to see nodes across all projects, similar to system-scoped access. If not set to a value, all service role access will be filtered, matching an owner or lessee, if applicable. If an operator wishes to make behavior visible for all service role users across all projects, then a custom policy must be used to override the default "service_role" rule. It should be noted that the value of "service" is a default convention for OpenStack deployments, but the requisite access and details around end configuration are largely up to an operator if they are doing an OpenStack deployment manually. rbac_service_role_elevated_access = False boolean value Enable elevated access for users with the service role belonging to the rbac_service_project_name project when using the default policy. The default setting of disabled causes all service role requests to be scoped to the project the service account belongs to.
rootwrap_config = /etc/ironic/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_ping_enabled = False boolean value Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. rpc_transport = oslo string value Which RPC transport implementation to use between conductor and API services run_external_periodic_tasks = True boolean value Some periodic tasks can be run in a separate process. Should we run them here? state_path = USDpybasedir string value Top-level directory for maintaining ironic's state. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. tempdir = /tmp string value Temporary working directory, default is Python temp dir. transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. versioned_notifications_topics = ['ironic_versioned_notifications'] list value Specifies the topics for the versioned notifications issued by Ironic. The default value is fine for most deployments and rarely needs to be changed. However, if you have a third-party service that consumes versioned notifications, it might be worth getting a topic for that service. Ironic will send a message containing a versioned notification payload to each topic queue in this list. The list of versioned notifications is visible in https://docs.openstack.org/ironic/latest/admin/notifications.html watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. webserver_connection_timeout = 60 integer value Connection timeout when accessing remote web servers with images. webserver_verify_ca = True string value CA certificates to be used for certificate verification. 
This can be either a Boolean value or a path to a CA_BUNDLE file. If set to True, the certificates present in the standard path are used to verify the host certificates. If set to False, the conductor will ignore verifying the SSL certificate presented by the host. If it's a path, the conductor uses the specified certificate for SSL verification. If the path does not exist, the behavior is the same as when this value is set to True, i.e. the certificates present in the standard path are used for SSL verification. Defaults to True. 5.1.2. agent The following table outlines the options available under the [agent] group in the /etc/ironic/ironic.conf file. Table 5.1. agent Configuration option = Default value Type Description agent_api_version = v1 string value API version to use for communicating with the ramdisk agent. api_ca_file = None string value Path to the TLS CA that is used to start the bare metal API. In some boot methods this file can be passed to the ramdisk. certificates_path = /var/lib/ironic/certificates string value Path to store auto-generated TLS certificates used to validate connections to the ramdisk. command_timeout = 60 integer value Timeout (in seconds) for IPA commands. command_wait_attempts = 100 integer value Number of attempts to check for asynchronous commands completion before timing out. command_wait_interval = 6 integer value Number of seconds to wait between checks for asynchronous commands completion. deploy_logs_collect = on_failure string value Whether Ironic should collect the deployment logs on deployment failure (on_failure), always or never. deploy_logs_local_path = /var/log/ironic/deploy string value The path to the directory where the logs should be stored, used when the deploy_logs_storage_backend is configured to "local". deploy_logs_storage_backend = local string value The name of the storage backend where the logs will be stored. deploy_logs_swift_container = ironic_deploy_logs_container string value The name of the Swift container to store the logs, used when the deploy_logs_storage_backend is configured to "swift". deploy_logs_swift_days_to_expire = 30 integer value Number of days before a log object is marked as expired in Swift. If None, the logs will be kept forever or until manually deleted. Used when the deploy_logs_storage_backend is configured to "swift". image_download_source = http string value Specifies whether the direct deploy interface should try to use the image source directly or if ironic should cache the image on the conductor and serve it from ironic's own http server. manage_agent_boot = True boolean value Whether Ironic will manage booting of the agent ramdisk. If set to False, you will need to configure your mechanism to allow booting the agent ramdisk. max_command_attempts = 3 integer value This is the maximum number of attempts that will be done for IPA commands that fail due to network problems. memory_consumed_by_agent = 0 integer value The memory size in MiB consumed by the agent when it is booted on a bare metal node. This is used for checking if the image can be downloaded and deployed on the bare metal node after booting the agent ramdisk. This may be set according to the memory consumed by the agent ramdisk image. neutron_agent_max_attempts = 100 integer value Max number of attempts to validate a Neutron agent status before raising a network error for a dead agent. neutron_agent_poll_interval = 2 integer value The number of seconds Neutron agent will wait between polling for device changes.
This value should be the same as CONF.AGENT.polling_interval in the Neutron configuration. neutron_agent_status_retry_interval = 10 integer value Wait time in seconds between attempts for validating Neutron agent status. post_deploy_get_power_state_retries = 6 integer value Number of times to retry getting power state to check if the bare metal node has been powered off after a soft power off. post_deploy_get_power_state_retry_interval = 5 integer value Amount of time (in seconds) to wait between polling power state after triggering a soft power off. require_tls = False boolean value If set to True, callback URLs without https:// will be rejected by the conductor. stream_raw_images = True boolean value Whether the agent ramdisk should stream raw images directly onto the disk or not. By streaming raw images directly onto the disk the agent ramdisk will not spend time copying the image to a tmpfs partition (therefore consuming less memory) prior to writing it to the disk. Unless the disk where the image will be copied to is really slow, this option should be set to True. Defaults to True. verify_ca = True string value Path to the TLS CA to validate connection to the ramdisk. Set to True to use the system default CA storage. Set to False to disable validation. Ignored when automatic TLS setup is used. 5.1.3. anaconda The following table outlines the options available under the [anaconda] group in the /etc/ironic/ironic.conf file. Table 5.2. anaconda Configuration option = Default value Type Description default_ks_template = USDpybasedir/drivers/modules/ks.cfg.template string value Kickstart template to use when no kickstart template is specified in the instance_info or the glance OS image. insecure_heartbeat = False boolean value Option to allow the kickstart configuration to be informed whether SSL/TLS certificate verification should be enforced. This option exists largely to facilitate easy testing and use of the anaconda deployment interface. When this option is set, heartbeat operations, depending on the contents of the utilized kickstart template, may not enforce TLS certificate verification. 5.1.4. ansible The following table outlines the options available under the [ansible] group in the /etc/ironic/ironic.conf file. Table 5.3. ansible Configuration option = Default value Type Description ansible_extra_args = None string value Extra arguments to pass on every invocation of Ansible. ansible_playbook_script = ansible-playbook string value Path to the "ansible-playbook" script. By default, the USDPATH configured for the user running the ironic-conductor process is searched. Provide the full path when ansible-playbook is not in USDPATH or is installed in a non-default location. config_file_path = USDpybasedir/drivers/modules/ansible/playbooks/ansible.cfg string value Path to the Ansible configuration file. If set to empty, the system default will be used. default_clean_playbook = clean.yaml string value Path (relative to USDplaybooks_path or absolute) to the default playbook used for node cleaning. It may be overridden by the per-node ansible_clean_playbook option in the node's driver_info field. default_clean_steps_config = clean_steps.yaml string value Path (relative to USDplaybooks_path or absolute) to the default auxiliary cleaning steps file used during the node cleaning. It may be overridden by the per-node ansible_clean_steps_config option in the node's driver_info field. default_deploy_playbook = deploy.yaml string value Path (relative to USDplaybooks_path or absolute) to the default playbook used for deployment.
It may be overridden by per-node ansible_deploy_playbook option in node's driver_info field. default_key_file = None string value Absolute path to the private SSH key file to use by Ansible by default when connecting to the ramdisk over SSH. Default is to use default SSH keys configured for the user running the ironic-conductor service. Private keys with password must be pre-loaded into ssh-agent . It may be overridden by per-node ansible_key_file option in node's driver_info field. default_python_interpreter = None string value Absolute path to the python interpreter on the managed machines. It may be overridden by per-node ansible_python_interpreter option in node's driver_info field. By default, ansible uses /usr/bin/python default_shutdown_playbook = shutdown.yaml string value Path (relative to USDplaybooks_path or absolute) to the default playbook used for graceful in-band shutdown of the node. It may be overridden by per-node ansible_shutdown_playbook option in node's driver_info field. default_username = ansible string value Name of the user to use for Ansible when connecting to the ramdisk over SSH. It may be overridden by per-node ansible_username option in node's driver_info field. extra_memory = 10 integer value Extra amount of memory in MiB expected to be consumed by Ansible-related processes on the node. Affects decision whether image will fit into RAM. image_store_cafile = None string value Specific CA bundle to use for validating SSL connections to the image store. If not specified, CA available in the ramdisk will be used. Is not used by default playbooks included with the driver. Suitable for environments that use self-signed certificates. image_store_certfile = None string value Client cert to use for SSL connections to image store. Is not used by default playbooks included with the driver. image_store_insecure = False boolean value Skip verifying SSL connections to the image store when downloading the image. Setting it to "True" is only recommended for testing environments that use self-signed certificates. image_store_keyfile = None string value Client key to use for SSL connections to image store. Is not used by default playbooks included with the driver. playbooks_path = USDpybasedir/drivers/modules/ansible/playbooks string value Path to directory with playbooks, roles and local inventory. post_deploy_get_power_state_retries = 6 integer value Number of times to retry getting power state to check if bare metal node has been powered off after a soft power off. Value of 0 means do not retry on failure. post_deploy_get_power_state_retry_interval = 5 integer value Amount of time (in seconds) to wait between polling power state after trigger soft poweroff. verbosity = None integer value Set ansible verbosity level requested when invoking "ansible-playbook" command. 4 includes detailed SSH session logging. Default is 4 when global debug is enabled and 0 otherwise. 5.1.5. api The following table outlines the options available under the [api] group in the /etc/ironic/ironic.conf file. Table 5.4. api Configuration option = Default value Type Description api_workers = None integer value Number of workers for OpenStack Ironic API service. The default is equal to the number of CPUs available, but not more than 4. One worker is used if the CPU number cannot be detected. enable_ssl_api = False boolean value Enable the integrated stand-alone API to service requests via HTTPS instead of HTTP. 
If there is a front-end service performing HTTPS offloading from the service, this option should be False; note that you will want to enable proxy headers parsing with the [oslo_middleware]enable_proxy_headers_parsing option or configure the [api]public_endpoint option to set URLs in responses to the SSL-terminated one. host_ip = 0.0.0.0 host address value The IP address or hostname on which ironic-api listens. max_limit = 1000 integer value The maximum number of items returned in a single response from a collection resource. network_data_schema = USDpybasedir/api/controllers/v1/network-data-schema.json string value Schema for network data used by this deployment. port = 6385 port value The TCP port on which ironic-api listens. project_admin_can_manage_own_nodes = True boolean value Whether a project-scoped administrative user is permitted to create/delete bare metal nodes in their project. public_endpoint = None string value Public URL to use when building the links to the API resources (for example, "https://ironic.rocks:6384"). If None, the links will be built using the request's host URL. If the API is operating behind a proxy, you will want to change this to represent the proxy's URL. Defaults to None. Ignored when proxy headers parsing is enabled via the [oslo_middleware]enable_proxy_headers_parsing option. ramdisk_heartbeat_timeout = 300 integer value Maximum interval (in seconds) for agent heartbeats. restrict_lookup = True boolean value Whether to restrict the lookup API to only nodes in certain states. unix_socket = None string value Unix socket to listen on. Disables host_ip and port. unix_socket_mode = None integer value File mode (an octal number) of the unix socket to listen on. Ignored if unix_socket is not set. 5.1.6. audit The following table outlines the options available under the [audit] group in the /etc/ironic/ironic.conf file. Table 5.5. audit Configuration option = Default value Type Description audit_map_file = /etc/ironic/api_audit_map.conf string value Path to the audit map file for the ironic-api service. Used only when API audit is enabled. enabled = False boolean value Enable auditing of API requests (for the ironic-api service). `ignore_req_list = ` string value Comma-separated list of Ironic REST API HTTP methods to be ignored during audit logging. For example: auditing will not be done on any GET or POST requests if this is set to "GET,POST". It is used only when API audit is enabled. 5.1.7. audit_middleware_notifications The following table outlines the options available under the [audit_middleware_notifications] group in the /etc/ironic/ironic.conf file. Table 5.6. audit_middleware_notifications Configuration option = Default value Type Description driver = None string value The driver to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop. If not specified, the value from the oslo_messaging_notifications conf section is used. topics = None list value List of AMQP topics used for OpenStack notifications. If not specified, the value from the oslo_messaging_notifications conf section is used. transport_url = None string value A URL representing the messaging driver to use for notifications. If not specified, we fall back to the same configuration used for RPC. use_oslo_messaging = True boolean value Indicate whether to use oslo_messaging as the notifier. If set to False, the local logger will be used as the notifier. If set to True, the oslo_messaging package must also be present. Otherwise, the local logger will be used instead. 5.1.8.
cinder The following table outlines the options available under the [cinder] group in the /etc/ironic/ironic.conf file. Table 5.7. cinder Configuration option = Default value Type Description action_retries = 3 integer value Number of retries in the case of a failed action (currently only used when detaching volumes). action_retry_interval = 5 integer value Retry interval in seconds in the case of a failed action (only specific actions are retried). auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. retries = 3 integer value Client retries in the case of a failed request connection. service-name = None string value The default service_name for endpoint URL discovery. service-type = volumev3 string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. 
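As a sketch only, a [cinder] section that authenticates with Keystone using the password plugin might look like the following. The endpoint, credentials, and domain names are placeholders for values from your own deployment, and the option names are written in the underscore form commonly used in ironic.conf:

[cinder]
auth_type = password
# Placeholder Identity endpoint and service credentials.
auth_url = https://keystone.example.com:5000/v3
username = ironic
password = IRONIC_SERVICE_PASSWORD
project_name = service
user_domain_name = Default
project_domain_name = Default
region_name = RegionOne
# Retry failed volume actions (such as detach) a few times before giving up.
action_retries = 3
action_retry_interval = 5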
system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.9. conductor The following table outlines the options available under the [conductor] group in the /etc/ironic/ironic.conf file. Table 5.8. conductor Configuration option = Default value Type Description allow_deleting_available_nodes = True boolean value Allow deleting nodes which are in state available . Defaults to True. allow_provisioning_in_maintenance = True boolean value Whether to allow nodes to enter or undergo deploy or cleaning when in maintenance mode. If this option is set to False, and a node enters maintenance during deploy or cleaning, the process will be aborted after the heartbeat. Automated cleaning or making a node available will also fail. If True (the default), the process will begin and will pause after the node starts heartbeating. Moving it from maintenance will make the process continue. automated_clean = True boolean value Enables or disables automated cleaning. Automated cleaning is a configurable set of steps, such as erasing disk drives, that are performed on the node to ensure it is in a baseline state and ready to be deployed to. This is done after instance deletion as well as during the transition from a "manageable" to "available" state. When enabled, the particular steps performed to clean a node depend on which driver that node is managed by; see the individual driver's documentation for details. NOTE: The introduction of the cleaning operation causes instance deletion to take significantly longer. In an environment where all tenants are trusted (eg, because there is only one tenant), this option could be safely disabled. automatic_lessee = False boolean value If the conductor should record the Project ID indicated by Keystone for a requested deployment. Allows rights to be granted to directly access the deployed node as a lessee within the RBAC security model. The conductor does not record this value otherwise, and this information is not backfilled for prior instances which have been deployed. bootloader = None string value Glance ID, http:// or file:// URL of the EFI system partition image containing EFI boot loader. This image will be used by ironic when building UEFI-bootable ISO out of kernel and ramdisk. Required for UEFI boot from partition images. cache_clean_up_interval = 3600 integer value Interval between cleaning up image caches, in seconds. Set to 0 to disable periodic clean-up. check_allocations_interval = 60 integer value Interval between checks of orphaned allocations, in seconds. Set to 0 to disable checks. check_provision_state_interval = 60 integer value Interval between checks of provision timeouts, in seconds. Set to 0 to disable checks. check_rescue_state_interval = 60 integer value Interval (seconds) between checks of rescue timeouts. 
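To illustrate how a few of the [conductor] options above fit together, the sketch below keeps automated cleaning enabled but relaxes the periodic checks; the interval values are examples only and should be tuned to your environment:

[conductor]
# Keep automated cleaning between deployments (the default).
automated_clean = True
# Check provision and rescue timeouts every two minutes instead of every minute.
check_provision_state_interval = 120
check_rescue_state_interval = 120
# Check for orphaned allocations every five minutes.
check_allocations_interval = 300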
clean_callback_timeout = 1800 integer value Timeout (seconds) to wait for a callback from the ramdisk doing the cleaning. If the timeout is reached the node will be put in the "clean failed" provision state. Set to 0 to disable timeout. clean_step_priority_override = {} dict value Priority to run automated clean steps for both in-band and out of band clean steps, provided in interface.step_name:priority format, e.g. deploy.erase_devices_metadata:123. The option can be specified multiple times to define priorities for multiple steps. If set to 0, this specific step will not run during cleaning. If unset for an inband clean step, will use the priority set in the ramdisk. conductor_always_validates_images = False boolean value Security Option to enable the conductor to always inspect the image content of any requested deploy, even if the deployment would have normally bypassed the conductor's cache. When this is set to False, the Ironic-Python-Agent is responsible for any necessary image checks. Setting this to True will result in a higher utilization of resources (disk space, network traffic) as the conductor will evaluate all images. This option is not mutable, and requires a service restart to change. This option requires [conductor]disable_deep_image_inspection to be set to False. `conductor_group = ` string value Name of the conductor group to join. Can be up to 255 characters and is case insensitive. This conductor will only manage nodes with a matching "conductor_group" field set on the node. configdrive_swift_container = ironic_configdrive_container string value Name of the Swift container to store config drive data. Used when configdrive_use_object_store is True. configdrive_swift_temp_url_duration = None integer value The timeout (in seconds) after which a configdrive temporary URL becomes invalid. Defaults to deploy_callback_timeout if it is set, otherwise to 1800 seconds. Used when configdrive_use_object_store is True. deploy_callback_timeout = 1800 integer value Timeout (seconds) to wait for a callback from a deploy ramdisk. Set to 0 to disable timeout. deploy_kernel = None string value Glance ID, http:// or file:// URL of the kernel of the default deploy image. deploy_ramdisk = None string value Glance ID, http:// or file:// URL of the initramfs of the default deploy image. disable_deep_image_inspection = False boolean value Security Option to permit an operator to disable file content inspections. Under normal conditions, the conductor will inspect requested image contents which are transferred through the conductor. Disabling this option is not advisable and opens the risk of unsafe images being processed which may allow an attacker to leverage unsafe features in various disk image formats to perform a variety of unsafe and potentially compromising actions. This option is not mutable, and requires a service restart to change. disable_file_checksum = False boolean value Deprecated Security option: In the default case, image files have their checksums verified before undergoing additional conductor side actions such as image conversion. Enabling this option opens the risk of files being replaced at the source without the user's knowledge. disable_support_for_checksum_files = False boolean value Security option: By default Ironic will attempt to retrieve a remote checksum file via HTTP(S) URL in order to validate an image download. This is functionality aligning with ironic-python-agent support for standalone users. 
Disabling this functionality by setting this option to True will create a more secure environment, however it may break users in an unexpected fashion. enable_mdns = False boolean value Whether to enable publishing the baremetal API endpoint via multicast DNS. force_power_state_during_sync = True boolean value During sync_power_state, should the hardware power state be set to the state recorded in the database (True) or should the database be updated based on the hardware state (False). heartbeat_interval = 10 integer value Seconds between conductor heart beats. heartbeat_timeout = 60 integer value Maximum time (in seconds) since the last check-in of a conductor. A conductor is considered inactive when this time has been exceeded. inspect_wait_timeout = 1800 integer value Timeout (seconds) for waiting for node inspection. 0 - unlimited. max_concurrent_clean = 50 integer value The maximum number of concurrent nodes in cleaning which are permitted in this Ironic system. If this limit is reached, new requests will be rejected until the number of nodes in cleaning is lower than this maximum. As this is a security mechanism requests are not queued, and this setting is a global setting applying to all requests this conductor receives, regardless of access rights. The concurrent clean limit cannot be disabled. max_concurrent_deploy = 250 integer value The maximum number of concurrent nodes in deployment which are permitted in this Ironic system. If this limit is reached, new requests will be rejected until the number of deployments in progress is lower than this maximum. As this is a security mechanism requests are not queued, and this setting is a global setting applying to all requests this conductor receives, regardless of access rights. The concurrent deployment limit cannot be disabled. node_history = True boolean value Boolean value, default True, if node event history is to be recorded. Errors and other noteworthy events in relation to a node are journaled to a database table which incurs some additional load. A periodic task does periodically remove entries from the database. Please note, if this is disabled, the conductor will continue to purge entries as long as [conductor]node_history_cleanup_batch_count is not 0. node_history_cleanup_batch_count = 1000 integer value The target number of node history records to purge from the database when performing clean-up. Deletes are performed by node, and a node with excess records for a node will still be deleted. Defaults to 1000. Operators who find node history building up may wish to lower this threshold and decrease the time between cleanup operations using the node_history_cleanup_interval setting. node_history_cleanup_interval = 86400 integer value Interval in seconds at which node history entries can be cleaned up in the database. Setting to 0 disables the periodic task. Defaults to once a day, or 86400 seconds. node_history_max_entries = 300 integer value Maximum number of history entries which will be stored in the database per node. Default is 300. This setting excludes the minimum number of days retained using the [conductor]node_history_minimum_days setting. node_history_minimum_days = 0 integer value The minimum number of days to explicitly keep on hand in the database history entries for nodes. This is exclusive from the [conductor]node_history_max_entries setting as users of this setting are anticipated to need to retain history by policy. node_locked_retry_attempts = 3 integer value Number of attempts to grab a node lock. 
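A minimal sketch of tuning the node history options described above for a deployment that wants a smaller history footprint; the figures are illustrative only:

[conductor]
node_history = True
# Keep at most 100 history entries per node.
node_history_max_entries = 100
# Never purge entries newer than 7 days.
node_history_minimum_days = 7
# Run the clean-up task twice a day and delete up to 500 records per run.
node_history_cleanup_interval = 43200
node_history_cleanup_batch_count = 500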
node_locked_retry_interval = 1 integer value Seconds to sleep between node lock attempts. periodic_max_workers = 8 integer value Maximum number of worker threads that can be started simultaneously by a periodic task. Should be less than RPC thread pool size. permitted_image_formats = ['raw', 'qcow2', 'iso'] list value The supported list of image formats which are permitted for deployment with Ironic. If an image format outside of this list is detected, the image validation logic will fail the deployment process. power_failure_recovery_interval = 300 integer value Interval (in seconds) between checking the power state for nodes previously put into maintenance mode due to power synchronization failure. A node is automatically moved out of maintenance mode once its power state is retrieved successfully. Set to 0 to disable this check. power_state_change_timeout = 60 integer value Number of seconds to wait for power operations to complete, i.e., so that a baremetal node is in the desired power state. If timed out, the power operation is considered a failure. power_state_sync_max_retries = 3 integer value During sync_power_state failures, limit the number of times Ironic should try syncing the hardware node power state with the node power state in DB require_rescue_password_hashed = False boolean value Option to cause the conductor to not fallback to an un-hashed version of the rescue password, permitting rescue with older ironic-python-agent ramdisks. rescue_callback_timeout = 1800 integer value Timeout (seconds) to wait for a callback from the rescue ramdisk. If the timeout is reached the node will be put in the "rescue failed" provision state. Set to 0 to disable timeout. rescue_kernel = None string value Glance ID, http:// or file:// URL of the kernel of the default rescue image. rescue_password_hash_algorithm = sha256 string value Password hash algorithm to be used for the rescue password. rescue_ramdisk = None string value Glance ID, http:// or file:// URL of the initramfs of the default rescue image. soft_power_off_timeout = 600 integer value Timeout (in seconds) of soft reboot and soft power off operation. This value always has to be positive. sync_local_state_interval = 180 integer value When conductors join or leave the cluster, existing conductors may need to update any persistent local state as nodes are moved around the cluster. This option controls how often, in seconds, each conductor will check for nodes that it should "take over". Set it to 0 (or a negative value) to disable the check entirely. sync_power_state_interval = 60 integer value Interval between syncing the node power state to the database, in seconds. Set to 0 to disable syncing. sync_power_state_workers = 8 integer value The maximum number of worker threads that can be started simultaneously to sync nodes power states from the periodic task. verify_step_priority_override = {} dict value Priority to run automated verify steps provided in interface.step_name:priority format,e.g. management.clear_job_queue:123. The option can be specified multiple times to define priorities for multiple steps. If set to 0, this specific step will not run during verification. workers_pool_size = 100 integer value The size of the workers greenthread pool. Note that 2 threads will be reserved by the conductor itself for handling heart beats and periodic tasks. On top of that, sync_power_state_workers will take up to 7 green threads with the default value of 8. 5.1.10. 
console The following table outlines the options available under the [console] group in the /etc/ironic/ironic.conf file. Table 5.9. console Configuration option = Default value Type Description kill_timeout = 1 integer value Time (in seconds) to wait for the console subprocess to exit before sending the SIGKILL signal. port_range = None string value A range of ports available to be used for the console proxy service running on the host of the ironic conductor, in the form of <start>:<stop>. This option is used by both the Shellinabox and Socat consoles. socat_address = USDmy_ip IP address value IP address of the Socat service running on the host of the ironic conductor. Used only by the Socat console. subprocess_checking_interval = 1 integer value Time interval (in seconds) for checking the status of the console subprocess. subprocess_timeout = 10 integer value Time (in seconds) to wait for the console subprocess to start. terminal = shellinaboxd string value Path to the serial console terminal program. Used only by the Shell In A Box console. terminal_cert_dir = None string value Directory containing the terminal SSL cert (PEM) for serial console access. Used only by the Shell In A Box console. terminal_pid_dir = None string value Directory for holding terminal pid files. If not specified, the temporary directory will be used. terminal_timeout = 600 integer value Timeout (in seconds) for the terminal session to be closed on inactivity. Set to 0 to disable timeout. Used only by the Socat console. 5.1.11. cors The following table outlines the options available under the [cors] group in the /etc/ironic/ironic.conf file. Table 5.10. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials. allow_headers = [] list value Indicate which header field names may be used during the actual request. allow_methods = ['OPTIONS', 'GET', 'HEAD', 'POST', 'PUT', 'DELETE', 'TRACE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = [] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 5.1.12. database The following table outlines the options available under the [database] group in the /etc/ironic/ironic.conf file. Table 5.11. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval.
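For example, a [database] section pointing ironic at a MySQL or MariaDB backend could look like the following sketch; the connection string is a placeholder and the values simply restate the defaults:

[database]
# SQLAlchemy URL for the ironic database (placeholder host and credentials).
connection = mysql+pymysql://ironic:IRONIC_DBPASS@db.example.com/ironic?charset=utf8
# Recycle pooled connections after an hour.
connection_recycle_time = 3600
# Back off progressively on failed database operations.
db_inc_retry_interval = True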
db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). Deprecated since: 12.1.0 Reason: Support for the MySQL NDB Cluster storage engine has been deprecated and will be removed in a future release. mysql_engine = InnoDB string value MySQL engine to use. mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= mysql_wsrep_sync_wait = None integer value For Galera only, configure wsrep_sync_wait causality checks on new connections. Default is None, meaning don't configure any setting. pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. 5.1.13. deploy The following table outlines the options available under the [deploy] group in the /etc/ironic/ironic.conf file. Table 5.12. deploy Configuration option = Default value Type Description configdrive_use_object_store = False boolean value Whether to upload the config drive to object store. Set this option to True to store the config drive in a swift endpoint. continue_if_disk_secure_erase_fails = False boolean value Defines what to do if a secure erase operation (NVMe or ATA) fails during cleaning in the Ironic Python Agent. If False, the cleaning operation will fail and the node will be put in the clean failed state. If True, shred will be invoked and cleaning will continue. create_configuration_priority = None integer value Priority to run the in-band clean step that creates RAID configuration from devices, via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 0 for the GenericHardwareManager). If set to 0, will not run during cleaning. default_boot_mode = uefi string value Default boot mode to use when no boot mode is requested in the node's driver_info, capabilities or in the instance_info configuration. Currently the default boot mode is "uefi", but it was "bios" previously in Ironic. It is recommended to set an explicit value for this option, and if the setting or default differs from nodes, to ensure that nodes are configured specifically for their desired boot mode.
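As an illustration of the [deploy] options above, the following sketch sets the boot mode explicitly and treats a failed secure erase as a hard failure; adjust both to your hardware and cleaning policy:

[deploy]
# Be explicit about the boot mode rather than relying on the default.
default_boot_mode = uefi
# Fail cleaning instead of falling back to shred when secure erase fails.
continue_if_disk_secure_erase_fails = False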
delete_configuration_priority = None integer value Priority to run in-band clean step that erases RAID configuration from devices, via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 0 for the GenericHardwareManager). If set to 0, will not run during cleaning. disk_erasure_concurrency = 4 integer value Defines the target pool size used by Ironic Python Agent ramdisk to erase disk devices. The number of threads created to erase disks will not exceed this value or the number of disks to be erased. enable_ata_secure_erase = True boolean value Whether to support the use of ATA Secure Erase during the cleaning process. Defaults to True. enable_nvme_secure_erase = True boolean value Whether to support the use of NVMe Secure Erase during the cleaning process. Currently nvme-cli format command is supported with user-data and crypto modes, depending on device capabilities.Defaults to True. erase_devices_metadata_priority = None integer value Priority to run in-band clean step that erases metadata from devices, via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 99 for the GenericHardwareManager). If set to 0, will not run during cleaning. erase_devices_priority = None integer value Priority to run in-band erase devices via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 10 for the GenericHardwareManager). If set to 0, will not run during cleaning. erase_skip_read_only = False boolean value If the ironic-python-agent should skip read-only devices when running the "erase_devices" clean step where block devices are zeroed out. This requires ironic-python-agent 6.0.0 or greater. By default a read-only device will cause non-metadata based cleaning operations to fail due to the possible operational security risk of data being retained between deployments of the bare metal node. external_callback_url = None string value Agent callback URL of the bare metal API for boot methods such as virtual media, where images could be served outside of the provisioning network. Defaults to the configuration from [service_catalog]. external_http_url = None string value URL of the ironic-conductor node's HTTP server for boot methods such as virtual media, where images could be served outside of the provisioning network. Does not apply when Swift is used. Defaults to http_url. fast_track = False boolean value Whether to allow deployment agents to perform lookup, heartbeat operations during initial states of a machine lifecycle and by-pass the normal setup procedures for a ramdisk. This feature also enables power operations which are part of deployment processes to be bypassed if the ramdisk has performed a heartbeat operation using the fast_track_timeout setting. fast_track_timeout = 300 integer value Seconds for which the last heartbeat event is to be considered valid for the purpose of a fast track sequence. This setting should generally be less than the number of seconds for "Power-On Self Test" and typical ramdisk start-up. This value should not exceed the [api]ramdisk_heartbeat_timeout setting. http_image_subdir = agent_images string value The name of subdirectory under ironic-conductor node's HTTP root path which is used to place instance images for the direct deploy interface, when local HTTP service is incorporated to provide instance image instead of swift tempurls. http_root = /httpboot string value ironic-conductor node's HTTP root path. 
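A hedged sketch of enabling fast track using the options above; the timeout shown simply restates the default and must stay below the [api]ramdisk_heartbeat_timeout value:

[deploy]
fast_track = True
# Treat agent heartbeats as valid for this many seconds for fast-track purposes.
fast_track_timeout = 300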
http_url = None string value ironic-conductor node's HTTP server URL. Example: http://192.1.2.3:8080 iso_cache_size = 20480 integer value Maximum size (in MiB) of cache for master ISO images, including those in use. iso_cache_ttl = 10080 integer value Maximum TTL (in minutes) for old master ISO images in cache. iso_master_path = /var/lib/ironic/master_iso_images string value On the ironic-conductor node, directory where master ISO images are stored on disk. Setting to the empty string disables image caching. power_off_after_deploy_failure = True boolean value Whether to power off a node after deploy failure. Defaults to True. ramdisk_image_download_source = local string value Specifies whether a boot iso image should be served from its own original location using the image source url directly, or if ironic should cache the image on the conductor and serve it from ironic's own http server. shred_final_overwrite_with_zeros = True boolean value Whether to write zeros to a node's block devices after writing random data. This will write zeros to the device even when deploy.shred_random_overwrite_iterations is 0. This option is only used if a device could not be ATA Secure Erased. Defaults to True. shred_random_overwrite_iterations = 1 integer value During shred, overwrite all block devices N times with random data. This is only used if a device could not be ATA Secure Erased. Defaults to 1. 5.1.14. dhcp The following table outlines the options available under the [dhcp] group in the /etc/ironic/ironic.conf file. Table 5.13. dhcp Configuration option = Default value Type Description dhcp_provider = neutron string value DHCP provider to use. "neutron" uses Neutron, "dnsmasq" uses the Dnsmasq provider, and "none" uses a no-op provider. 5.1.15. disk_partitioner The following table outlines the options available under the [disk_partitioner] group in the /etc/ironic/ironic.conf file. Table 5.14. disk_partitioner Configuration option = Default value Type Description check_device_interval = 1 integer value After Ironic has completed creating the partition table, it continues to check for activity on the attached iSCSI device status at this interval prior to copying the image to the node, in seconds check_device_max_retries = 20 integer value The maximum number of times to check that the device is not accessed by another process. If the device is still busy after that, the disk partitioning will be treated as having failed. 5.1.16. disk_utils The following table outlines the options available under the [disk_utils] group in the /etc/ironic/ironic.conf file. Table 5.15. disk_utils Configuration option = Default value Type Description bios_boot_partition_size = 1 integer value Size of BIOS Boot partition in MiB when configuring GPT partitioned systems for local boot in BIOS. dd_block_size = 1M string value Block size to use when writing to the nodes disk. efi_system_partition_size = 200 integer value Size of EFI system partition in MiB when configuring UEFI systems for local boot. image_convert_attempts = 3 integer value Number of attempts to convert an image. image_convert_memory_limit = 2048 integer value Memory limit for "qemu-img convert" in MiB. Implemented via the address space resource limit. partition_detection_attempts = 3 integer value Maximum attempts to detect a newly created partition. partprobe_attempts = 10 integer value Maximum number of attempts to try to read the partition. 5.1.17. 
drac The following table outlines the options available under the [drac] group in the /etc/ironic/ironic.conf file. Table 5.16. drac Configuration option = Default value Type Description bios_factory_reset_timeout = 600 integer value Maximum time (in seconds) to wait for factory reset of BIOS settings to complete. boot_device_job_status_timeout = 30 integer value Maximum amount of time (in seconds) to wait for the boot device configuration job to transition to the correct state to allow a reboot or power on to complete. config_job_max_retries = 240 integer value Maximum number of retries for the configuration job to complete successfully. query_import_config_job_status_interval = 60 integer value Number of seconds to wait between checking for completed import configuration task query_raid_config_job_status_interval = 120 integer value Interval (in seconds) between periodic RAID job status checks to determine whether the asynchronous RAID configuration was successfully finished or not. raid_job_timeout = 300 integer value Maximum time (in seconds) to wait for RAID job to complete 5.1.18. glance The following table outlines the options available under the [glance] group in the /etc/ironic/ironic.conf file. Table 5.17. glance Configuration option = Default value Type Description allowed_direct_url_schemes = [] list value A list of URL schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file]. auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". num_retries = 0 integer value Number of retries when downloading an image from glance. 
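For instance, the two [glance] options above could be combined as follows to allow file:// direct URLs and retry transient download failures; whether file URLs are appropriate depends entirely on whether the conductor can actually reach those paths:

[glance]
# Permit images whose direct_url uses the file scheme.
allowed_direct_url_schemes = file
# Retry glance downloads a few times before failing the deployment.
num_retries = 3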
password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = image string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. swift_account = None string value The account that Glance uses to communicate with Swift. The format is "AUTH_uuid". "uuid" is the UUID for the account configured in the glance-api.conf. For example: "AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30". If not set, the default value is calculated based on the ID of the project used to access Swift (as set in the [swift] section). Swift temporary URL format: "endpoint_url/api_version/account/container/object_id" swift_account_prefix = AUTH string value The prefix added to the project uuid to determine the swift account. swift_api_version = v1 string value The Swift API version to create a temporary URL for. Defaults to "v1". Swift temporary URL format: "endpoint_url/api_version/account/container/object_id" swift_container = glance string value The Swift container Glance is configured to store its images in. Defaults to "glance", which is the default in glance-api.conf. Swift temporary URL format: "endpoint_url/api_version/account/container/object_id" swift_endpoint_url = None string value The "endpoint" (scheme, hostname, optional port) for the Swift URL of the form "endpoint_url/api_version/account/container/object_id". Do not include trailing "/". For example, use "https://swift.example.com". If using RADOS Gateway, endpoint may also contain /swift path; if it does not, it will be appended. Used for temporary URLs, will be fetched from the service catalog, if not provided. swift_store_multiple_containers_seed = 0 integer value This should match a config by the same name in the Glance configuration file. When set to 0, a single-tenant store will only use one container to store all images. When set to an integer value between 1 and 32, a single-tenant store will use multiple containers to store images, and this value will determine how many containers are created. swift_temp_url_cache_enabled = False boolean value Whether to cache generated Swift temporary URLs. Setting it to true is only useful when an image caching proxy is used. Defaults to False. swift_temp_url_duration = 1200 integer value The length of time in seconds that the temporary URL will be valid for. Defaults to 20 minutes. If some deploys get a 401 response code when trying to download from the temporary URL, try raising this duration. 
This value must be greater than or equal to the value for swift_temp_url_expected_download_start_delay. swift_temp_url_expected_download_start_delay = 0 integer value This is the delay (in seconds) from the time of the deploy request (when the Swift temporary URL is generated) to when the IPA ramdisk starts up and the URL is used for the image download. This value is used to check if the Swift temporary URL duration is large enough to let the image download begin. Also, if temporary URL caching is enabled, this will determine if a cached entry will still be valid when the download starts. The swift_temp_url_duration value must be greater than or equal to this option's value. Defaults to 0. swift_temp_url_key = None string value The secret token given to Swift to allow temporary URL downloads. Required for temporary URLs. For the Swift backend, the key on the service project (as set in the [swift] section) is used by default. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version. 5.1.19. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/ironic/ironic.conf file. Table 5.18. healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by the DisableByFileHealthcheck plugin. disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by the DisableByFilesPortsHealthcheck plugin. enabled = False boolean value Enable the health check endpoint at /healthcheck. Note that this is unauthenticated. More information is available at https://docs.openstack.org/oslo.middleware/latest/reference/healthcheck_plugins.html. path = /healthcheck string value The path to respond to healthcheck requests on. 5.1.20. ilo The following table outlines the options available under the [ilo] group in the /etc/ironic/ironic.conf file. Table 5.19. ilo Configuration option = Default value Type Description ca_file = None string value CA certificate file to validate iLO. cert_path = /var/lib/ironic/ilo/ string value On the ironic-conductor node, the directory where the ilo driver stores the CSR and the cert. clean_priority_clear_secure_boot_keys = 0 integer value Priority for the clear_secure_boot_keys clean step. This step is not enabled by default.
It can be enabled to clear all secure boot keys enrolled with iLO. clean_priority_reset_bios_to_default = 10 integer value Priority for reset_bios_to_default clean step. clean_priority_reset_ilo = 0 integer value Priority for reset_ilo clean step. clean_priority_reset_ilo_credential = 30 integer value Priority for reset_ilo_credential clean step. This step requires "ilo_change_password" parameter to be updated in nodes's driver_info with the new password. clean_priority_reset_secure_boot_keys_to_default = 20 integer value Priority for reset_secure_boot_keys clean step. This step will reset the secure boot keys to manufacturing defaults. client_port = 443 port value Port to be used for iLO operations client_timeout = 60 integer value Timeout (in seconds) for iLO operations default_boot_mode = auto string value Default boot mode to be used in provisioning when "boot_mode" capability is not provided in the "properties/capabilities" of the node. The default is "auto" for backward compatibility. When "auto" is specified, default boot mode will be selected based on boot mode settings on the system. file_permission = 420 integer value File permission for swift-less image hosting with the octal permission representation of file access permissions. This setting defaults to 644 , or as the octal number 0o644 in Python. This setting must be set to the octal number representation, meaning starting with 0o . kernel_append_params = nofb nomodeset vga=normal string value Additional kernel parameters to pass down to the instance kernel. These parameters can be consumed by the kernel or by the applications by reading /proc/cmdline. Mind severe cmdline size limit! Can be overridden by instance_info/kernel_append_params property. oob_erase_devices_job_status_interval = 300 integer value Interval (in seconds) between periodic erase-devices status checks to determine whether the asynchronous out-of-band erase-devices was successfully finished or not. On an average, a 300GB HDD with default pattern "overwrite" would take approximately 9 hours and 300GB SSD with default pattern "block" would take approx. 30 seconds to complete sanitize disk erase. power_wait = 2 integer value Amount of time in seconds to wait in between power operations swift_ilo_container = ironic_ilo_container string value The Swift iLO container to store data. swift_object_expiry_timeout = 900 integer value Amount of time in seconds for Swift objects to auto-expire. use_web_server_for_images = False boolean value Set this to True to use http web server to host floppy images and generated boot ISO. This requires http_root and http_url to be configured in the [deploy] section of the config file. If this is set to False, then Ironic will use Swift to host the floppy images and generated boot_iso. verify_ca = True string value CA certificate to validate iLO. This can be either a Boolean value, a path to a CA_BUNDLE file or directory with certificates of trusted CAs. If set to True the driver will verify the host certificates; if False the driver will ignore verifying the SSL certificate. If it's a path the driver will use the specified certificate or one of the certificates in the directory. Defaults to True. 5.1.21. inspector The following table outlines the options available under the [inspector] group in the /etc/ironic/ironic.conf file. Table 5.20. 
inspector Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. callback_endpoint_override = None string value endpoint to use as a callback for posting back introspection data when boot is managed by ironic. Standard keystoneauth options are used by default. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. `extra_kernel_params = ` string value extra kernel parameters to pass to the inspection ramdisk when boot is managed by ironic (not ironic-inspector). Pairs key=value separated by spaces. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password power_off = True boolean value whether to power off a node after inspection finishes. Ignored for nodes that have fast track mode enabled. project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. require_managed_boot = False boolean value require that the in-band inspection boot is fully managed by ironic. Set this to True if your installation of ironic-inspector does not have a separate PXE boot environment. service-name = None string value The default service_name for endpoint URL discovery. service-type = baremetal-introspection string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. 
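For an installation where ironic itself manages the inspection boot, the [inspector] options above are typically combined along the following lines. This is a minimal, illustrative sketch only; the extra kernel parameter is an example value, not a requirement:

[inspector]
# illustrative values - adjust to the deployment
require_managed_boot = True
extra_kernel_params = ipa-debug=1
power_off = True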
status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. status_check_period = 60 integer value period (in seconds) to check status of nodes on inspection system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.22. inventory The following table outlines the options available under the [inventory] group in the /etc/ironic/ironic.conf file. Table 5.21. inventory Configuration option = Default value Type Description data_backend = database string value The storage backend for storing introspection data. swift_data_container = introspection_data_container string value The Swift introspection data container to store the inventory data. 5.1.23. ipmi The following table outlines the options available under the [ipmi] group in the /etc/ironic/ironic.conf file. Table 5.22. ipmi Configuration option = Default value Type Description additional_retryable_ipmi_errors = [] multi valued Additional errors ipmitool may encounter, specific to the environment it is run in. cipher_suite_versions = [] list value List of possible cipher suites versions that can be supported by the hardware in case the field cipher_suite is not set for the node. command_retry_timeout = 60 integer value Maximum time in seconds to retry retryable IPMI operations. (An operation is retryable, for example, if the requested operation fails because the BMC is busy.) Setting this too high can cause the sync power state periodic task to hang when there are slow or unresponsive BMCs. debug = False boolean value Enables all ipmi commands to be executed with an additional debugging output. This is a separate option as ipmitool can log a substantial amount of misleading text when in this mode. disable_boot_timeout = True boolean value Default timeout behavior whether ironic sends a raw IPMI command to disable the 60 second timeout for booting. Setting this option to False will NOT send that command, the default value is True. It may be overridden by per-node ipmi_disable_boot_timeout option in node's driver_info field. kill_on_timeout = True boolean value Kill ipmitool process invoked by ironic to read node power state if ipmitool process does not exit after command_retry_timeout timeout expires. Recommended setting is True min_command_interval = 5 integer value Minimum time, in seconds, between IPMI operations sent to a server. There is a risk with some hardware that setting this too low may cause the BMC to crash. Recommended setting is 5 seconds. use_ipmitool_retries = False boolean value When set to True and the parameters are supported by ipmitool, the number of retries and the retry interval are passed to ipmitool as parameters, and ipmitool will do the retries. 
When set to False, ironic will retry the ipmitool commands. Recommended setting is False 5.1.24. irmc The following table outlines the options available under the [irmc] group in the /etc/ironic/ironic.conf file. Table 5.23. irmc Configuration option = Default value Type Description auth_method = basic string value Authentication method to be used for iRMC operations clean_priority_restore_irmc_bios_config = 0 integer value Priority for restore_irmc_bios_config clean step. client_timeout = 60 integer value Timeout (in seconds) for iRMC operations fpga_ids = [] list value List of vendor IDs and device IDs for CPU FPGA to inspect. List items are in format vendorID/deviceID and separated by commas. CPU inspection will use this value to find existence of CPU FPGA in a node. If this option is not defined, then leave out CUSTOM_CPU_FPGA in node traits. Sample fpga_ids value: 0x1000/0x0079,0x2100/0x0080 gpu_ids = [] list value List of vendor IDs and device IDs for GPU device to inspect. List items are in format vendorID/deviceID and separated by commas. GPU inspection will use this value to count the number of GPU device in a node. If this option is not defined, then leave out pci_gpu_devices in capabilities property. Sample gpu_ids value: 0x1000/0x0079,0x2100/0x0080 kernel_append_params = None string value Additional kernel parameters to pass down to the instance kernel. These parameters can be consumed by the kernel or by the applications by reading /proc/cmdline. Mind severe cmdline size limit! Can be overridden by instance_info/kernel_append_params property. port = 443 port value Port to be used for iRMC operations query_raid_config_fgi_status_interval = 300 integer value Interval (in seconds) between periodic RAID status checks to determine whether the asynchronous RAID configuration was successfully finished or not. Foreground Initialization (FGI) will start 5 minutes after creating virtual drives. remote_image_server = None string value IP of remote image server remote_image_share_name = share string value share name of remote_image_server remote_image_share_root = /remote_image_share_root string value Ironic conductor node's "NFS" or "CIFS" root path remote_image_share_type = CIFS string value Share type of virtual media `remote_image_user_domain = ` string value Domain name of remote_image_user_name remote_image_user_name = None string value User name of remote_image_server remote_image_user_password = None string value Password of remote_image_user_name sensor_method = ipmitool string value Sensor data retrieval method. snmp_auth_proto = sha string value SNMPv3 message authentication protocol ID. Required for version v3 . The valid options are sha , sha256 , sha384 and sha512 , while sha is the only supported protocol in iRMC S4 and S5, and from iRMC S6, sha256 , sha384 and sha512 are supported, but sha is not supported any more. snmp_community = public string value SNMP community. Required for versions "v1" and "v2c" snmp_polling_interval = 10 integer value SNMP polling interval in seconds snmp_port = 161 port value SNMP port snmp_priv_proto = aes string value SNMPv3 message privacy (encryption) protocol ID. Required for version v3 . aes is supported. snmp_security = None string value SNMP security name. Required for version v3 . snmp_version = v2c string value SNMP protocol version 5.1.25. ironic_lib The following table outlines the options available under the [ironic_lib] group in the /etc/ironic/ironic.conf file. Table 5.24. 
ironic_lib Configuration option = Default value Type Description fatal_exception_format_errors = False boolean value Used if there is a formatting error when generating an exception message (a programming error). If True, raise an exception; if False, use the unformatted message. root_helper = sudo ironic-rootwrap /etc/ironic/rootwrap.conf string value Command that is prefixed to commands that are run as root. If not specified, no commands are run as root. 5.1.26. json_rpc The following table outlines the options available under the [json_rpc] group in the /etc/ironic/ironic.conf file. Table 5.25. json_rpc Configuration option = Default value Type Description allowed_roles = ['admin'] list value List of roles allowed to use JSON RPC auth-url = None string value Authentication URL auth_strategy = None string value Authentication strategy used by JSON RPC. Defaults to the global auth_strategy setting. auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to host_ip = :: host address value The IP address or hostname on which JSON RPC will listen. http_basic_auth_user_file = /etc/ironic/htpasswd-json-rpc string value Path to Apache format user authentication file used when auth_strategy=http_basic http_basic_password = None string value Password to use for HTTP Basic authentication client requests. http_basic_username = None string value Name of the user to use for HTTP Basic authentication client requests. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password port = 8089 port value The port to use for JSON RPC project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use use_ssl = False boolean value Whether to use TLS for JSON RPC user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username 5.1.27. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/ironic/ironic.conf file. Table 5.26. 
keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = internal string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. 
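As a rough illustration of how the token-validation options above are commonly tuned, the fragment below selects the internal Identity interface and adjusts the connection and memcache pool timeouts. The values are examples, not recommendations, and only a subset of the required credential options is shown:

[keystone_authtoken]
# example tuning values only
auth_type = password
interface = internal
http_connect_timeout = 10
http_request_max_retries = 3
memcache_pool_maxsize = 10
memcache_pool_socket_timeout = 3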
memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = True boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 5.1.28. mdns The following table outlines the options available under the [mdns] group in the /etc/ironic/ironic.conf file. Table 5.27. mdns Configuration option = Default value Type Description interfaces = None list value List of IP addresses of interfaces to use for mDNS. Defaults to all interfaces on the system. lookup_attempts = 3 integer value Number of attempts to lookup a service. params = {} dict value Additional parameters to pass for the registered service. registration_attempts = 5 integer value Number of attempts to register a service. Currently has to be larger than 1 because of race conditions in the zeroconf library. 5.1.29. metrics The following table outlines the options available under the [metrics] group in the /etc/ironic/ironic.conf file. Table 5.28. 
metrics Configuration option = Default value Type Description agent_backend = noop string value Backend for the agent ramdisk to use for metrics. Default possible backends are "noop" and "statsd". agent_global_prefix = None string value Prefix all metric names sent by the agent ramdisk with this value. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name. agent_prepend_host = False boolean value Prepend the hostname to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name. agent_prepend_host_reverse = True boolean value Split the prepended host value by "." and reverse it for metrics sent by the agent ramdisk (to better match the reverse hierarchical form of domain names). agent_prepend_uuid = False boolean value Prepend the node's Ironic uuid to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name. backend = noop string value Backend to use for the metrics system. global_prefix = None string value Prefix all metric names with this value. By default, there is no global prefix. The format of metric names is [global_prefix.][host_name.]prefix.metric_name. prepend_host = False boolean value Prepend the hostname to all metric names. The format of metric names is [global_prefix.][host_name.]prefix.metric_name. prepend_host_reverse = True boolean value Split the prepended host value by "." and reverse it (to better match the reverse hierarchical form of domain names). 5.1.30. metrics_statsd The following table outlines the options available under the [metrics_statsd] group in the /etc/ironic/ironic.conf file. Table 5.29. metrics_statsd Configuration option = Default value Type Description agent_statsd_host = localhost string value Host for the agent ramdisk to use with the statsd backend. This must be accessible from networks the agent is booted on. agent_statsd_port = 8125 port value Port for the agent ramdisk to use with the statsd backend. statsd_host = localhost string value Host for use with the statsd backend. statsd_port = 8125 port value Port to use with the statsd backend. 5.1.31. molds The following table outlines the options available under the [molds] group in the /etc/ironic/ironic.conf file. Table 5.30. molds Configuration option = Default value Type Description password = None string value Password for "http" Basic auth. By default set empty. retry_attempts = 3 integer value Retry attempts for saving or getting configuration molds. retry_interval = 3 integer value Retry interval for saving or getting configuration molds. storage = swift string value Configuration mold storage location. Supports "swift" and "http". By default "swift". user = None string value User for "http" Basic auth. By default set empty. 5.1.32. neutron The following table outlines the options available under the [neutron] group in the /etc/ironic/ironic.conf file. Table 5.31. neutron Configuration option = Default value Type Description add_all_ports = False boolean value Option to enable transmission of all ports to neutron when creating ports for provisioning, cleaning, or rescue. This is done without IP addresses assigned to the port, and may be useful in some bonded network configurations. auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. 
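The [neutron] client options introduced above are usually set alongside the service credentials. A minimal, hedged sketch (the CA bundle path is an example; the remaining keystoneauth credential options follow the same pattern as in the other client sections):

[neutron]
# example values only
auth_type = password
cafile = /etc/pki/tls/certs/ca-bundle.crt
add_all_ports = False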
certfile = None string value PEM encoded client certificate cert file cleaning_network = None string value Neutron network UUID or name for the ramdisk to be booted into for cleaning nodes. Required for "neutron" network interface. It is also required if cleaning nodes when using "flat" network interface or "neutron" DHCP provider. If a name is provided, it must be unique among all networks or cleaning will fail. cleaning_network_security_groups = [] list value List of Neutron Security Group UUIDs to be applied during cleaning of the nodes. Optional for the "neutron" network interface and not used for the "flat" or "noop" network interfaces. If not specified, default security group is used. collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. dhcpv6_stateful_address_count = 4 integer value Number of IPv6 addresses to allocate for ports created for provisioning, cleaning, rescue or inspection on DHCPv6-stateful networks. Different stages of the chain-loading process will request addresses with different CLID/IAID. Due to non-identical identifiers multiple addresses must be reserved for the host to ensure each step of the boot process can successfully lease addresses. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. inspection_network = None string value Neutron network UUID or name for the ramdisk to be booted into for in-band inspection of nodes. If a name is provided, it must be unique among all networks or inspection will fail. inspection_network_security_groups = [] list value List of Neutron Security Group UUIDs to be applied during the node inspection process. Optional for the "neutron" network interface and not used for the "flat" or "noop" network interfaces. If not specified, the default security group is used. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password port_setup_delay = 0 integer value Delay value to wait for Neutron agents to setup sufficient DHCP configuration for port. 
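To show how the cleaning and inspection network options above fit together, here is an illustrative fragment. The network names and the security group UUID are placeholders; as noted above, a network name must be unique among all Networking service networks:

[neutron]
# placeholder network names and UUID - replace with real values
cleaning_network = cleaning-net
cleaning_network_security_groups = 8ef5b1f5-cf58-4e43-8f4d-9d621c94ba5e
inspection_network = inspection-net
port_setup_delay = 15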
project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to provisioning_network = None string value Neutron network UUID or name for the ramdisk to be booted into for provisioning nodes. Required for "neutron" network interface. If a name is provided, it must be unique among all networks or deploy will fail. provisioning_network_security_groups = [] list value List of Neutron Security Group UUIDs to be applied during provisioning of the nodes. Optional for the "neutron" network interface and not used for the "flat" or "noop" network interfaces. If not specified, default security group is used. region-name = None string value The default region_name for endpoint URL discovery. request_timeout = 45 integer value Timeout for request processing when interacting with Neutron. This value should be increased if neutron port action timeouts are observed as neutron performs pre-commit validation prior returning to the API client which can take longer than normal client/server interactions. rescuing_network = None string value Neutron network UUID or name for booting the ramdisk for rescue mode. This is not the network that the rescue ramdisk will use post-boot - the tenant network is used for that. Required for "neutron" network interface, if rescue mode will be used. It is not used for the "flat" or "noop" network interfaces. If a name is provided, it must be unique among all networks or rescue will fail. rescuing_network_security_groups = [] list value List of Neutron Security Group UUIDs to be applied during the node rescue process. Optional for the "neutron" network interface and not used for the "flat" or "noop" network interfaces. If not specified, the default security group is used. retries = 3 integer value DEPRECATED: Client retries in the case of a failed request. service-name = None string value The default service_name for endpoint URL discovery. service-type = network string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.33. nova The following table outlines the options available under the [nova] group in the /etc/ironic/ironic.conf file. Table 5.32. 
nova Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. send_power_notifications = True boolean value When set to True, it will enable the support for power state change callbacks to nova. This option should be set to False in deployments that do not have the openstack compute service. service-name = None string value The default service_name for endpoint URL discovery. service-type = compute string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. 
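For example, as the send_power_notifications description above notes, a deployment without the Compute service would typically disable the power state callbacks; a minimal sketch:

[nova]
# only needed when no Compute service is present
send_power_notifications = False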
system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.34. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/ironic/ironic.conf file. Table 5.33. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 5.1.35. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/ironic/ironic.conf file. Table 5.34. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. 
Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. 
Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 5.1.36. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/ironic/ironic.conf file. Table 5.35. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate `ssl_client_cert_file = ` string value Client certificate PEM file used for authentication. `ssl_client_key_file = ` string value Client key PEM file used for authentication. `ssl_client_key_password = ` string value Client key password file used for authentication. 5.1.37. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/ironic/ironic.conf file. Table 5.36. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The Drivers(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 5.1.38. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/ironic/ironic.conf file. Table 5.37. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. 
If rabbit_quorum_queue is enabled, queues will be durable and this value will be ignored. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception will be used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will no longer be possible to deactivate this functionality. enable_cancel_on_failover = False boolean value Enable the x-cancel-on-ha-failover flag so that the rabbitmq server will cancel and notify consumers when a queue is down. heartbeat_in_pthread = False boolean value Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. This option should be set to True only for the wsgi services. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold the heartbeat is checked. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait for a missing client before abandoning the attempt to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait (in seconds) before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_quorum_delivery_limit = 0 integer value Each time a message is redelivered to a consumer, a counter is incremented. Once the redelivery count exceeds the delivery limit the message gets dropped or dead-lettered (if a DLX exchange has been configured). Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set. rabbit_quorum_max_memory_bytes = 0 integer value By default, all messages are maintained in memory; if a quorum queue grows in length, it can put memory pressure on a cluster. This option can limit the number of memory bytes used by the quorum queue.
Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set. rabbit_quorum_max_memory_length = 0 integer value By default, all messages are maintained in memory; if a quorum queue grows in length, it can put memory pressure on a cluster. This option can limit the number of messages in the quorum queue. Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set. rabbit_quorum_queue = False boolean value Use quorum queues in RabbitMQ (x-queue-type: quorum). The quorum queue is a modern queue type for RabbitMQ implementing a durable, replicated FIFO queue based on the Raft consensus algorithm. It is available as of RabbitMQ 3.8.0. If set, this option conflicts with HA queues ( rabbit_ha_queues ), also known as mirrored queues; in other words, HA queues should be disabled. Quorum queues are durable by default, so the amqp_durable_queues option is ignored when this option is enabled. rabbit_retry_backoff = 2 integer value How long to back off between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). ssl_enforce_fips_mode = False boolean value Global toggle for enforcing the OpenSSL FIPS mode. This feature requires Python support. This is available in Python 3.9 in all environments and may have been backported to older Python versions on select environments. If the Python executable used does not support OpenSSL FIPS mode, an exception will be raised. `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 5.1.39. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/ironic/ironic.conf file. Table 5.38. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. 5.1.40. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/ironic/ironic.conf file. Table 5.39. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together. If False , the deprecated policy check string is logically OR'd with the new policy check string, allowing for a graceful upgrade experience between releases with new policies, which is the default behavior.
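As the description above encourages, enforce_new_defaults is usually enabled together with the enforce_scope flag covered next; an illustrative fragment:

[oslo_policy]
# enable both flags together to adopt the new policy defaults
enforce_new_defaults = True
enforce_scope = True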
enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 5.1.41. oslo_reports The following table outlines the options available under the [oslo_reports] group in the /etc/ironic/ironic.conf file. Table 5.40. oslo_reports Configuration option = Default value Type Description file_event_handler = None string value The path to a file to watch for changes to trigger the reports, instead of signals. Setting this option disables the signal trigger for the reports. If application is running as a WSGI application it is recommended to use this instead of signals. file_event_handler_interval = 1 integer value How many seconds to wait between polls when file_event_handler is set log_dir = None string value Path to a log directory where to create a file 5.1.42. profiler The following table outlines the options available under the [profiler] group in the /etc/ironic/ironic.conf file. Table 5.41. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. 
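Tying the [profiler] options above together, a hedged example that enables profiling and sends spans to a Redis backend. The Redis endpoint is one of the example connection strings listed above, and the secret key placeholder must be replaced as described for hmac_keys below:

[profiler]
# illustrative backend and placeholder key
enabled = True
connection_string = redis://127.0.0.1:6379
hmac_keys = SECRET_KEY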
es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines the maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filtering of traces that contain an error or exception into a separate place. Default value is set to False. Possible values: True: Enable filtering of traces that contain an error or exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent on it. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 5.1.43. pxe The following table outlines the options available under the [pxe] group in the /etc/ironic/ironic.conf file. Table 5.42. pxe Configuration option = Default value Type Description boot_retry_check_interval = 90 integer value Interval (in seconds) between periodic checks on PXE boot retry. Has no effect if boot_retry_timeout is not set. boot_retry_timeout = None integer value Timeout (in seconds) after which PXE boot should be retried. Must be less than [conductor]deploy_callback_timeout. Disabled by default. default_ephemeral_format = ext4 string value Default file system format for ephemeral partition, if one is created. dir_permission = None integer value The permission that will be applied to the TFTP folders upon creation. This should be set to the permission such that the TFTP server has access to read the contents of the configured TFTP folder. This setting is only required when the operating system's umask is restrictive such that ironic-conductor is creating files that cannot be read by the TFTP server. Setting to <None> will result in the operating system's umask to be utilized for the creation of new tftp folders. The system default umask is masked out on the specified value. It is required that an octal representation is specified.
For example: 0o755 enable_netboot_fallback = False boolean value If True, generate a PXE environment even for nodes that use local boot. This is useful when the driver cannot switch nodes to local boot, e.g. with SNMP or with Redfish on machines that cannot do persistent boot. Mostly useful for standalone ironic since Neutron will prevent incorrect PXE boot. file_permission = 420 integer value The permission which is used on files created as part of configuration and setup of file assets for PXE based operations. Defaults to a value of 0o644. This value must be specified as an octal representation. For example: 0o644 image_cache_size = 20480 integer value Maximum size (in MiB) of cache for master images, including those in use. image_cache_ttl = 10080 integer value Maximum TTL (in minutes) for old master images in cache. images_path = /var/lib/ironic/images/ string value On the ironic-conductor node, directory where images are stored on disk. initial_grub_template = USDpybasedir/drivers/modules/initial_grub_cfg.template string value On ironic-conductor node, the path to the initial grubconfiguration template for grub network boot. instance_master_path = /var/lib/ironic/master_images string value On the ironic-conductor node, directory where master instance images are stored on disk. Setting to the empty string disables image caching. ip_version = 4 string value The IP version that will be used for PXE booting. Defaults to 4. This option has been a no-op for in-treedrivers since the Ussuri development cycle. ipxe_boot_script = USDpybasedir/drivers/modules/boot.ipxe string value On ironic-conductor node, the path to the main iPXE script file. ipxe_bootfile_name = undionly.kpxe string value Bootfile DHCP parameter. ipxe_bootfile_name_by_arch = {} dict value Bootfile DHCP parameter per node architecture. For example: aarch64:ipxe_aa64.efi ipxe_config_template = USDpybasedir/drivers/modules/ipxe_config.template string value On ironic-conductor node, template file for iPXE operations. ipxe_fallback_script = None string value File name (e.g. inspector.ipxe) of an iPXE script to fall back to when booting to a MAC-specific script fails. When not set, booting will fail in this case. ipxe_timeout = 0 integer value Timeout value (in seconds) for downloading an image via iPXE. Defaults to 0 (no timeout) ipxe_use_swift = False boolean value Download deploy and rescue images directly from swift using temporary URLs. If set to false (default), images are downloaded to the ironic-conductor node and served over its local HTTP server. Applicable only when ipxe compatible boot interface is used. kernel_append_params = nofb nomodeset vga=normal string value Additional append parameters for baremetal PXE boot. loader_file_paths = {} dict value Dictionary describing the bootloaders to load into conductor PXE/iPXE boot folders values from the host operating system. Formatted as key of destination file name, and value of a full path to a file to be copied. File assets will have [pxe]file_permission applied, if set. If used, the file names should match established bootloader configuration settings for bootloaders. Use example: ipxe.efi:/usr/share/ipxe/ipxe-snponly-x86_64.efi,undionly.kpxe:/usr/share/ipxe/undionly.kpxe pxe_bootfile_name = pxelinux.0 string value Bootfile DHCP parameter. pxe_bootfile_name_by_arch = {} dict value Bootfile DHCP parameter per node architecture. 
For example: aarch64:grubaa64.efi pxe_config_subdir = pxelinux.cfg string value Directory in which to create symbolic links which represent the MAC or IP address of the ports on a node and allow boot loaders to load the PXE file for the node. This directory name is relative to the PXE or iPXE folders. pxe_config_template = USDpybasedir/drivers/modules/pxe_config.template string value On ironic-conductor node, template file for PXE loader configuration. pxe_config_template_by_arch = {} dict value On ironic-conductor node, template file for PXE configuration per node architecture. For example: aarch64:/opt/share/grubaa64_pxe_config.template tftp_master_path = /tftpboot/master_images string value On ironic-conductor node, directory where master TFTP images are stored on disk. Setting to the empty string disables image caching. tftp_root = /tftpboot string value ironic-conductor node's TFTP root path. The ironic-conductor must have read/write access to this path. tftp_server = USDmy_ip string value IP address of ironic-conductor node's TFTP server. uefi_ipxe_bootfile_name = snponly.efi string value Bootfile DHCP parameter for UEFI boot mode. If you experience problems with booting using it, try ipxe.efi. uefi_pxe_bootfile_name = bootx64.efi string value Bootfile DHCP parameter for UEFI boot mode. uefi_pxe_config_template = USDpybasedir/drivers/modules/pxe_grub_config.template string value On ironic-conductor node, template file for PXE configuration for UEFI boot loader. Generally this is used for GRUB specific templates. 5.1.44. redfish The following table outlines the options available under the [redfish] group in the /etc/ironic/ironic.conf file. Table 5.43. redfish Configuration option = Default value Type Description auth_type = auto string value Redfish HTTP client authentication method. connection_attempts = 5 integer value Maximum number of attempts to try to connect to Redfish connection_cache_size = 1000 integer value Maximum Redfish client connection cache size. Redfish driver would strive to reuse authenticated BMC connections (obtained through Redfish Session Service). This option caps the maximum number of connections to maintain. The value of 0 disables client connection caching completely. connection_retry_interval = 4 integer value Number of seconds to wait between attempts to connect to Redfish file_permission = 420 integer value File permission for swift-less image hosting with the octal permission representation of file access permissions. This setting defaults to 644 , or as the octal number 0o644 in Python. This setting must be set to the octal number representation, meaning starting with 0o . firmware_source = http string value Specifies how firmware image should be served. Whether from its original location using the firmware source URL directly, or should serve it from ironic's Swift or HTTP server. firmware_update_fail_interval = 60 integer value Number of seconds to wait between checking for failed firmware update tasks firmware_update_status_interval = 60 integer value Number of seconds to wait between checking for completed firmware update tasks kernel_append_params = nofb nomodeset vga=normal string value Additional kernel parameters to pass down to the instance kernel. These parameters can be consumed by the kernel or by the applications by reading /proc/cmdline. Mind severe cmdline size limit! Can be overridden by instance_info/kernel_append_params property. 
raid_config_fail_interval = 60 integer value Number of seconds to wait between checking for failed raid config tasks raid_config_status_interval = 60 integer value Number of seconds to wait between checking for completed raid config tasks swift_container = ironic_redfish_container string value The Swift container to store Redfish driver data. Applies only when use_swift is enabled. swift_object_expiry_timeout = 900 integer value Amount of time in seconds for Swift objects to auto-expire. Applies only when use_swift is enabled. use_swift = True boolean value Upload generated ISO images for virtual media boot to Swift, then pass temporary URL to BMC for booting the node. If set to false, images are placed on the ironic-conductor node and served over its local HTTP server. 5.1.45. sensor_data The following table outlines the options available under the [sensor_data] group in the /etc/ironic/ironic.conf file. Table 5.44. sensor_data Configuration option = Default value Type Description data_types = ['ALL'] list value List of comma separated meter types which need to be sent to Ceilometer. The default value, "ALL", is a special value meaning send all the sensor data. This setting only applies to baremetal sensor data being processed through the conductor. enable_for_conductor = True boolean value If to include sensor metric data for the Conductor process itself in the message payload for sensor data which allows operators to gather instance counts of actions and states to better manage the deployment. enable_for_nodes = True boolean value If to transmit any sensor data for any nodes under this conductor's management. This option superceeds the send_sensor_data_for_undeployed_nodes setting. enable_for_undeployed_nodes = False boolean value The default for sensor data collection is to only collect data for machines that are deployed, however operators may desire to know if there are failures in hardware that is not presently in use. When set to true, the conductor will collect sensor information from all nodes when sensor data collection is enabled via the send_sensor_data setting. interval = 600 integer value Seconds between conductor sending sensor data message via the notification bus. This was originally for consumption via ceilometer, but the data may also be consumed via a plugin like ironic-prometheus-exporter or any other message bus data collector. send_sensor_data = False boolean value Enable sending sensor data message via the notification bus. wait_timeout = 300 integer value The time in seconds to wait for send sensors data periodic task to be finished before allowing periodic call to happen again. Should be less than send_sensor_data_interval value. workers = 4 integer value The maximum number of workers that can be started simultaneously for send data from sensors periodic task. 5.1.46. service_catalog The following table outlines the options available under the [service_catalog] group in the /etc/ironic/ironic.conf file. Table 5.45. service_catalog Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. 
connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = baremetal string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.47. snmp The following table outlines the options available under the [snmp] group in the /etc/ironic/ironic.conf file. Table 5.46. snmp Configuration option = Default value Type Description power_action_delay = 0 integer value Time (in seconds) to sleep before power on and after powering off. 
Which may be needed with some PDUs as they may not honor toggling a specific power port in rapid succession without a delay. This option may be useful if the attached physical machine has a substantial power supply to hold it over in the event of a brownout. power_timeout = 10 integer value Seconds to wait for power action to be completed reboot_delay = 0 integer value Time (in seconds) to sleep between when rebooting (powering off and on again) udp_transport_retries = 5 integer value Maximum number of UDP request retries, 0 means no retries. udp_transport_timeout = 1.0 floating point value Response timeout in seconds used for UDP transport. Timeout should be a multiple of 0.5 seconds and is applicable to each retry. 5.1.48. ssl The following table outlines the options available under the [ssl] group in the /etc/ironic/ironic.conf file. Table 5.47. ssl Configuration option = Default value Type Description ca_file = None string value CA certificate file to use to verify connecting clients. cert_file = None string value Certificate file to use when starting the server securely. ciphers = None string value Sets the list of available ciphers. value should be a string in the OpenSSL cipher list format. key_file = None string value Private key file to use when starting the server securely. version = None string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 5.1.49. swift The following table outlines the options available under the [swift] group in the /etc/ironic/ironic.conf file. Table 5.48. swift Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. 
Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = object-store string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. swift_max_retries = 2 integer value Maximum number of times to retry a Swift request, before failing. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.50. xclarity The following table outlines the options available under the [xclarity] group in the /etc/ironic/ironic.conf file. Table 5.49. xclarity Configuration option = Default value Type Description manager_ip = None string value IP address of the XClarity Controller. Configuration here is deprecated and will be removed in the Stein release. Please update the driver_info field to use "xclarity_manager_ip" instead password = None string value Password for XClarity Controller username. Configuration here is deprecated and will be removed in the Stein release. Please update the driver_info field to use "xclarity_password" instead port = 443 port value Port to be used for XClarity Controller connection. username = None string value Username for the XClarity Controller. Configuration here is deprecated and will be removed in the Stein release. Please update the driver_info field to use "xclarity_username" instead
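All of the groups above are set in the same /etc/ironic/ironic.conf file. As a brief illustration of how options documented in this chapter fit together, the following is a minimal sketch of a configuration that adjusts PXE, Redfish, sensor data, and SNMP behavior; every value shown is an illustrative assumption, not a recommended or default setting.

[pxe]
# Serve PXE assets from this conductor's TFTP server; address is illustrative.
tftp_server = 192.0.2.10
ipxe_timeout = 120
uefi_ipxe_bootfile_name = snponly.efi

[redfish]
# Place virtual media images on the conductor instead of uploading to Swift.
use_swift = False
connection_attempts = 5
connection_retry_interval = 4

[sensor_data]
# Send sensor data to the notification bus every 10 minutes.
send_sensor_data = True
interval = 600

[snmp]
# Give PDU-managed machines a short delay around power actions.
power_action_delay = 5
reboot_delay = 5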
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuration_reference/ironic
|
Chapter 8. Troubleshooting
|
Chapter 8. Troubleshooting The following chapter describes what happens when SELinux denies access; the top three causes of problems; where to find information about correct labeling; analyzing SELinux denials; and creating custom policy modules with audit2allow . 8.1. What Happens when Access is Denied SELinux decisions, such as allowing or disallowing access, are cached. This cache is known as the Access Vector Cache (AVC). Denial messages are logged when SELinux denies access. These denials are also known as "AVC denials", and are logged to a different location depending on which daemons are running: if auditd is running, denials are logged to /var/log/audit/audit.log ; if auditd is off and rsyslogd is running, they are logged to /var/log/messages ; if setroubleshootd, rsyslogd, and auditd are all running, denials are logged to /var/log/audit/audit.log , and easier-to-read denial messages are also sent to /var/log/messages . If you are running the X Window System, have the setroubleshoot and setroubleshoot-server packages installed, and the setroubleshootd and auditd daemons are running, a warning is displayed when access is denied by SELinux. Clicking on 'Show' presents a detailed analysis of why SELinux denied access, and a possible solution for allowing access. If you are not running the X Window System, it is less obvious when access is denied by SELinux. For example, users browsing your website may receive an error similar to the following: For these situations, if DAC rules (standard Linux permissions) allow access, check /var/log/messages and /var/log/audit/audit.log for "SELinux is preventing" and "denied" errors respectively. This can be done by running the following commands as the Linux root user:
|
[
"Forbidden You don't have permission to access file name on this server",
"~]# grep \"SELinux is preventing\" /var/log/messages",
"~]# grep \"denied\" /var/log/audit/audit.log"
] |
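The audit2allow workflow mentioned at the start of this chapter, and covered in detail later, builds on the same audit log: once a denial has been located with the grep commands above, it can be converted into a custom policy module. A minimal sketch follows; the module name mylocalpolicy is an illustrative assumption, and a denial should only be allowed this way after confirming it is not caused by incorrect labeling or a Boolean setting.

~]# grep "denied" /var/log/audit/audit.log | audit2allow -M mylocalpolicy
~]# semodule -i mylocalpolicy.pp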
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/chap-Security-Enhanced_Linux-Troubleshooting
|
Chapter 3. Creating applications
|
Chapter 3. Creating applications 3.1. Using templates The following sections provide an overview of templates, as well as how to use and create them. 3.1.1. Understanding templates A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform. A template can be processed to create anything you have permission to create within a project, for example services, build configurations, and deployment configurations. A template can also define a set of labels to apply to every object defined in the template. You can create a list of objects from a template using the CLI or, if a template has been uploaded to your project or the global template library, using the web console. 3.1.2. Uploading a template If you have a JSON or YAML file that defines a template, you can upload the template to projects using the CLI. This saves the template to the project for repeated use by any user with appropriate access to that project. Instructions about writing your own templates are provided later in this topic. Procedure Upload a template using one of the following methods: Upload a template to your current project's template library, pass the JSON or YAML file with the following command: USD oc create -f <filename> Upload a template to a different project using the -n option with the name of the project: USD oc create -f <filename> -n <project> The template is now available for selection using the web console or the CLI. 3.1.3. Creating an application by using the web console You can use the web console to create an application from a template. Procedure Select Developer from the context selector at the top of the web console navigation menu. While in the desired project, click +Add Click All services in the Developer Catalog tile. Click Builder Images under Type to see the available builder images. Note Only image stream tags that have the builder tag listed in their annotations appear in this list, as demonstrated here: kind: "ImageStream" apiVersion: "image.openshift.io/v1" metadata: name: "ruby" creationTimestamp: null spec: # ... tags: - name: "2.6" annotations: description: "Build and run Ruby 2.6 applications" iconClass: "icon-ruby" tags: "builder,ruby" 1 supports: "ruby:2.6,ruby" version: "2.6" # ... 1 Including builder here ensures this image stream tag appears in the web console as a builder. Modify the settings in the new application screen to configure the objects to support your application. 3.1.4. Creating objects from templates by using the CLI You can use the CLI to process templates and use the configuration that is generated to create objects. 3.1.4.1. Adding labels Labels are used to manage and organize generated objects, such as pods. The labels specified in the template are applied to every object that is generated from the template. Procedure Add labels in the template from the command line: USD oc process -f <filename> -l name=otherLabel 3.1.4.2. Listing parameters The list of parameters that you can override are listed in the parameters section of the template. 
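For reference, the parameters section being listed here is an ordinary block in the template definition. A minimal sketch of such a section, using parameter names that also appear in the quick start listing below, looks like the following; running oc process --parameters against a template containing this block prints one row per parameter, as shown in the procedure that follows.

parameters:
- name: SOURCE_REPOSITORY_URL
  description: The URL of the repository with your application source code
  value: https://github.com/sclorg/rails-ex.git
- name: GITHUB_WEBHOOK_SECRET
  description: A secret string used to configure the GitHub webhook
  generate: expression
  from: "[a-zA-Z0-9]{40}"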
Procedure You can list parameters with the CLI by using the following command and specifying the file to be used: USD oc process --parameters -f <filename> Alternatively, if the template is already uploaded: USD oc process --parameters -n <project> <template_name> For example, the following shows the output when listing the parameters for one of the quick start templates in the default openshift project: USD oc process --parameters -n openshift rails-postgresql-example Example output NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB The output identifies several parameters that are generated with a regular expression-like generator when the template is processed. 3.1.4.3. Generating a list of objects Using the CLI, you can process a file defining a template to return the list of objects to standard output. Procedure Process a file defining a template to return the list of objects to standard output: USD oc process -f <filename> Alternatively, if the template has already been uploaded to the current project: USD oc process <template_name> Create objects from a template by processing the template and piping the output to oc create : USD oc process -f <filename> | oc create -f - Alternatively, if the template has already been uploaded to the current project: USD oc process <template> | oc create -f - You can override any parameter values defined in the file by adding the -p option for each <name>=<value> pair you want to override. A parameter reference appears in any text field inside the template items. 
For example, in the following the POSTGRESQL_USER and POSTGRESQL_DATABASE parameters of a template are overridden to output a configuration with customized environment variables: Creating a List of objects from a template USD oc process -f my-rails-postgresql \ -p POSTGRESQL_USER=bob \ -p POSTGRESQL_DATABASE=mydatabase The JSON file can either be redirected to a file or applied directly without uploading the template by piping the processed output to the oc create command: USD oc process -f my-rails-postgresql \ -p POSTGRESQL_USER=bob \ -p POSTGRESQL_DATABASE=mydatabase \ | oc create -f - If you have large number of parameters, you can store them in a file and then pass this file to oc process : USD cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase USD oc process -f my-rails-postgresql --param-file=postgres.env You can also read the environment from standard input by using "-" as the argument to --param-file : USD sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=- 3.1.5. Modifying uploaded templates You can edit a template that has already been uploaded to your project. Procedure Modify a template that has already been uploaded: USD oc edit template <template> 3.1.6. Using instant app and quick start templates OpenShift Container Platform provides a number of default instant app and quick start templates to make it easy to quickly get started creating a new application for different languages. Templates are provided for Rails (Ruby), Django (Python), Node.js, CakePHP (PHP), and Dancer (Perl). Your cluster administrator must create these templates in the default, global openshift project so you have access to them. By default, the templates build using a public source repository on GitHub that contains the necessary application code. Procedure You can list the available default instant app and quick start templates with: USD oc get templates -n openshift To modify the source and build your own version of the application: Fork the repository referenced by the template's default SOURCE_REPOSITORY_URL parameter. Override the value of the SOURCE_REPOSITORY_URL parameter when creating from the template, specifying your fork instead of the default value. By doing this, the build configuration created by the template now points to your fork of the application code, and you can modify the code and rebuild the application at will. Note Some of the instant app and quick start templates define a database deployment configuration. The configuration they define uses ephemeral storage for the database content. These templates should be used for demonstration purposes only as all database data is lost if the database pod restarts for any reason. 3.1.6.1. Quick start templates A quick start template is a basic example of an application running on OpenShift Container Platform. Quick starts come in a variety of languages and frameworks, and are defined in a template, which is constructed from a set of services, build configurations, and deployment configurations. This template references the necessary images and source repositories to build and deploy the application. To explore a quick start, create an application from a template. Your administrator must have already installed these templates in your OpenShift Container Platform cluster, in which case you can simply select it from the web console. Quick starts refer to a source repository that contains the application source code. 
To customize the quick start, fork the repository and, when creating an application from the template, substitute the default source repository name with your forked repository. This results in builds that are performed using your source code instead of the provided example source. You can then update the code in your source repository and launch a new build to see the changes reflected in the deployed application. 3.1.6.1.1. Web framework quick start templates These quick start templates provide a basic application of the indicated framework and language: CakePHP: a PHP web framework that includes a MySQL database Dancer: a Perl web framework that includes a MySQL database Django: a Python web framework that includes a PostgreSQL database NodeJS: a NodeJS web application that includes a MongoDB database Rails: a Ruby web framework that includes a PostgreSQL database 3.1.7. Writing templates You can define new templates to make it easy to recreate all the objects of your application. The template defines the objects it creates along with some metadata to guide the creation of those objects. The following is an example of a simple template object definition (YAML): apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: "Description" iconClass: "icon-redis" tags: "database,nosql" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master 3.1.7.1. Writing the template description The template description informs you what the template does and helps you find it when searching in the web console. Additional metadata beyond the template name is optional, but useful to have. In addition to general descriptive information, the metadata also includes a set of tags. Useful tags include the name of the language the template is related to for example, Java, PHP, Ruby, and so on. The following is an example of template description metadata: kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: "CakePHP MySQL Example (Ephemeral)" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing." 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: "quickstart,php,cakephp" 5 iconClass: icon-php 6 openshift.io/provider-display-name: "Red Hat, Inc." 7 openshift.io/documentation-url: "https://github.com/sclorg/cakephp-ex" 8 openshift.io/support-url: "https://access.redhat.com" 9 message: "Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}" 10 1 The unique name of the template. 2 A brief, user-friendly name, which can be employed by user interfaces. 3 A description of the template. 
Include enough detail that users understand what is being deployed and any caveats they must know before deploying. It should also provide links to additional information, such as a README file. Newlines can be included to create paragraphs. 4 Additional template description. This may be displayed by the service catalog, for example. 5 Tags to be associated with the template for searching and grouping. Add tags that include it into one of the provided catalog categories. Refer to the id and categoryAliases in CATALOG_CATEGORIES in the console constants file. The categories can also be customized for the whole cluster. 6 An icon to be displayed with your template in the web console. Example 3.1. Available icons icon-3scale icon-aerogear icon-amq icon-angularjs icon-ansible icon-apache icon-beaker icon-camel icon-capedwarf icon-cassandra icon-catalog-icon icon-clojure icon-codeigniter icon-cordova icon-datagrid icon-datavirt icon-debian icon-decisionserver icon-django icon-dotnet icon-drupal icon-eap icon-elastic icon-erlang icon-fedora icon-freebsd icon-git icon-github icon-gitlab icon-glassfish icon-go-gopher icon-golang icon-grails icon-hadoop icon-haproxy icon-helm icon-infinispan icon-jboss icon-jenkins icon-jetty icon-joomla icon-jruby icon-js icon-knative icon-kubevirt icon-laravel icon-load-balancer icon-mariadb icon-mediawiki icon-memcached icon-mongodb icon-mssql icon-mysql-database icon-nginx icon-nodejs icon-openjdk icon-openliberty icon-openshift icon-openstack icon-other-linux icon-other-unknown icon-perl icon-phalcon icon-php icon-play iconpostgresql icon-processserver icon-python icon-quarkus icon-rabbitmq icon-rails icon-redhat icon-redis icon-rh-integration icon-rh-spring-boot icon-rh-tomcat icon-ruby icon-scala icon-serverlessfx icon-shadowman icon-spring-boot icon-spring icon-sso icon-stackoverflow icon-suse icon-symfony icon-tomcat icon-ubuntu icon-vertx icon-wildfly icon-windows icon-wordpress icon-xamarin icon-zend 7 The name of the person or organization providing the template. 8 A URL referencing further documentation for the template. 9 A URL where support can be obtained for the template. 10 An instructional message that is displayed when this template is instantiated. This field should inform the user how to use the newly created resources. Parameter substitution is performed on the message before being displayed so that generated credentials and other parameters can be included in the output. Include links to any -steps documentation that users should follow. 3.1.7.2. Writing template labels Templates can include a set of labels. These labels are added to each object created when the template is instantiated. Defining a label in this way makes it easy for users to find and manage all the objects created from a particular template. The following is an example of template object labels: kind: "Template" apiVersion: "v1" ... labels: template: "cakephp-mysql-example" 1 app: "USD{NAME}" 2 1 A label that is applied to all objects created from this template. 2 A parameterized label that is also applied to all objects created from this template. Parameter expansion is carried out on both label keys and values. 3.1.7.3. Writing template parameters Parameters allow a value to be supplied by you or generated when the template is instantiated. Then, that value is substituted wherever the parameter is referenced. References can be defined in any field in the objects list field. 
This is useful for generating random passwords or allowing you to supply a hostname or other user-specific value that is required to customize the template. Parameters can be referenced in two ways: As a string value by placing values in the form USD{PARAMETER_NAME} in any string field in the template. As a JSON or YAML value by placing values in the form USD{{PARAMETER_NAME}} in place of any field in the template. When using the USD{PARAMETER_NAME} syntax, multiple parameter references can be combined in a single field and the reference can be embedded within fixed data, such as "http://USD{PARAMETER_1}USD{PARAMETER_2}" . Both parameter values are substituted and the resulting value is a quoted string. When using the USD{{PARAMETER_NAME}} syntax only a single parameter reference is allowed and leading and trailing characters are not permitted. The resulting value is unquoted unless, after substitution is performed, the result is not a valid JSON object. If the result is not a valid JSON value, the resulting value is quoted and treated as a standard string. A single parameter can be referenced multiple times within a template and it can be referenced using both substitution syntaxes within a single template. A default value can be provided, which is used if you do not supply a different value: The following is an example of setting an explicit value as the default value: parameters: - name: USERNAME description: "The user name for Joe" value: joe Parameter values can also be generated based on rules specified in the parameter definition, for example generating a parameter value: parameters: - name: PASSWORD description: "The random user password" generate: expression from: "[a-zA-Z0-9]{12}" In the example, processing generates a random password 12 characters long consisting of all upper and lowercase alphabet letters and numbers. The syntax available is not a full regular expression syntax. However, you can use \w , \d , \a , and \A modifiers: [\w]{10} produces 10 alphabet characters, numbers, and underscores. This follows the PCRE standard and is equal to [a-zA-Z0-9_]{10} . [\d]{10} produces 10 numbers. This is equal to [0-9]{10} . [\a]{10} produces 10 alphabetical characters. This is equal to [a-zA-Z]{10} . [\A]{10} produces 10 punctuation or symbol characters. This is equal to [~!@#USD%\^&*()\-_+={}\[\]\\|<,>.?/"';:`]{10} . Note Depending on if the template is written in YAML or JSON, and the type of string that the modifier is embedded within, you might need to escape the backslash with a second backslash. 
The following examples are equivalent: Example YAML template with a modifier parameters: - name: singlequoted_example generate: expression from: '[\A]{10}' - name: doublequoted_example generate: expression from: "[\\A]{10}" Example JSON template with a modifier { "parameters": [ { "name": "json_example", "generate": "expression", "from": "[\\A]{10}" } ] } Here is an example of a full template with parameter definitions and references: kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: "USD{SOURCE_REPOSITORY_URL}" 1 ref: "USD{SOURCE_REPOSITORY_REF}" contextDir: "USD{CONTEXT_DIR}" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: "USD{{REPLICA_COUNT}}" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: "[a-zA-Z0-9]{40}" 9 - name: REPLICA_COUNT description: Number of replicas to run value: "2" required: true message: "... The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ..." 10 1 This value is replaced with the value of the SOURCE_REPOSITORY_URL parameter when the template is instantiated. 2 This value is replaced with the unquoted value of the REPLICA_COUNT parameter when the template is instantiated. 3 The name of the parameter. This value is used to reference the parameter within the template. 4 The user-friendly name for the parameter. This is displayed to users. 5 A description of the parameter. Provide more detailed information for the purpose of the parameter, including any constraints on the expected value. Descriptions should use complete sentences to follow the console's text standards. Do not make this a duplicate of the display name. 6 A default value for the parameter which is used if you do not override the value when instantiating the template. Avoid using default values for things like passwords, instead use generated parameters in combination with secrets. 7 Indicates this parameter is required, meaning you cannot override it with an empty value. If the parameter does not provide a default or generated value, you must supply a value. 8 A parameter which has its value generated. 9 The input to the generator. In this case, the generator produces a 40 character alphanumeric value including upper and lowercase characters. 10 Parameters can be included in the template message. This informs you about generated values. 3.1.7.4. Writing the template object list The main portion of the template is the list of objects which is created when the template is instantiated. This can be any valid API object, such as a build configuration, deployment configuration, or service. The object is created exactly as defined here, with any parameter values substituted in prior to creation. The definition of these objects can reference parameters defined earlier. 
The following is an example of an object list: kind: "Template" apiVersion: "v1" metadata: name: my-template objects: - kind: "Service" 1 apiVersion: "v1" metadata: name: "cakephp-mysql-example" annotations: description: "Exposes and load balances the application pods" spec: ports: - name: "web" port: 8080 targetPort: 8080 selector: name: "cakephp-mysql-example" 1 The definition of a service, which is created by this template. Note If an object definition metadata includes a fixed namespace field value, the field is stripped out of the definition during template instantiation. If the namespace field contains a parameter reference, normal parameter substitution is performed and the object is created in whatever namespace the parameter substitution resolved the value to, assuming the user has permission to create objects in that namespace. 3.1.7.5. Marking a template as bindable The Template Service Broker advertises one service in its catalog for each template object of which it is aware. By default, each of these services is advertised as being bindable, meaning an end user is permitted to bind against the provisioned service. Procedure Template authors can prevent end users from binding against services provisioned from a given template. Prevent end user from binding against services provisioned from a given template by adding the annotation template.openshift.io/bindable: "false" to the template. 3.1.7.6. Exposing template object fields Template authors can indicate that fields of particular objects in a template should be exposed. The Template Service Broker recognizes exposed fields on ConfigMap , Secret , Service , and Route objects, and returns the values of the exposed fields when a user binds a service backed by the broker. To expose one or more fields of an object, add annotations prefixed by template.openshift.io/expose- or template.openshift.io/base64-expose- to the object in the template. Each annotation key, with its prefix removed, is passed through to become a key in a bind response. Each annotation value is a Kubernetes JSONPath expression, which is resolved at bind time to indicate the object field whose value should be returned in the bind response. Note Bind response key-value pairs can be used in other parts of the system as environment variables. Therefore, it is recommended that every annotation key with its prefix removed should be a valid environment variable name - beginning with a character A-Z , a-z , or _ , and being followed by zero or more characters A-Z , a-z , 0-9 , or _ . Note Unless escaped with a backslash, Kubernetes' JSONPath implementation interprets characters such as . , @ , and others as metacharacters, regardless of their position in the expression. Therefore, for example, to refer to a ConfigMap datum named my.key , the required JSONPath expression would be {.data['my\.key']} . Depending on how the JSONPath expression is then written in YAML, an additional backslash might be required, for example "{.data['my\\.key']}" . 
The following is an example of different objects' fields being exposed: kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: "{.data['my\\.username']}" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: "{.data['password']}" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: "{.spec.clusterIP}:{.spec.ports[?(.name==\"web\")].port}" spec: ports: - name: "web" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: "http://{.spec.host}{.spec.path}" spec: path: mypath An example response to a bind operation given the above partial template follows: { "credentials": { "username": "foo", "password": "YmFy", "service_ip_port": "172.30.12.34:8080", "uri": "http://route-test.router.default.svc.cluster.local/mypath" } } Procedure Use the template.openshift.io/expose- annotation to return the field value as a string. This is convenient, although it does not handle arbitrary binary data. If you want to return binary data, use the template.openshift.io/base64-expose- annotation instead to base64 encode the data before it is returned. 3.1.7.7. Waiting for template readiness Template authors can indicate that certain objects within a template should be waited for before a template instantiation by the service catalog, Template Service Broker, or TemplateInstance API is considered complete. To use this feature, mark one or more objects of kind Build , BuildConfig , Deployment , DeploymentConfig , Job , or StatefulSet in a template with the following annotation: "template.alpha.openshift.io/wait-for-ready": "true" Template instantiation is not complete until all objects marked with the annotation report ready. Similarly, if any of the annotated objects report failed, or if the template fails to become ready within a fixed timeout of one hour, the template instantiation fails. For the purposes of instantiation, readiness and failure of each object kind are defined as follows: Kind Readiness Failure Build Object reports phase complete. Object reports phase canceled, error, or failed. BuildConfig Latest associated build object reports phase complete. Latest associated build object reports phase canceled, error, or failed. Deployment Object reports new replica set and deployment available. This honors readiness probes defined on the object. Object reports progressing condition as false. DeploymentConfig Object reports new replication controller and deployment available. This honors readiness probes defined on the object. Object reports progressing condition as false. Job Object reports completion. Object reports that one or more failures have occurred. StatefulSet Object reports all replicas ready. This honors readiness probes defined on the object. Not applicable. The following is an example template extract, which uses the wait-for-ready annotation. Further examples can be found in the OpenShift Container Platform quick start templates. kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: ... 
annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: "true" spec: ... - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: ... annotations: template.alpha.openshift.io/wait-for-ready: "true" spec: ... - kind: Service apiVersion: v1 metadata: name: ... spec: ... Additional recommendations Set memory, CPU, and storage default sizes to make sure your application is given enough resources to run smoothly. Avoid referencing the latest tag from images if that tag is used across major versions. This can cause running applications to break when new images are pushed to that tag. A good template builds and deploys cleanly without requiring modifications after the template is deployed. 3.1.7.8. Creating a template from existing objects Rather than writing an entire template from scratch, you can export existing objects from your project in YAML form, and then modify the YAML from there by adding parameters and other customizations as template form. Procedure Export objects in a project in YAML form: USD oc get -o yaml all > <yaml_filename> You can also substitute a particular resource type or multiple resources instead of all . Run oc get -h for more examples. The object types included in oc get -o yaml all are: BuildConfig Build DeploymentConfig ImageStream Pod ReplicationController Route Service Note Using the all alias is not recommended because the contents might vary across different clusters and versions. Instead, specify all required resources. 3.2. Creating applications by using the Developer perspective The Developer perspective in the web console provides you the following options from the +Add view to create applications and associated services and deploy them on OpenShift Container Platform: Getting started resources : Use these resources to help you get started with Developer Console. You can choose to hide the header using the Options menu . Creating applications using samples : Use existing code samples to get started with creating applications on the OpenShift Container Platform. Build with guided documentation : Follow the guided documentation to build applications and familiarize yourself with key concepts and terminologies. Explore new developer features : Explore the new features and resources within the Developer perspective. Developer catalog : Explore the Developer Catalog to select the required applications, services, or source to image builders, and then add it to your project. All Services : Browse the catalog to discover services across OpenShift Container Platform. Database : Select the required database service and add it to your application. Operator Backed : Select and deploy the required Operator-managed service. Helm chart : Select the required Helm chart to simplify deployment of applications and services. Devfile : Select a devfile from the Devfile registry to declaratively define a development environment. Event Source : Select an event source to register interest in a class of events from a particular system. Note The Managed services option is also available if the RHOAS Operator is installed. Git repository : Import an existing codebase, Devfile, or Dockerfile from your Git repository using the From Git , From Devfile , or From Dockerfile options respectively, to build and deploy an application on OpenShift Container Platform. 
Container images : Use existing images from an image stream or registry to deploy it on to the OpenShift Container Platform. Pipelines : Use Tekton pipeline to create CI/CD pipelines for your software delivery process on the OpenShift Container Platform. Serverless : Explore the Serverless options to create, build, and deploy stateless and serverless applications on the OpenShift Container Platform. Channel : Create a Knative channel to create an event forwarding and persistence layer with in-memory and reliable implementations. Samples : Explore the available sample applications to create, build, and deploy an application quickly. Quick Starts : Explore the quick start options to create, import, and run applications with step-by-step instructions and tasks. From Local Machine : Explore the From Local Machine tile to import or upload files on your local machine for building and deploying applications easily. Import YAML : Upload a YAML file to create and define resources for building and deploying applications. Upload JAR file : Upload a JAR file to build and deploy Java applications. Share my Project : Use this option to add or remove users to a project and provide accessibility options to them. Helm Chart repositories : Use this option to add Helm Chart repositories in a namespace. Re-ordering of resources : Use these resources to re-order pinned resources added to your navigation pane. The drag-and-drop icon is displayed on the left side of the pinned resource when you hover over it in the navigation pane. The dragged resource can be dropped only in the section where it resides. Note that certain options, such as Pipelines , Event Source , and Import Virtual Machines , are displayed only when the OpenShift Pipelines Operator , OpenShift Serverless Operator , and OpenShift Virtualization Operator are installed, respectively. 3.2.1. Prerequisites To create applications using the Developer perspective ensure that: You have logged in to the web console . You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. To create serverless applications, in addition to the preceding prerequisites, ensure that: You have installed the OpenShift Serverless Operator . You have created a KnativeServing resource in the knative-serving namespace . 3.2.2. Creating sample applications You can use the sample applications in the +Add flow of the Developer perspective to create, build, and deploy applications quickly. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click the Samples tile to see the Samples page. On the Samples page, select one of the available sample applications to see the Create Sample Application form. In the Create Sample Application Form : In the Name field, the deployment name is displayed by default. You can modify this name as required. In the Builder Image Version , a builder image is selected by default. You can modify this image version by using the Builder Image Version drop-down list. A sample Git repository URL is added by default. Click Create to create the sample application. The build status of the sample application is displayed on the Topology view. After the sample application is created, you can see the deployment added to the application. 3.2.3. 
Creating applications by using Quick Starts The Quick Starts page shows you how to create, import, and run applications on OpenShift Container Platform, with step-by-step instructions and tasks. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click the Getting Started resources Build with guided documentation View all quick starts link to view the Quick Starts page. In the Quick Starts page, click the tile for the quick start that you want to use. Click Start to begin the quick start. Perform the steps that are displayed. 3.2.4. Importing a codebase from Git to create an application You can use the Developer perspective to create, build, and deploy an application on OpenShift Container Platform using an existing codebase in GitHub. The following procedure walks you through the From Git option in the Developer perspective to create an application. Procedure In the +Add view, click From Git in the Git Repository tile to see the Import from git form. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. For example, enter the URL of this sample Node.js application https://github.com/sclorg/nodejs-ex . The URL is then validated. Optional: You can click Show Advanced Git Options to add details such as: Git Reference to point to code in a specific branch, tag, or commit to be used to build the application. Context Dir to specify the subdirectory for the application source code you want to use to build the application. Source Secret to create a Secret Name with credentials for pulling your source code from a private repository. Optional: You can import a Devfile , a Dockerfile , a Builder Image , or a Serverless Function through your Git repository to further customize your deployment. If your Git repository contains a Devfile , a Dockerfile , a Builder Image , or a func.yaml , it is automatically detected and populated on the respective path fields. If a Devfile , a Dockerfile , or a Builder Image is detected in the same repository, the Devfile is selected by default. If func.yaml is detected in the Git repository, the Import Strategy changes to Serverless Function . Alternatively, you can create a serverless function by clicking Create Serverless function in the +Add view using the Git repository URL. To edit the file import type and select a different strategy, click the Edit import strategy option. If multiple Devfiles , Dockerfiles , or Builder Images are detected, to import a specific instance, specify the respective paths relative to the context directory. After the Git URL is validated, the recommended builder image is selected and marked with a star. If the builder image is not auto-detected, select a builder image. For the https://github.com/sclorg/nodejs-ex Git URL, by default the Node.js builder image is selected. Optional: Use the Builder Image Version drop-down to specify a version. Optional: Use the Edit import strategy option to select a different strategy. Optional: For the Node.js builder image, use the Run command field to override the command to run the application. In the General section: In the Application field, enter a unique name for the application grouping, for example, myapp . Ensure that the application name is unique in a namespace. The Name field to identify the resources created for this application is automatically populated based on the Git repository URL if there are no existing applications.
If there are existing applications, you can choose to deploy the component within an existing application, create a new application, or keep the component unassigned. Note The resource name must be unique in a namespace. Modify the resource name if you get an error. In the Resources section, select: Deployment , to create an application in plain Kubernetes style. Deployment Config , to create an OpenShift Container Platform style application. Serverless Deployment , to create a Knative service. Note To set the default resource preference for importing an application, go to User Preferences Applications Resource type field. The Serverless Deployment option is displayed in the Import from Git form only if the OpenShift Serverless Operator is installed in your cluster. The Resources section is not available while creating a serverless function. For further details, refer to the OpenShift Serverless documentation. In the Pipelines section, select Add Pipeline , and then click Show Pipeline Visualization to see the pipeline for the application. A default pipeline is selected, but you can choose the pipeline you want from the list of available pipelines for the application. Note The Add pipeline checkbox is checked and Configure PAC is selected by default if the following criteria are fulfilled: Pipeline operator is installed pipelines-as-code is enabled .tekton directory is detected in the Git repository Add a webhook to your repository. If Configure PAC is checked and the GitHub App is set up, you can see the Use GitHub App and Setup a webhook options. If GitHub App is not set up, you can only see the Setup a webhook option: Go to Settings Webhooks and click Add webhook . Set the Payload URL to the Pipelines as Code controller public URL. Select the content type as application/json . Add a webhook secret and note it in an alternate location. With openssl installed on your local machine, generate a random secret. Click Let me select individual events and select these events: Commit comments , Issue comments , Pull request , and Pushes . Click Add webhook . Optional: In the Advanced Options section, the Target port and the Create a route to the application check box are selected by default so that you can access your application using a publicly available URL. If your application does not expose its data on the default public port, 80, clear the check box, and set the target port number you want to expose. Optional: You can use the following advanced options to further customize your application: Routing By clicking the Routing link, you can perform the following actions: Customize the hostname for the route. Specify the path the router watches. Select the target port for the traffic from the drop-down list. Secure your route by selecting the Secure Route check box. Select the required TLS termination type and set a policy for insecure traffic from the respective drop-down lists. Note For serverless applications, the Knative service manages all the routing options above. However, you can customize the target port for traffic, if required. If the target port is not specified, the default port of 8080 is used. Domain mapping If you are creating a Serverless Deployment , you can add a custom domain mapping to the Knative service during creation. In the Advanced options section, click Show advanced Routing options . If the domain mapping CR that you want to map to the service already exists, you can select it from the Domain mapping drop-down menu.
If you want to create a new domain mapping CR, type the domain name into the box, and select the Create option. For example, if you type in example.com , the Create option is Create "example.com" . Health Checks Click the Health Checks link to add Readiness, Liveness, and Startup probes to your application. All the probes have prepopulated default data; you can add the probes with the default data or customize it as required. To customize the health probes: Click Add Readiness Probe , if required, modify the parameters to check if the container is ready to handle requests, and select the check mark to add the probe. Click Add Liveness Probe , if required, modify the parameters to check if a container is still running, and select the check mark to add the probe. Click Add Startup Probe , if required, modify the parameters to check if the application within the container has started, and select the check mark to add the probe. For each of the probes, you can specify the request type - HTTP GET , Container Command , or TCP Socket , from the drop-down list. The form changes as per the selected request type. You can then modify the default values for the other parameters, such as the success and failure thresholds for the probe, number of seconds before performing the first probe after the container starts, frequency of the probe, and the timeout value. Build Configuration and Deployment Click the Build Configuration and Deployment links to see the respective configuration options. Some options are selected by default; you can customize them further by adding the necessary triggers and environment variables. For serverless applications, the Deployment option is not displayed as the Knative configuration resource maintains the desired state for your deployment instead of a DeploymentConfig resource. Scaling Click the Scaling link to define the number of pods or instances of the application you want to deploy initially. If you are creating a serverless deployment, you can also configure the following settings: Min Pods determines the lower limit for the number of pods that must be running at any given time for a Knative service. This is also known as the minScale setting. Max Pods determines the upper limit for the number of pods that can be running at any given time for a Knative service. This is also known as the maxScale setting. Concurrency target determines the number of concurrent requests desired for each instance of the application at a given time. Concurrency limit determines the limit for the number of concurrent requests allowed for each instance of the application at a given time. Concurrency utilization determines the percentage of the concurrent requests limit that must be met before Knative scales up additional pods to handle additional traffic. Autoscale window defines the time window over which metrics are averaged to provide input for scaling decisions when the autoscaler is not in panic mode. A service is scaled-to-zero if no requests are received during this window. The default duration for the autoscale window is 60s . This is also known as the stable window. Resource Limit Click the Resource Limit link to set the amount of CPU and Memory resources a container is guaranteed or allowed to use when running. Labels Click the Labels link to add custom labels to your application. Click Create to create the application and a success notification is displayed. You can see the build status of the application in the Topology view. 3.2.5. 
Creating applications by deploying container image You can use an external image registry or an image stream tag from an internal registry to deploy an application on your cluster. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click Container images to view the Deploy Images page. In the Image section: Select Image name from external registry to deploy an image from a public or a private registry, or select Image stream tag from internal registry to deploy an image from an internal registry. Select an icon for your image in the Runtime icon tab. In the General section: In the Application name field, enter a unique name for the application grouping. In the Name field, enter a unique name to identify the resources created for this component. In the Resource type section, select the resource type to generate: Select Deployment to enable declarative updates for Pod and ReplicaSet objects. Select DeploymentConfig to define the template for a Pod object, and manage deploying new images and configuration sources. Select Serverless Deployment to enable scaling to zero when idle. Click Create . You can view the build status of the application in the Topology view. 3.2.6. Deploying a Java application by uploading a JAR file You can use the web console Developer perspective to upload a JAR file by using the following options: Navigate to the +Add view of the Developer perspective, and click Upload JAR file in the From Local Machine tile. Browse and select your JAR file, or drag a JAR file to deploy your application. Navigate to the Topology view and use the Upload JAR file option, or drag a JAR file to deploy your application. Use the in-context menu in the Topology view, and then use the Upload JAR file option to upload your JAR file to deploy your application. Prerequisites The Cluster Samples Operator must be installed by a cluster administrator. You have access to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the Topology view, right-click anywhere to view the Add to Project menu. Hover over the Add to Project menu to see the menu options, and then select the Upload JAR file option to see the Upload JAR file form. Alternatively, you can drag the JAR file into the Topology view. In the JAR file field, browse for the required JAR file on your local machine and upload it. Alternatively, you can drag the JAR file on to the field. A toast alert is displayed at the top right if an incompatible file type is dragged into the Topology view. A field error is displayed if an incompatible file type is dropped on the field in the upload form. The runtime icon and builder image are selected by default. If a builder image is not auto-detected, select a builder image. If required, you can change the version using the Builder Image Version drop-down list. Optional: In the Application Name field, enter a unique name for your application to use for resource labelling. In the Name field, enter a unique component name for the associated resources. Optional: Use the Resource type drop-down list to change the resource type. In the Advanced options menu, click Create a Route to the Application to configure a public URL for your deployed application. Click Create to deploy the application. A toast notification is shown to notify you that the JAR file is being uploaded. The toast notification also includes a link to view the build logs. 
Note If you attempt to close the browser tab while the build is running, a web alert is displayed. After the JAR file is uploaded and the application is deployed, you can view the application in the Topology view. 3.2.7. Using the Devfile registry to access devfiles You can use the devfiles in the +Add flow of the Developer perspective to create an application. The +Add flow provides a complete integration with the devfile community registry . A devfile is a portable YAML file that describes your development environment without needing to configure it from scratch. Using the Devfile registry , you can use a preconfigured devfile to create an application. Procedure Navigate to Developer Perspective +Add Developer Catalog All Services . A list of all the available services in the Developer Catalog is displayed. Under Type , click Devfiles to browse for devfiles that support a particular language or framework. Alternatively, you can use the keyword filter to search for a particular devfile using their name, tag, or description. Click the devfile you want to use to create an application. The devfile tile displays the details of the devfile, including the name, description, provider, and the documentation of the devfile. Click Create to create an application and view the application in the Topology view. 3.2.8. Using the Developer Catalog to add services or components to your application You use the Developer Catalog to deploy applications and services based on Operator backed services such as Databases, Builder Images, and Helm Charts. The Developer Catalog contains a collection of application components, services, event sources, or source-to-image builders that you can add to your project. Cluster administrators can customize the content made available in the catalog. Procedure In the Developer perspective, navigate to the +Add view and from the Developer Catalog tile, click All Services to view all the available services in the Developer Catalog . Under All Services , select the kind of service or the component you need to add to your project. For this example, select Databases to list all the database services and then click MariaDB to see the details for the service. Click Instantiate Template to see an automatically populated template with details for the MariaDB service, and then click Create to create and view the MariaDB service in the Topology view. Figure 3.1. MariaDB in Topology 3.2.9. Additional resources For more information about Knative routing settings for OpenShift Serverless, see Routing . For more information about domain mapping settings for OpenShift Serverless, see Configuring a custom domain for a Knative service . For more information about Knative autoscaling settings for OpenShift Serverless, see Autoscaling . For more information about adding a new user to a project, see Working with projects . For more information about creating a Helm Chart repository, see Creating Helm Chart repositories . 3.3. Creating applications from installed Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. You can create applications on OpenShift Container Platform using Operators that have been installed by a cluster administrator. This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. Additional resources See the Operators guide for more on how Operators work and how the Operator Lifecycle Manager is integrated in OpenShift Container Platform. 
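If you prefer the CLI, a roughly equivalent flow to instantiating the MariaDB template from the Developer Catalog is to locate the template in the openshift namespace, inspect its parameters, and instantiate it with oc new-app. This is only a sketch: the template name mariadb-ephemeral and the MYSQL_* parameter names are assumptions based on the default sample templates, so confirm the exact names with oc process --parameters in your cluster first.
oc get templates -n openshift | grep -i mariadb
oc process --parameters -n openshift mariadb-ephemeral
oc new-app --template=mariadb-ephemeral -p MYSQL_USER=user -p MYSQL_PASSWORD=pass -p MYSQL_DATABASE=sampledb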
3.3.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.16 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similar to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.4. Creating applications by using the CLI You can create an OpenShift Container Platform application from components that include source or binary code, images, and templates by using the OpenShift Container Platform CLI. The set of objects created by new-app depends on the artifacts passed as input: source repositories, images, or templates. 3.4.1. Creating an application from source code With the new-app command you can create applications from source code in a local or remote Git repository. The new-app command creates a build configuration, which itself creates a new application image from your source code. The new-app command typically also creates a Deployment object to deploy the new image, and a service to provide load-balanced access to the deployment running your image. 
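As a minimal sketch of that behavior, assuming the sample repository below is reachable from your cluster, you can create an application from source and then list the objects that new-app generated. The app=my-cakephp label selector is an assumption based on the app label that new-app normally applies to the objects it creates:
oc new-app https://github.com/sclorg/cakephp-ex --name=my-cakephp
oc status
oc get buildconfig,imagestream,deployment,service -l app=my-cakephp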
OpenShift Container Platform automatically detects whether the pipeline, source, or docker build strategy should be used, and in the case of source build, detects an appropriate language builder image. 3.4.1.1. Local To create an application from a Git repository in a local directory: USD oc new-app /<path to source code> Note If you use a local Git repository, the repository must have a remote named origin that points to a URL that is accessible by the OpenShift Container Platform cluster. If there is no recognized remote, running the new-app command will create a binary build. 3.4.1.2. Remote To create an application from a remote Git repository: USD oc new-app https://github.com/sclorg/cakephp-ex To create an application from a private remote Git repository: USD oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret Note If you use a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your build config to access the repository. You can use a subdirectory of your source code repository by specifying a --context-dir flag. To create an application from a remote Git repository and a context subdirectory: USD oc new-app https://github.com/sclorg/s2i-ruby-container.git \ --context-dir=2.0/test/puma-test-app Also, when specifying a remote URL, you can specify a Git branch to use by appending #<branch_name> to the end of the URL: USD oc new-app https://github.com/openshift/ruby-hello-world.git#beta4 3.4.1.3. Build strategy detection OpenShift Container Platform automatically determines which build strategy to use by detecting certain files: If a Jenkins file exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a pipeline build strategy. Note The pipeline build strategy is deprecated; consider using Red Hat OpenShift Pipelines instead. If a Dockerfile exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a docker build strategy. If neither a Jenkins file nor a Dockerfile is detected, OpenShift Container Platform generates a source build strategy. Override the automatically detected build strategy by setting the --strategy flag to docker , pipeline , or source . USD oc new-app /home/user/code/myapp --strategy=docker Note The oc command requires that files containing build sources are available in a remote Git repository. For all source builds, you must use git remote -v . 3.4.1.4. Language detection If you use the source build strategy, new-app attempts to determine the language builder to use by the presence of certain files in the root or specified context directory of the repository: Table 3.1. Languages detected by new-app Language Files dotnet project.json , *.csproj jee pom.xml nodejs app.json , package.json perl cpanfile , index.pl php composer.json , index.php python requirements.txt , setup.py ruby Gemfile , Rakefile , config.ru scala build.sbt golang Godeps , main.go After a language is detected, new-app searches the OpenShift Container Platform server for image stream tags that have a supports annotation matching the detected language, or an image stream that matches the name of the detected language. If a match is not found, new-app searches the Docker Hub registry for an image that matches the detected language based on name. 
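To see which builder image streams are available for a detected language, you can inspect the image streams in the openshift namespace. The nodejs image stream shown below is only an example and is present only if the cluster samples have been installed; the supports annotation that new-app matches against appears on the individual tags in the YAML output:
oc get imagestreams -n openshift
oc get imagestream nodejs -n openshift -o yaml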
You can override the image the builder uses for a particular source repository by specifying the image, either an image stream or container specification, and the repository with a ~ as a separator. Note that if this is done, build strategy detection and language detection are not carried out. For example, to use the myproject/my-ruby imagestream with the source in a remote repository: USD oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git To use the openshift/ruby-20-centos7:latest container image stream with the source in a local repository: USD oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app Note Language detection requires the Git client to be locally installed so that your repository can be cloned and inspected. If Git is not available, you can avoid the language detection step by specifying the builder image to use with your repository with the <image>~<repository> syntax. The -i <image> <repository> invocation requires that new-app attempt to clone the repository to determine what type of artifact it is, so this will fail if Git is not available. The -i <image> --code <repository> invocation requires that new-app clone the repository to determine whether the image should be used as a builder for the source code, or deployed separately, as in the case of a database image. 3.4.2. Creating an application from an image You can deploy an application from an existing image. Images can come from image streams in the OpenShift Container Platform server, images in a specific registry, or images in the local Docker server. The new-app command attempts to determine the type of image specified in the arguments passed to it. However, you can explicitly tell new-app whether the image is a container image using the --docker-image argument or an image stream using the -i|--image-stream argument. Note If you specify an image from your local Docker repository, you must ensure that the same image is available to the OpenShift Container Platform cluster nodes. 3.4.2.1. Docker Hub MySQL image Create an application from the Docker Hub MySQL image, for example: USD oc new-app mysql 3.4.2.2. Image in a private registry To create an application using an image in a private registry, specify the full container image specification: USD oc new-app myregistry:5000/example/myimage 3.4.2.3. Existing image stream and optional image stream tag Create an application from an existing image stream and optional image stream tag: USD oc new-app my-stream:v1 3.4.3. Creating an application from a template You can create an application from a previously stored template or from a template file, by specifying the name of the template as an argument. For example, you can store a sample application template and use it to create an application. Upload an application template to your current project's template library. The following example uploads an application template from a file called examples/sample-app/application-template-stibuild.json : USD oc create -f examples/sample-app/application-template-stibuild.json Then create a new application by referencing the application template. In this example, the template name is ruby-helloworld-sample : USD oc new-app ruby-helloworld-sample To create a new application by referencing a template file in your local file system, without first storing it in OpenShift Container Platform, use the -f|--file argument. For example: USD oc new-app -f examples/sample-app/application-template-stibuild.json 3.4.3.1.
Template parameters When creating an application based on a template, use the -p|--param argument to set parameter values that are defined by the template: USD oc new-app ruby-helloworld-sample \ -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword You can store your parameters in a file, then use that file with --param-file when instantiating a template. If you want to read the parameters from standard input, use --param-file=- . The following is an example file called helloworld.params : ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword Reference the parameters in the file when instantiating a template: USD oc new-app ruby-helloworld-sample --param-file=helloworld.params 3.4.4. Modifying application creation The new-app command generates OpenShift Container Platform objects that build, deploy, and run the application that is created. Normally, these objects are created in the current project and assigned names that are derived from the input source repositories or the input images. However, with new-app you can modify this behavior. Table 3.2. new-app output objects Object Description BuildConfig A BuildConfig object is created for each source repository that is specified in the command line. The BuildConfig object specifies the strategy to use, the source location, and the build output location. ImageStreams For the BuildConfig object, two image streams are usually created. One represents the input image. With source builds, this is the builder image. With Docker builds, this is the FROM image. The second one represents the output image. If a container image was specified as input to new-app , then an image stream is created for that image as well. DeploymentConfig A DeploymentConfig object is created either to deploy the output of a build, or a specified image. The new-app command creates emptyDir volumes for all Docker volumes that are specified in containers included in the resulting DeploymentConfig object . Service The new-app command attempts to detect exposed ports in input images. It uses the lowest numeric exposed port to generate a service that exposes that port. To expose a different port, after new-app has completed, simply use the oc expose command to generate additional services. Other Other objects can be generated when instantiating templates, according to the template. 3.4.4.1. Specifying environment variables When generating applications from a template, source, or an image, you can use the -e|--env argument to pass environment variables to the application container at run time: USD oc new-app openshift/postgresql-92-centos7 \ -e POSTGRESQL_USER=user \ -e POSTGRESQL_DATABASE=db \ -e POSTGRESQL_PASSWORD=password The variables can also be read from file using the --env-file argument. The following is an example file called postgresql.env : POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password Read the variables from the file: USD oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env Additionally, environment variables can be given on standard input by using --env-file=- : USD cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=- Note Any BuildConfig objects created as part of new-app processing are not updated with environment variables passed with the -e|--env or --env-file argument. 3.4.4.2. 
Specifying build environment variables When generating applications from a template, source, or an image, you can use the --build-env argument to pass environment variables to the build container at run time: USD oc new-app openshift/ruby-23-centos7 \ --build-env HTTP_PROXY=http://myproxy.net:1337/ \ --build-env GEM_HOME=~/.gem The variables can also be read from a file using the --build-env-file argument. The following is an example file called ruby.env : HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem Read the variables from the file: USD oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env Additionally, environment variables can be given on standard input by using --build-env-file=- : USD cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=- 3.4.4.3. Specifying labels When generating applications from source, images, or templates, you can use the -l|--label argument to add labels to the created objects. Labels make it easy to collectively select, configure, and delete objects associated with the application. USD oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world 3.4.4.4. Viewing the output without creation To see a dry-run of running the new-app command, you can use the -o|--output argument with a yaml or json value. You can then use the output to preview the objects that are created or redirect it to a file that you can edit. After you are satisfied, you can use oc create to create the OpenShift Container Platform objects. To output new-app artifacts to a file, run the following: USD oc new-app https://github.com/openshift/ruby-hello-world \ -o yaml > myapp.yaml Edit the file: USD vi myapp.yaml Create a new application by referencing the file: USD oc create -f myapp.yaml 3.4.4.5. Creating objects with different names Objects created by new-app are normally named after the source repository, or the image used to generate them. You can set the name of the objects produced by adding a --name flag to the command: USD oc new-app https://github.com/openshift/ruby-hello-world --name=myapp 3.4.4.6. Creating objects in a different project Normally, new-app creates objects in the current project. However, you can create objects in a different project by using the -n|--namespace argument: USD oc new-app https://github.com/openshift/ruby-hello-world -n myproject 3.4.4.7. Creating multiple objects The new-app command allows creating multiple applications specifying multiple parameters to new-app . Labels specified in the command line apply to all objects created by the single command. Environment variables apply to all components created from source or images. To create an application from a source repository and a Docker Hub image: USD oc new-app https://github.com/openshift/ruby-hello-world mysql Note If a source code repository and a builder image are specified as separate arguments, new-app uses the builder image as the builder for the source code repository. If this is not the intent, specify the required builder image for the source using the ~ separator. 3.4.4.8. Grouping images and source in a single pod The new-app command allows deploying multiple images together in a single pod. To specify which images to group together, use the + separator. The --group command line argument can also be used to specify the images that should be grouped together. 
To group the image built from a source repository with other images, specify its builder image in the group: USD oc new-app ruby+mysql To deploy an image built from source and an external image together: USD oc new-app \ ruby~https://github.com/openshift/ruby-hello-world \ mysql \ --group=ruby+mysql 3.4.4.9. Searching for images, templates, and other inputs To search for images, templates, and other inputs for the oc new-app command, add the --search and --list flags. For example, to find all of the images or templates that include PHP: USD oc new-app --search php 3.4.4.10. Setting the import mode To set the import mode when using oc new-app , add the --import-mode flag. This flag can be appended with Legacy or PreserveOriginal , which provides users the option to create image streams using a single sub-manifest, or all manifests, respectively. USD oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test USD oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test 3.5. Creating applications using Ruby on Rails Ruby on Rails is a web framework written in Ruby. This guide covers using Rails 4 on OpenShift Container Platform. Warning Go through the whole tutorial to have an overview of all the steps necessary to run your application on the OpenShift Container Platform. If you experience a problem try reading through the entire tutorial and then going back to your issue. It can also be useful to review your steps to ensure that all the steps were run correctly. 3.5.1. Prerequisites Basic Ruby and Rails knowledge. Locally installed version of Ruby 2.0.0+, Rubygems, Bundler. Basic Git knowledge. Running instance of OpenShift Container Platform 4. Make sure that an instance of OpenShift Container Platform is running and is available. Also make sure that your oc CLI client is installed and the command is accessible from your command shell, so you can use it to log in using your email address and password. 3.5.2. Setting up the database Rails applications are almost always used with a database. For local development use the PostgreSQL database. Procedure Install the database: USD sudo yum install -y postgresql postgresql-server postgresql-devel Initialize the database: USD sudo postgresql-setup initdb This command creates the /var/lib/pgsql/data directory, in which the data is stored. Start the database: USD sudo systemctl start postgresql.service When the database is running, create your rails user: USD sudo -u postgres createuser -s rails Note that the user created has no password. 3.5.3. Writing your application If you are starting your Rails application from scratch, you must install the Rails gem first. Then you can proceed with writing your application. Procedure Install the Rails gem: USD gem install rails Example output Successfully installed rails-4.3.0 1 gem installed After you install the Rails gem, create a new application with PostgreSQL as your database: USD rails new rails-app --database=postgresql Change into your new application directory: USD cd rails-app If you already have an application, make sure the pg (postgresql) gem is present in your Gemfile . If not, edit your Gemfile by adding the gem: gem 'pg' Generate a new Gemfile.lock with all your dependencies: USD bundle install In addition to using the postgresql database with the pg gem, you also must ensure that the config/database.yml is using the postgresql adapter. 
Make sure you updated the default section in the config/database.yml file, so it looks like this: default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password> Create your application's development and test databases: USD rake db:create This creates the development and test databases in your PostgreSQL server. 3.5.3.1. Creating a welcome page Since Rails 4 no longer serves a static public/index.html page in production, you must create a new root page. To have a custom welcome page, you must do the following steps: Create a controller with an index action. Create a view page for the welcome controller index action. Create a route that serves the application's root page with the created controller and view. Rails offers a generator that completes all necessary steps for you. Procedure Run the Rails generator: USD rails generate controller welcome index All the necessary files are created. Edit line 2 in the config/routes.rb file as follows: root 'welcome#index' Run the rails server to verify the page is available: USD rails server You should see your page by visiting http://localhost:3000 in your browser. If you do not see the page, check the logs that are output to your server to debug. 3.5.3.2. Configuring application for OpenShift Container Platform To have your application communicate with the PostgreSQL database service running in OpenShift Container Platform, you must edit the default section in your config/database.yml to use environment variables, which you must define later, upon the database service creation. Procedure Edit the default section in your config/database.yml with pre-defined variables as follows: Sample config/database YAML file <% user = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? "root" : ENV["POSTGRESQL_USER"] %> <% password = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? ENV["POSTGRESQL_ADMIN_PASSWORD"] : ENV["POSTGRESQL_PASSWORD"] %> <% db_service = ENV.fetch("DATABASE_SERVICE_NAME","").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV["POSTGRESQL_MAX_CONNECTIONS"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV["#{db_service}_SERVICE_HOST"] %> port: <%= ENV["#{db_service}_SERVICE_PORT"] %> database: <%= ENV["POSTGRESQL_DATABASE"] %> 3.5.3.3. Storing your application in Git Building an application in OpenShift Container Platform usually requires that the source code be stored in a git repository, so you must install git if you do not already have it. Prerequisites Install git. Procedure Make sure you are in your Rails application directory by running the ls -1 command. The output of the command should look like: USD ls -1 Example output app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor Run the following commands in your Rails app directory to initialize and commit your code to git: USD git init USD git add . USD git commit -m "initial commit" After your application is committed, you must push it to a remote repository. This requires a GitHub account, in which you create a new repository. Set the remote that points to your git repository: USD git remote add origin git@github.com:<namespace/repository-name>.git Push your application to your remote git repository. USD git push 3.5.4. Deploying your application to OpenShift Container Platform You can deploy your application to OpenShift Container Platform.
After creating the rails-app project, you are automatically switched to the new project namespace. Deploying your application in OpenShift Container Platform involves three steps: Creating a database service from OpenShift Container Platform's PostgreSQL image. Creating a frontend service from OpenShift Container Platform's Ruby 2.0 builder image and your Ruby on Rails source code, which are wired with the database service. Creating a route for your application. Procedure To deploy your Ruby on Rails application, create a new project for the application: USD oc new-project rails-app --description="My Rails application" --display-name="Rails Application" 3.5.4.1. Creating the database service Your Rails application expects a running database service. For this service use PostgreSQL database image. To create the database service, use the oc new-app command. To this command you must pass some necessary environment variables which are used inside the database container. These environment variables are required to set the username, password, and name of the database. You can change the values of these environment variables to anything you would like. The variables are as follows: POSTGRESQL_DATABASE POSTGRESQL_USER POSTGRESQL_PASSWORD Setting these variables ensures: A database exists with the specified name. A user exists with the specified name. The user can access the specified database with the specified password. Procedure Create the database service: USD oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password To also set the password for the database administrator, append to the command with: -e POSTGRESQL_ADMIN_PASSWORD=admin_pw Watch the progress: USD oc get pods --watch 3.5.4.2. Creating the frontend service To bring your application to OpenShift Container Platform, you must specify a repository in which your application lives. Procedure Create the frontend service and specify database related environment variables that were setup when creating the database service: USD oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql With this command, OpenShift Container Platform fetches the source code, sets up the builder, builds your application image, and deploys the newly created image together with the specified environment variables. The application is named rails-app . Verify the environment variables have been added by viewing the JSON document of the rails-app deployment config: USD oc get dc rails-app -o json You should see the following section: Example output env": [ { "name": "POSTGRESQL_USER", "value": "username" }, { "name": "POSTGRESQL_PASSWORD", "value": "password" }, { "name": "POSTGRESQL_DATABASE", "value": "db_name" }, { "name": "DATABASE_SERVICE_NAME", "value": "postgresql" } ], Check the build process: USD oc logs -f build/rails-app-1 After the build is complete, look at the running pods in OpenShift Container Platform: USD oc get pods You should see a line starting with myapp-<number>-<hash> , and that is your application running in OpenShift Container Platform. Before your application is functional, you must initialize the database by running the database migration script. 
There are two ways you can do this: Manually from the running frontend container: Exec into frontend container with rsh command: USD oc rsh <frontend_pod_id> Run the migration from inside the container: USD RAILS_ENV=production bundle exec rake db:migrate If you are running your Rails application in a development or test environment you do not have to specify the RAILS_ENV environment variable. By adding pre-deployment lifecycle hooks in your template. 3.5.4.3. Creating a route for your application You can expose a service to create a route for your application. Procedure To expose a service by giving it an externally-reachable hostname like www.example.com use OpenShift Container Platform route. In your case you need to expose the frontend service by typing: USD oc expose service rails-app --hostname=www.example.com Warning Ensure the hostname you specify resolves into the IP address of the router.
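To confirm that the route exists and that the application responds, you can check the route and send a test request; the host in the request depends on the value you passed to --hostname, and if you omit --hostname the router typically generates a default host for the route:
oc get route rails-app
curl -I http://www.example.com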
|
[
"oc create -f <filename>",
"oc create -f <filename> -n <project>",
"kind: \"ImageStream\" apiVersion: \"image.openshift.io/v1\" metadata: name: \"ruby\" creationTimestamp: null spec: tags: - name: \"2.6\" annotations: description: \"Build and run Ruby 2.6 applications\" iconClass: \"icon-ruby\" tags: \"builder,ruby\" 1 supports: \"ruby:2.6,ruby\" version: \"2.6\"",
"oc process -f <filename> -l name=otherLabel",
"oc process --parameters -f <filename>",
"oc process --parameters -n <project> <template_name>",
"oc process --parameters -n openshift rails-postgresql-example",
"NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB",
"oc process -f <filename>",
"oc process <template_name>",
"oc process -f <filename> | oc create -f -",
"oc process <template> | oc create -f -",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase | oc create -f -",
"cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql --param-file=postgres.env",
"sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-",
"oc edit template <template>",
"oc get templates -n openshift",
"apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: \"Description\" iconClass: \"icon-redis\" tags: \"database,nosql\" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: \"CakePHP MySQL Example (Ephemeral)\" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing.\" 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: \"quickstart,php,cakephp\" 5 iconClass: icon-php 6 openshift.io/provider-display-name: \"Red Hat, Inc.\" 7 openshift.io/documentation-url: \"https://github.com/sclorg/cakephp-ex\" 8 openshift.io/support-url: \"https://access.redhat.com\" 9 message: \"Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}\" 10",
"kind: \"Template\" apiVersion: \"v1\" labels: template: \"cakephp-mysql-example\" 1 app: \"USD{NAME}\" 2",
"parameters: - name: USERNAME description: \"The user name for Joe\" value: joe",
"parameters: - name: PASSWORD description: \"The random user password\" generate: expression from: \"[a-zA-Z0-9]{12}\"",
"parameters: - name: singlequoted_example generate: expression from: '[\\A]{10}' - name: doublequoted_example generate: expression from: \"[\\\\A]{10}\"",
"{ \"parameters\": [ { \"name\": \"json_example\", \"generate\": \"expression\", \"from\": \"[\\\\A]{10}\" } ] }",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: \"USD{SOURCE_REPOSITORY_URL}\" 1 ref: \"USD{SOURCE_REPOSITORY_REF}\" contextDir: \"USD{CONTEXT_DIR}\" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: \"USD{{REPLICA_COUNT}}\" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: \"[a-zA-Z0-9]{40}\" 9 - name: REPLICA_COUNT description: Number of replicas to run value: \"2\" required: true message: \"... The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ...\" 10",
"kind: \"Template\" apiVersion: \"v1\" metadata: name: my-template objects: - kind: \"Service\" 1 apiVersion: \"v1\" metadata: name: \"cakephp-mysql-example\" annotations: description: \"Exposes and load balances the application pods\" spec: ports: - name: \"web\" port: 8080 targetPort: 8080 selector: name: \"cakephp-mysql-example\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: \"{.data['my\\\\.username']}\" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: \"{.data['password']}\" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: \"{.spec.clusterIP}:{.spec.ports[?(.name==\\\"web\\\")].port}\" spec: ports: - name: \"web\" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: \"http://{.spec.host}{.spec.path}\" spec: path: mypath",
"{ \"credentials\": { \"username\": \"foo\", \"password\": \"YmFy\", \"service_ip_port\": \"172.30.12.34:8080\", \"uri\": \"http://route-test.router.default.svc.cluster.local/mypath\" } }",
"\"template.alpha.openshift.io/wait-for-ready\": \"true\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: annotations: template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: Service apiVersion: v1 metadata: name: spec:",
"oc get -o yaml all > <yaml_filename>",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc new-app /<path to source code>",
"oc new-app https://github.com/sclorg/cakephp-ex",
"oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret",
"oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app",
"oc new-app https://github.com/openshift/ruby-hello-world.git#beta4",
"oc new-app /home/user/code/myapp --strategy=docker",
"oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git",
"oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app",
"oc new-app mysql",
"oc new-app myregistry:5000/example/myimage",
"oc new-app my-stream:v1",
"oc create -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample",
"oc new-app -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword",
"ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword",
"oc new-app ruby-helloworld-sample --param-file=helloworld.params",
"oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password",
"POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password",
"oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env",
"cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-",
"oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem",
"HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem",
"oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env",
"cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-",
"oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world",
"oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml",
"vi myapp.yaml",
"oc create -f myapp.yaml",
"oc new-app https://github.com/openshift/ruby-hello-world --name=myapp",
"oc new-app https://github.com/openshift/ruby-hello-world -n myproject",
"oc new-app https://github.com/openshift/ruby-hello-world mysql",
"oc new-app ruby+mysql",
"oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql",
"oc new-app --search php",
"oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test",
"oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test",
"sudo yum install -y postgresql postgresql-server postgresql-devel",
"sudo postgresql-setup initdb",
"sudo systemctl start postgresql.service",
"sudo -u postgres createuser -s rails",
"gem install rails",
"Successfully installed rails-4.3.0 1 gem installed",
"rails new rails-app --database=postgresql",
"cd rails-app",
"gem 'pg'",
"bundle install",
"default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password>",
"rake db:create",
"rails generate controller welcome index",
"root 'welcome#index'",
"rails server",
"<% user = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? \"root\" : ENV[\"POSTGRESQL_USER\"] %> <% password = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? ENV[\"POSTGRESQL_ADMIN_PASSWORD\"] : ENV[\"POSTGRESQL_PASSWORD\"] %> <% db_service = ENV.fetch(\"DATABASE_SERVICE_NAME\",\"\").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV[\"POSTGRESQL_MAX_CONNECTIONS\"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV[\"#{db_service}_SERVICE_HOST\"] %> port: <%= ENV[\"#{db_service}_SERVICE_PORT\"] %> database: <%= ENV[\"POSTGRESQL_DATABASE\"] %>",
"ls -1",
"app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor",
"git init",
"git add .",
"git commit -m \"initial commit\"",
"git remote add origin [email protected]:<namespace/repository-name>.git",
"git push",
"oc new-project rails-app --description=\"My Rails application\" --display-name=\"Rails Application\"",
"oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password",
"-e POSTGRESQL_ADMIN_PASSWORD=admin_pw",
"oc get pods --watch",
"oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql",
"oc get dc rails-app -o json",
"env\": [ { \"name\": \"POSTGRESQL_USER\", \"value\": \"username\" }, { \"name\": \"POSTGRESQL_PASSWORD\", \"value\": \"password\" }, { \"name\": \"POSTGRESQL_DATABASE\", \"value\": \"db_name\" }, { \"name\": \"DATABASE_SERVICE_NAME\", \"value\": \"postgresql\" } ],",
"oc logs -f build/rails-app-1",
"oc get pods",
"oc rsh <frontend_pod_id>",
"RAILS_ENV=production bundle exec rake db:migrate",
"oc expose service rails-app --hostname=www.example.com"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/building_applications/creating-applications
|
Preface
|
Preface The Red Hat build of Cryostat is a container-native implementation of JDK Flight Recorder (JFR) that you can use to securely monitor the Java Virtual Machine (JVM) performance in workloads that run on an OpenShift Container Platform cluster. You can use Cryostat 2.4 to start, stop, retrieve, archive, import, and export JFR data for JVMs inside your containerized applications by using a web console or an HTTP API. Depending on your use case, you can store and analyze your recordings directly on your Red Hat OpenShift cluster by using the built-in tools that Cryostat provides or you can export recordings to an external monitoring application to perform a more in-depth analysis of your recorded data. Important Red Hat build of Cryostat is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_the_red_hat_build_of_cryostat_operator_to_configure_cryostat/preface-cryostat
|
Chapter 3. Performing a cluster update
|
Chapter 3. Performing a cluster update 3.1. Updating a cluster using the CLI You can perform minor version and patch updates on an OpenShift Container Platform cluster by using the OpenShift CLI ( oc ). 3.1.1. Prerequisites Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . Have a recent etcd backup in case your update fails and you must restore your cluster to a state. Have a recent Container Storage Interface (CSI) volume snapshot in case you need to restore persistent volumes due to a pod failure. Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. See Updating installed Operators for more information on how to check compatibility and, if necessary, update the installed Operators. Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . Ensure that you address all Upgradeable=False conditions so the cluster allows an update to the minor version. An alert displays at the top of the Cluster Settings page when you have one or more cluster Operators that cannot be updated. You can still update to the available patch update for the minor release you are currently on. If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. Important When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a version is not supported. If your update is failing to complete, contact Red Hat support. Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster. Additional resources Support policy for unmanaged Operators 3.1.2. Pausing a MachineHealthCheck resource During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster. Prerequisites Install the OpenShift CLI ( oc ). 
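If the cluster has several MachineHealthCheck resources, you can pause them all in one pass rather than annotating each one by hand. The following is a minimal sketch, not part of the official procedure; it assumes the oc client from the prerequisite above is already logged in with sufficient privileges, and it simply wraps the individual commands shown in the procedure that follows.

```bash
#!/usr/bin/env bash
# Sketch: pause every MachineHealthCheck in openshift-machine-api before an update.
set -euo pipefail

for mhc in $(oc get machinehealthcheck -n openshift-machine-api \
    -o jsonpath='{.items[*].metadata.name}'); do
  echo "Pausing MachineHealthCheck ${mhc}"
  oc -n openshift-machine-api annotate machinehealthcheck "${mhc}" \
    cluster.x-k8s.io/paused="" --overwrite
done
```

Remember to remove the annotation again after the update, as described later in this section, so that unhealthy nodes are remediated normally.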
Procedure To list all the available MachineHealthCheck resources that you want to pause, run the following command: USD oc get machinehealthcheck -n openshift-machine-api To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to the MachineHealthCheck resource. Run the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused="" The annotated MachineHealthCheck resource resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: "" spec: selector: matchLabels: role: worker unhealthyConditions: - type: "Ready" status: "Unknown" timeout: "300s" - type: "Ready" status: "False" timeout: "300s" maxUnhealthy: "40%" status: currentHealthy: 5 expectedMachines: 5 Important Resume the machine health checks after updating the cluster. To resume the check, remove the pause annotation from the MachineHealthCheck resource by running the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused- 3.1.3. About updating single node OpenShift Container Platform You can update, or upgrade, a single-node OpenShift Container Platform cluster by using either the console or CLI. However, note the following limitations: The prerequisite to pause the MachineHealthCheck resources is not required because there is no other node to perform the health check. Restoring a single-node OpenShift Container Platform cluster using an etcd backup is not officially supported. However, it is good practice to perform the etcd backup in case your update fails. If your control plane is healthy, you might be able to restore your cluster to a state by using the backup. Updating a single-node OpenShift Container Platform cluster requires downtime and can include an automatic reboot. The amount of downtime depends on the update payload, as described in the following scenarios: If the update payload contains an operating system update, which requires a reboot, the downtime is significant and impacts cluster management and user workloads. If the update contains machine configuration changes that do not require a reboot, the downtime is less, and the impact on the cluster management and user workloads is lessened. In this case, the node draining step is skipped with single-node OpenShift Container Platform because there is no other node in the cluster to reschedule the workloads to. If the update payload does not contain an operating system update or machine configuration changes, a short API outage occurs and resolves quickly. Important There are conditions, such as bugs in an updated package, that can cause the single node to not restart after a reboot. In this case, the update does not rollback automatically. Additional resources For information on which machine configuration changes require a reboot, see the note in About the Machine Config Operator . 3.1.4. Updating a cluster by using the CLI You can use the OpenShift CLI ( oc ) to review and request cluster updates. You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal. Prerequisites Install the OpenShift CLI ( oc ) that matches the version for your updated version. Log in to the cluster as user with cluster-admin privileges. Pause all MachineHealthCheck resources. 
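A quick way to satisfy the prerequisite that your oc client matches the release you are updating to is to compare the client and cluster versions before you start. This is a sketch only; the exact fields printed by oc version can vary between releases, so the plain form is the portable option.

```bash
# Compare the oc client version with the current cluster version.
oc version          # prints both the Client Version and the cluster (Server) version

# Capture just the cluster's desired version for scripting:
oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}'
```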
Procedure View the available updates and note the version number of the update that you want to apply: USD oc adm upgrade Example output Cluster version is 4.13.10 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.13 (available channels: candidate-4.13, candidate-4.14, fast-4.13, stable-4.13) Recommended updates: VERSION IMAGE 4.13.14 quay.io/openshift-release-dev/ocp-release@sha256:406fcc160c097f61080412afcfa7fd65284ac8741ac7ad5b480e304aba73674b 4.13.13 quay.io/openshift-release-dev/ocp-release@sha256:d62495768e335c79a215ba56771ff5ae97e3cbb2bf49ed8fb3f6cefabcdc0f17 4.13.12 quay.io/openshift-release-dev/ocp-release@sha256:73946971c03b43a0dc6f7b0946b26a177c2f3c9d37105441315b4e3359373a55 4.13.11 quay.io/openshift-release-dev/ocp-release@sha256:e1c2377fdae1d063aaddc753b99acf25972b6997ab9a0b7e80cfef627b9ef3dd Note If there are no available updates, updates that are supported but not recommended might still be available. See Updating along a conditional update path for more information. For details and information on how to perform a Control Plane Only update, please refer to the Preparing to perform a Control Plane Only update page, listed in the Additional resources section. Based on your organization requirements, set the appropriate update channel. For example, you can set your channel to stable-4.13 or fast-4.13 . For more information about channels, refer to Understanding update channels and releases listed in the Additional resources section. USD oc adm upgrade channel <channel> For example, to set the channel to stable-4.15 : USD oc adm upgrade channel stable-4.15 Important For production clusters, you must subscribe to a stable-* , eus-* , or fast-* channel. Note When you are ready to move to the minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time. If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the minor version is available in the path. Apply an update: To update to the latest version: USD oc adm upgrade --to-latest=true 1 To update to a specific version: USD oc adm upgrade --to=<version> 1 1 1 <version> is the update version that you obtained from the output of the oc adm upgrade command. Important When using oc adm upgrade --help , there is a listed option for --force . This is heavily discouraged , as using the --force option bypasses cluster-side guards, including release verification and precondition checks. Using --force does not guarantee a successful update. Bypassing guards put the cluster at risk. Review the status of the Cluster Version Operator: USD oc adm upgrade After the update completes, you can confirm that the cluster version has updated to the new version: USD oc adm upgrade Example output Cluster version is <version> Upstream is unset, so the cluster will use an appropriate default. Channel: stable-<version> (available channels: candidate-<version>, eus-<version>, fast-<version>, stable-<version>) No updates available. 
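Rather than re-running oc adm upgrade by hand, you can poll the ClusterVersion resource until the update finishes. The loop below is a rough sketch, assuming the standard ClusterVersion object named version; adjust the sleep interval and add your own timeout handling as needed.

```bash
# Wait until the ClusterVersion stops progressing, then print the final state.
while [ "$(oc get clusterversion version \
      -o jsonpath='{.status.conditions[?(@.type=="Progressing")].status}')" = "True" ]; do
  echo "Update still in progress: $(oc get clusterversion version \
      -o jsonpath='{.status.conditions[?(@.type=="Progressing")].message}')"
  sleep 60
done
oc adm upgrade
```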
You may force an update to a specific release image, but doing so might not be supported and might result in downtime or data loss. If you are updating your cluster to the minor version, such as version X.y to X.(y+1), it is recommended to confirm that your nodes are updated before deploying workloads that rely on a new feature: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.28.5 ip-10-0-170-223.ec2.internal Ready master 82m v1.28.5 ip-10-0-179-95.ec2.internal Ready worker 70m v1.28.5 ip-10-0-182-134.ec2.internal Ready worker 70m v1.28.5 ip-10-0-211-16.ec2.internal Ready master 82m v1.28.5 ip-10-0-250-100.ec2.internal Ready worker 69m v1.28.5 Additional resources Performing a Control Plane Only update Updating along a conditional update path Understanding update channels and releases 3.1.5. Updating along a conditional update path You can update along a recommended conditional update path using the web console or the OpenShift CLI ( oc ). When a conditional update is not recommended for your cluster, you can update along a conditional update path using the OpenShift CLI ( oc ) 4.10 or later. Procedure To view the description of the update when it is not recommended because a risk might apply, run the following command: USD oc adm upgrade --include-not-recommended If the cluster administrator evaluates the potential known risks and decides it is acceptable for the current cluster, then the administrator can waive the safety guards and proceed the update by running the following command: USD oc adm upgrade --allow-not-recommended --to <version> <.> <.> <version> is the supported but not recommended update version that you obtained from the output of the command. Additional resources Understanding update channels and releases 3.1.6. Changing the update server by using the CLI Changing the update server is optional. If you have an OpenShift Update Service (OSUS) installed and configured locally, you must set the URL for the server as the upstream to use the local server during updates. The default value for upstream is https://api.openshift.com/api/upgrades_info/v1/graph . Procedure Change the upstream parameter value in the cluster version: USD oc patch clusterversion/version --patch '{"spec":{"upstream":"<update-server-url>"}}' --type=merge The <update-server-url> variable specifies the URL for the update server. Example output clusterversion.config.openshift.io/version patched 3.2. Updating a cluster using the web console You can perform minor version and patch updates on an OpenShift Container Platform cluster by using the web console. Note Use the web console or oc adm upgrade channel <channel> to change the update channel. You can follow the steps in Updating a cluster using the CLI to complete the update after you change to a 4.15 channel. 3.2.1. Before updating the OpenShift Container Platform cluster Before updating, consider the following: You have recently backed up etcd. In PodDisruptionBudget , if minAvailable is set to 1 , the nodes are drained to apply pending machine configs that might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. You might need to update the cloud provider resources for the new release if your cluster uses manually maintained credentials. You must review administrator acknowledgement requests, take any recommended actions, and provide the acknowledgement when you are ready. 
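Administrator acknowledgement requests can also be reviewed and provided from a terminal. The commands below are a sketch: the admin-gates and admin-acks config maps are the usual mechanism, but the exact gate key (shown here only as a placeholder) depends on the release you are updating to, so copy the key reported for your cluster rather than the example value.

```bash
# List any outstanding administrator acknowledgement gates for this cluster.
oc -n openshift-config-managed get configmap admin-gates -o yaml

# Provide an acknowledgement; replace the placeholder key with the gate reported above.
oc -n openshift-config patch configmap admin-acks --type merge \
  -p '{"data":{"ack-4.y-example-gate":"true"}}'
```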
You can perform a partial update by updating the worker or custom pool nodes to accommodate the time it takes to update. You can pause and resume within the progress bar of each pool. Important When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a version is not supported. If your update is failing to complete, contact Red Hat support. Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster. 3.2.2. Changing the update server by using the web console Changing the update server is optional. If you have an OpenShift Update Service (OSUS) installed and configured locally, you must set the URL for the server as the upstream to use the local server during updates. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Navigate to Administration Cluster Settings , click version . Click the YAML tab and then edit the upstream parameter value: Example output ... spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1 ... 1 The <update-server-url> variable specifies the URL for the update server. The default upstream is https://api.openshift.com/api/upgrades_info/v1/graph . Click Save . Additional resources Understanding update channels and releases 3.2.3. Pausing a MachineHealthCheck resource by using the web console During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Compute MachineHealthChecks . To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to each MachineHealthCheck resource. For example, to add the annotation to the machine-api-termination-handler resource, complete the following steps: Click the Options menu to the machine-api-termination-handler and click Edit annotations . In the Edit annotations dialog, click Add more . In the Key and Value fields, add cluster.x-k8s.io/paused and "" values, respectively, and click Save . 3.2.4. Updating a cluster by using the web console If updates are available, you can update your cluster from the web console. You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal. Prerequisites Have access to the web console as a user with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Pause all MachineHealthCheck resources. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. 
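To check which OLM-managed Operators are installed, and at which versions, before you update, you can list the Subscriptions and ClusterServiceVersions across all namespaces. This is a sketch for a quick inventory only; compatibility with the target OpenShift Container Platform version still has to be confirmed against each Operator's documentation or the Operator Update Information Checker.

```bash
# Inventory of OLM-managed Operators: subscription channel and installed CSV.
oc get subscriptions.operators.coreos.com -A \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CHANNEL:.spec.channel,CSV:.status.installedCSV'

# Installed ClusterServiceVersions with their versions and phases.
oc get clusterserviceversions -A \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,VERSION:.spec.version,PHASE:.status.phase'
```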
See "Updating installed Operators" in the "Additional resources" section for more information on how to check compatibility and, if necessary, update the installed Operators. Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. Procedure From the web console, click Administration Cluster Settings and review the contents of the Details tab. For production clusters, ensure that the Channel is set to the correct channel for the version that you want to update to, such as stable-4.15 . Important For production clusters, you must subscribe to a stable-* , eus-* or fast-* channel. Note When you are ready to move to the minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time. If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the minor version is available in the path. If the Update status is not Updates available , you cannot update your cluster. Select channel indicates the cluster version that your cluster is running or is updating to. Select a version to update to, and click Save . The Input channel Update status changes to Update to <product-version> in progress , and you can review the progress of the cluster update by watching the progress bars for the Operators and nodes. Note If you are updating your cluster to the minor version, for example from version 4.10 to 4.11, confirm that your nodes are updated before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the Cluster Settings page. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel. If updates are available, continue to perform updates in the current channel until you can no longer update. If no updates are available, change the Channel to the stable-* , eus-* or fast-* channel for the minor version, and update to the version that you want in that channel. You might need to perform several intermediate updates until you reach the version that you want. Additional resources Updating installed Operators 3.2.5. Viewing conditional updates in the web console You can view and assess the risks associated with particular updates with conditional updates. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Pause all MachineHealthCheck resources. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. 
Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. See "Updating installed Operators" in the "Additional resources" section for more information on how to check compatibility and, if necessary, update the installed Operators. Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing an advanced update strategy, such as a canary rollout, an EUS update, or a control-plane update. Procedure From the web console, click Administration Cluster settings page and review the contents of the Details tab. You can enable Include supported but not recommended versions in the Select new version dropdown of the Update cluster modal to populate the dropdown list with conditional updates. Note If a Supported but not recommended version is selected, more information is provided with potential issues with the version. Review the notification detailing the potential risks to updating. Additional resources Updating installed Operators Update recommendations and Conditional Updates 3.2.6. Performing a canary rollout update In some specific use cases, you might want a more controlled update process where you do not want specific nodes updated concurrently with the rest of the cluster. These use cases include, but are not limited to: You have mission-critical applications that you do not want unavailable during the update. You can slowly test the applications on your nodes in small batches after the update. You have a small maintenance window that does not allow the time for all nodes to be updated, or you have multiple maintenance windows. The rolling update process is not a typical update workflow. With larger clusters, it can be a time-consuming process that requires you execute multiple commands. This complexity can result in errors that can affect the entire cluster. It is recommended that you carefully consider whether your organization wants to use a rolling update and carefully plan the implementation of the process before you start. The rolling update process described in this topic involves: Creating one or more custom machine config pools (MCPs). Labeling each node that you do not want to update immediately to move those nodes to the custom MCPs. Pausing those custom MCPs, which prevents updates to those nodes. Performing the cluster update. Unpausing one custom MCP, which triggers the update on those nodes. Testing the applications on those nodes to make sure the applications work as expected on those newly-updated nodes. Optionally removing the custom labels from the remaining nodes in small batches and testing the applications on those nodes. Note Pausing an MCP should be done with careful consideration and for short periods of time only. If you want to use the canary rollout update process, see Performing a canary rollout update . 3.2.7. About updating single node OpenShift Container Platform You can update, or upgrade, a single-node OpenShift Container Platform cluster by using either the console or CLI. However, note the following limitations: The prerequisite to pause the MachineHealthCheck resources is not required because there is no other node to perform the health check. Restoring a single-node OpenShift Container Platform cluster using an etcd backup is not officially supported. 
However, it is good practice to perform the etcd backup in case your update fails. If your control plane is healthy, you might be able to restore your cluster to a state by using the backup. Updating a single-node OpenShift Container Platform cluster requires downtime and can include an automatic reboot. The amount of downtime depends on the update payload, as described in the following scenarios: If the update payload contains an operating system update, which requires a reboot, the downtime is significant and impacts cluster management and user workloads. If the update contains machine configuration changes that do not require a reboot, the downtime is less, and the impact on the cluster management and user workloads is lessened. In this case, the node draining step is skipped with single-node OpenShift Container Platform because there is no other node in the cluster to reschedule the workloads to. If the update payload does not contain an operating system update or machine configuration changes, a short API outage occurs and resolves quickly. Important There are conditions, such as bugs in an updated package, that can cause the single node to not restart after a reboot. In this case, the update does not rollback automatically. Additional resources About the Machine Config Operator . 3.3. Performing a Control Plane Only update Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform <4.y> to <4.y+1>, and then to <4.y+2>. You cannot update from OpenShift Container Platform <4.y> to <4.y+2> directly. However, administrators who want to update between two even-numbered minor versions can do so incurring only a single reboot of non-control plane hosts. Important This update was previously known as an EUS-to-EUS update and is now referred to as a Control Plane Only update. These updates are only viable between even-numbered minor versions of OpenShift Container Platform. There are a number of caveats to consider when attempting a Control Plane Only update. Control Plane Only updates are only offered after updates between all versions involved have been made available in stable channels. If you encounter issues during or after updating to the odd-numbered minor version but before updating to the even-numbered version, then remediation of those issues may require that non-control plane hosts complete the update to the odd-numbered version before moving forward. You can do a partial update by updating the worker or custom pool nodes to accommodate the time it takes for maintenance. You can complete the update process during multiple maintenance windows by pausing at intermediate steps. However, plan to complete the entire update within 60 days. This is critical to ensure that normal cluster automation processes are completed. Until the machine config pools are unpaused and the update is complete, some features and bugs fixes in <4.y+1> and <4.y+2> of OpenShift Container Platform are not available. All the clusters might update using EUS channels for a conventional update without pools paused, but only clusters with non control-plane MachineConfigPools objects can do Control Plane Only updates with pools paused. 3.3.1. Performing a Control Plane Only update The following procedure pauses all non-master machine config pools and performs updates from OpenShift Container Platform <4.y> to <4.y+1> to <4.y+2>, then unpauses the previously paused machine config pools. 
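In outline, the CLI form of this procedure amounts to the sequence sketched below. This is only a summary, under the assumption of a single worker pool and placeholder version numbers; the full steps, including OLM Operator updates and verification between each hop, are described in the subsections that follow.

```bash
# High-level sketch of a Control Plane Only update from <4.y> to <4.y+2>.
oc adm upgrade channel eus-4.y+2                                      # switch to the EUS channel
oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}'   # pause worker updates
oc adm upgrade --to-latest                                            # control plane to <4.y+1>
# ...verify, update OLM Operators as needed, then:
oc adm upgrade --to-latest                                            # control plane to <4.y+2>
oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}'  # workers reboot once to <4.y+2>
```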
Following this procedure reduces the total update duration and the number of times worker nodes are restarted. Prerequisites Review the release notes for OpenShift Container Platform <4.y+1> and <4.y+2> Review the release notes and product lifecycles for any layered products and Operator Lifecycle Manager (OLM) Operators. Some may require updates either before or during a Control Plane Only update. Ensure that you are familiar with version-specific prerequisites, such as the removal of deprecated APIs, that are required prior to updating from OpenShift Container Platform <4.y+1> to <4.y+2>. 3.3.1.1. Performing a Control Plane Only update using the web console Prerequisites Verify that machine config pools are unpaused. Have access to the web console as a user with admin privileges. Procedure Using the Administrator perspective on the web console, update any Operator Lifecycle Manager (OLM) Operators to the versions that are compatible with your intended updated version. You can find more information on how to perform this action in "Updating installed Operators"; see "Additional resources". Verify that all machine config pools display a status of Up to date and that no machine config pool displays a status of UPDATING . To view the status of all machine config pools, click Compute MachineConfigPools and review the contents of the Update status column. Note If your machine config pools have an Updating status, please wait for this status to change to Up to date . This process could take several minutes. Set your channel to eus-<4.y+2> . To set your channel, click Administration Cluster Settings Channel . You can edit your channel by clicking on the current hyperlinked channel. Pause all worker machine pools except for the master pool. You can perform this action on the MachineConfigPools tab under the Compute page. Select the vertical ellipses to the machine config pool you'd like to pause and click Pause updates . Update to version <4.y+1> and complete up to the Save step. You can find more information on how to perform these actions in "Updating a cluster by using the web console"; see "Additional resources". Ensure that the <4.y+1> updates are complete by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. If necessary, update your OLM Operators by using the Administrator perspective on the web console. You can find more information on how to perform these actions in "Updating installed Operators"; see "Additional resources". Update to version <4.y+2> and complete up to the Save step. You can find more information on how to perform these actions in "Updating a cluster by using the web console"; see "Additional resources". Ensure that the <4.y+2> update is complete by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. Unpause all previously paused machine config pools. You can perform this action on the MachineConfigPools tab under the Compute page. Select the vertical ellipses to the machine config pool you'd like to unpause and click Unpause updates . Important If pools are paused, the cluster is not permitted to upgrade to any future minor versions, and some maintenance tasks are inhibited. This puts the cluster at risk for future degradation. Verify that your previously paused pools are updated and that your cluster has completed the update to version <4.y+2>. 
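If you prefer to confirm the Last completed version from a terminal instead of the Cluster Settings page, the ClusterVersion history records each update. The jsonpath below is a sketch; the first history entry is the most recent one.

```bash
# Most recent update history entry: state and version.
oc get clusterversion version \
  -o jsonpath='{.status.history[0].state}{" "}{.status.history[0].version}{"\n"}'
```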
You can verify that your pools have updated on the MachineConfigPools tab under the Compute page by confirming that the Update status has a value of Up to date . Important When you update a cluster that contains Red Hat Enterprise Linux (RHEL) compute machines, those machines temporarily become unavailable during the update process. You must run the upgrade playbook against each RHEL machine as it enters the NotReady state for the cluster to finish updating. For more information, see "Updating a cluster that includes RHEL compute machines" in the additional resources section. You can verify that your cluster has completed the update by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. Additional resources Updating installed Operators Updating a cluster by using the web console Updating a cluster that includes RHEL compute machines 3.3.1.2. Performing a Control Plane Only update using the CLI Prerequisites Verify that machine config pools are unpaused. Update the OpenShift CLI ( oc ) to the target version before each update. Important It is highly discouraged to skip this prerequisite. If the OpenShift CLI ( oc ) is not updated to the target version before your update, unexpected issues may occur. Procedure Using the Administrator perspective on the web console, update any Operator Lifecycle Manager (OLM) Operators to the versions that are compatible with your intended updated version. You can find more information on how to perform this action in "Updating installed Operators"; see "Additional resources". Verify that all machine config pools display a status of UPDATED and that no machine config pool displays a status of UPDATING . To view the status of all machine config pools, run the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False Your current version is <4.y>, and your intended version to update is <4.y+2>. Change to the eus-<4.y+2> channel by running the following command: USD oc adm upgrade channel eus-<4.y+2> Note If you receive an error message indicating that eus-<4.y+2> is not one of the available channels, this indicates that Red Hat is still rolling out EUS version updates. This rollout process generally takes 45-90 days starting at the GA date. Pause all worker machine pools except for the master pool by running the following command: USD oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}' Note You cannot pause the master pool. Update to the latest version by running the following command: USD oc adm upgrade --to-latest Example output Updating to latest version <4.y+1.z> Review the cluster version to ensure that the updates are complete by running the following command: USD oc adm upgrade Example output Cluster version is <4.y+1.z> ... Update to version <4.y+2> by running the following command: USD oc adm upgrade --to-latest Retrieve the cluster version to ensure that the <4.y+2> updates are complete by running the following command: USD oc adm upgrade Example output Cluster version is <4.y+2.z> ... 
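If your cluster has custom worker pools in addition to worker, each non-master pool must be paused in the pause step above, not just worker. The loop below is a sketch of that variation; it assumes that every pool other than master should be paused, which may not hold if you intentionally keep some pools updating.

```bash
# Pause every machine config pool except the control plane pool.
for pool in $(oc get mcp -o jsonpath='{.items[*].metadata.name}'); do
  if [ "${pool}" != "master" ]; then
    oc patch mcp/"${pool}" --type merge --patch '{"spec":{"paused":true}}'
  fi
done
```

The same loop, with paused set to false, applies when you unpause the pools later.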
To update your worker nodes to <4.y+2>, unpause all previously paused machine config pools by running the following command: USD oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}' Important If pools are not unpaused, the cluster is not permitted to update to any future minor versions, and some maintenance tasks are inhibited. This puts the cluster at risk for future degradation. Verify that your previously paused pools are updated and that the update to version <4.y+2> is complete by running the following command: USD oc get mcp Important When you update a cluster that contains Red Hat Enterprise Linux (RHEL) compute machines, those machines temporarily become unavailable during the update process. You must run the upgrade playbook against each RHEL machine as it enters the NotReady state for the cluster to finish updating. For more information, see "Updating a cluster that includes RHEL compute machines" in the additional resources section. Example output NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False Additional resources Updating installed Operators Updating a cluster that includes RHEL compute machines 3.3.1.3. Performing a Control Plane Only update for layered products and Operators installed through Operator Lifecycle Manager In addition to the Control Plane Only update steps mentioned for the web console and CLI, there are additional steps to consider when performing Control Plane Only updates for clusters with the following: Layered products Operators installed through Operator Lifecycle Manager (OLM) What is a layered product? Layered products refer to products that are made of multiple underlying products that are intended to be used together and cannot be broken into individual subscriptions. For examples of layered OpenShift Container Platform products, see Layered Offering On OpenShift . As you perform a Control Plane only update for the clusters of layered products and those of Operators that have been installed through OLM, you must complete the following: You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. See "Updating installed Operators" in the "Additional resources" section for more information on how to check compatibility and, if necessary, update the installed Operators. Confirm the cluster version compatibility between the current and intended Operator versions. You can verify which versions your OLM Operators are compatible with by using the Red Hat OpenShift Container Platform Operator Update Information Checker . As an example, here are the steps to perform a Control Plane Only update from <4.y> to <4.y+2> for OpenShift Data Foundation (ODF). This can be done through the CLI or web console. For information on how to update clusters through your desired interface, see Performing a Control Plane Only update using the web console and Performing a Control Plane Only update using the CLI in "Additional resources". Example workflow Pause the worker machine pools. Update OpenShift <4.y> OpenShift <4.y+1>. Update ODF <4.y> ODF <4.y+1>. Update OpenShift <4.y+1> OpenShift <4.y+2>. Update to ODF <4.y+2>. Unpause the worker machine pools. 
Note The update to ODF <4.y+2> can happen before or after worker machine pools have been unpaused. Additional resources Updating installed Operators Performing a Control Plane Only update using the web console Performing a Control Plane Only update using the CLI Preventing workload updates during a Control Plane Only update 3.4. Performing a canary rollout update A canary update is an update strategy where worker node updates are performed in discrete, sequential stages instead of updating all worker nodes at the same time. This strategy can be useful in the following scenarios: You want a more controlled rollout of worker node updates to ensure that mission-critical applications stay available during the whole update, even if the update process causes your applications to fail. You want to update a small subset of worker nodes, evaluate cluster and workload health over a period of time, and then update the remaining nodes. You want to fit worker node updates, which often require a host reboot, into smaller defined maintenance windows when it is not possible to take a large maintenance window to update the entire cluster at one time. In these scenarios, you can create multiple custom machine config pools (MCPs) to prevent certain worker nodes from updating when you update the cluster. After the rest of the cluster is updated, you can update those worker nodes in batches at appropriate times. 3.4.1. Example Canary update strategy The following example describes a canary update strategy where you have a cluster with 100 nodes with 10% excess capacity, you have maintenance windows that must not exceed 4 hours, and you know that it takes no longer than 8 minutes to drain and reboot a worker node. Note The values are an example only. The time it takes to drain a node might vary depending on factors such as workloads. Defining custom machine config pools In order to organize the worker node updates into separate stages, you can begin by defining the following MCPs: workerpool-canary with 10 nodes workerpool-A with 30 nodes workerpool-B with 30 nodes workerpool-C with 30 nodes Updating the canary worker pool During your first maintenance window, you pause the MCPs for workerpool-A , workerpool-B , and workerpool-C , and then initiate the cluster update. This updates components that run on top of OpenShift Container Platform and the 10 nodes that are part of the unpaused workerpool-canary MCP. The other three MCPs are not updated because they were paused. Determining whether to proceed with the remaining worker pool updates If for some reason you determine that your cluster or workload health was negatively affected by the workerpool-canary update, you then cordon and drain all nodes in that pool while still maintaining sufficient capacity until you have diagnosed and resolved the problem. When everything is working as expected, you evaluate the cluster and workload health before deciding to unpause, and thus update, workerpool-A , workerpool-B , and workerpool-C in succession during each additional maintenance window. Managing worker node updates using custom MCPs provides flexibility, however it can be a time-consuming process that requires you execute multiple commands. This complexity can result in errors that might affect the entire cluster. It is recommended that you carefully consider your organizational needs and carefully plan the implementation of the process before you start. 
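If the canary pool does misbehave, cordoning and draining every node in that pool can be scripted with the pool's role label. The snippet below is a sketch that uses the workerpool-canary label from the example above; drain options such as the grace period or timeout depend on your workloads.

```bash
# Cordon and drain every node carrying the canary pool label.
for node in $(oc get nodes -l node-role.kubernetes.io/workerpool-canary \
    -o jsonpath='{.items[*].metadata.name}'); do
  oc adm cordon "${node}"
  oc adm drain "${node}" --ignore-daemonsets --delete-emptydir-data --timeout=300s
done
```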
Important Pausing a machine config pool prevents the Machine Config Operator from applying any configuration changes on the associated nodes. Pausing an MCP also prevents any automatically rotated certificates from being pushed to the associated nodes, including the automatic CA rotation of the kube-apiserver-to-kubelet-signer CA certificate. If the MCP is paused when the kube-apiserver-to-kubelet-signer CA certificate expires and the MCO attempts to automatically renew the certificate, the MCO cannot push the newly rotated certificates to those nodes. This causes failure in multiple oc commands, including oc debug , oc logs , oc exec , and oc attach . You receive alerts in the Alerting UI of the OpenShift Container Platform web console if an MCP is paused when the certificates are rotated. Pausing an MCP should be done with careful consideration about the kube-apiserver-to-kubelet-signer CA certificate expiration and for short periods of time only. Note It is not recommended to update the MCPs to different OpenShift Container Platform versions. For example, do not update one MCP from 4.y.10 to 4.y.11 and another to 4.y.12. This scenario has not been tested and might result in an undefined cluster state. 3.4.2. About the canary rollout update process and MCPs In OpenShift Container Platform, nodes are not considered individually. Instead, they are grouped into machine config pools (MCPs). By default, nodes in an OpenShift Container Platform cluster are grouped into two MCPs: one for the control plane nodes and one for the worker nodes. An OpenShift Container Platform update affects all MCPs concurrently. During the update, the Machine Config Operator (MCO) drains and cordons all nodes within an MCP up to the specified maxUnavailable number of nodes, if a max number is specified. By default, maxUnavailable is set to 1 . Draining and cordoning a node deschedules all pods on the node and marks the node as unschedulable. After the node is drained, the Machine Config Daemon applies a new machine configuration, which can include updating the operating system (OS). Updating the OS requires the host to reboot. Using custom machine config pools To prevent specific nodes from being updated, you can create custom MCPs. Because the MCO does not update nodes within paused MCPs, you can pause the MCPs containing nodes that you do not want to update before initiating a cluster update. Using one or more custom MCPs can give you more control over the sequence in which you update your worker nodes. For example, after you update the nodes in the first MCP, you can verify the application compatibility and then update the rest of the nodes gradually to the new version. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. Note To ensure the stability of the control plane, creating a custom MCP from the control plane nodes is not supported. The Machine Config Operator (MCO) ignores any custom MCP created for the control plane nodes. Considerations when using custom machine config pools Give careful consideration to the number of MCPs that you create and the number of nodes in each MCP, based on your workload deployment topology. For example, if you must fit updates into specific maintenance windows, you must know how many nodes OpenShift Container Platform can update within a given window. 
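As a rough planning aid, you can estimate how many nodes fit in a window from the per-node drain-and-reboot time and the pool's maxUnavailable value. The figures below simply restate the example numbers used earlier (a 4-hour window and about 8 minutes per node); they are not a guarantee, because drain time varies with workloads.

```bash
# Back-of-the-envelope estimate of nodes that can update within one window.
window_minutes=240      # 4-hour maintenance window
minutes_per_node=8      # observed drain + reboot time per node
max_unavailable=1       # maxUnavailable for the pool (nodes updated in parallel)

echo $(( window_minutes / minutes_per_node * max_unavailable ))   # prints 30
```

With these example figures, about 30 nodes fit in a single window, which matches the 30-node pools used in the example strategy.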
This number is dependent on your unique cluster and workload characteristics. You must also consider how much extra capacity is available in your cluster to determine the number of custom MCPs and the amount of nodes within each MCP. In a case where your applications fail to work as expected on newly updated nodes, you can cordon and drain those nodes in the pool, which moves the application pods to other nodes. However, you must determine whether the available nodes in the remaining MCPs can provide sufficient quality-of-service (QoS) for your applications. Note You can use this update process with all documented OpenShift Container Platform update processes. However, the process does not work with Red Hat Enterprise Linux (RHEL) machines, which are updated using Ansible playbooks. 3.4.3. About performing a canary rollout update The following steps outline the high-level workflow of the canary rollout update process: Create custom machine config pools (MCP) based on the worker pool. Note You can change the maxUnavailable setting in an MCP to specify the percentage or the number of machines that can be updating at any given time. The default is 1 . Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. Add a node selector to the custom MCPs. For each node that you do not want to update simultaneously with the rest of the cluster, add a matching label to the nodes. This label associates the node to the MCP. Important Do not remove the default worker label from the nodes. The nodes must have a role label to function properly in the cluster. Pause the MCPs you do not want to update as part of the update process. Perform the cluster update. The update process updates the MCPs that are not paused, including the control plane nodes. Test your applications on the updated nodes to ensure they are working as expected. Unpause one of the remaining MCPs, wait for the nodes in that pool to finish updating, and test the applications on those nodes. Repeat this process until all worker nodes are updated. Optional: Remove the custom label from updated nodes and delete the custom MCPs. 3.4.4. Creating machine config pools to perform a canary rollout update To perform a canary rollout update, you must first create one or more custom machine config pools (MCP). 
Procedure List the worker nodes in your cluster by running the following command: USD oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{"\n"}{end}' nodes Example output ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm For each node that you want to delay, add a custom label to the node by running the following command: USD oc label node <node_name> node-role.kubernetes.io/<custom_label>= For example: USD oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary= Example output node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled Create the new MCP: Create an MCP YAML file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] 2 } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: "" 3 1 Specify a name for the MCP. 2 Specify the worker and custom MCP name. 3 Specify the custom label you added to the nodes that you want in this pool. Create the MachineConfigPool object by running the following command: USD oc create -f <file_name> Example output machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created View the list of MCPs in the cluster and their current state by running the following command: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m The new machine config pool, workerpool-canary , is created and the number of nodes to which you added the custom label are shown in the machine counts. The worker MCP machine counts are reduced by the same number. It can take several minutes to update the machine counts. In this example, one node was moved from the worker MCP to the workerpool-canary MCP. 3.4.5. Managing machine configuration inheritance for a worker pool canary You can configure a machine config pool (MCP) canary to inherit any MachineConfig assigned to an existing MCP. This configuration is useful when you want to use an MCP canary to test as you update nodes one at a time for an existing MCP. Prerequisites You have created one or more MCPs. Procedure Create a secondary MCP as described in the following two steps: Save the following configuration file as machineConfigPool.yaml . Example machineConfigPool YAML apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf] } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf: "" # ... Create the new machine config pool by running the following command: USD oc create -f machineConfigPool.yaml Example output machineconfigpool.machineconfiguration.openshift.io/worker-perf created Add some machines to the secondary MCP. 
The following example labels the worker nodes worker-a , worker-b , and worker-c to the MCP worker-perf : USD oc label node worker-a node-role.kubernetes.io/worker-perf='' USD oc label node worker-b node-role.kubernetes.io/worker-perf='' USD oc label node worker-c node-role.kubernetes.io/worker-perf='' Create a new MachineConfig for the MCP worker-perf as described in the following two steps: Save the following MachineConfig example as a file called new-machineconfig.yaml : Example MachineConfig YAML apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker-perf name: 06-kdump-enable-worker-perf spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M # ... Apply the MachineConfig by running the following command: USD oc create -f new-machineconfig.yaml Create the new canary MCP and add machines from the MCP you created in the steps. The following example creates an MCP called worker-perf-canary , and adds machines from the worker-perf MCP that you previosuly created. Label the canary worker node worker-a by running the following command: USD oc label node worker-a node-role.kubernetes.io/worker-perf-canary='' Remove the canary worker node worker-a from the original MCP by running the following command: USD oc label node worker-a node-role.kubernetes.io/worker-perf- Save the following file as machineConfigPool-Canary.yaml . Example machineConfigPool-Canary.yaml file apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf-canary spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf,worker-perf-canary] 1 } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf-canary: "" 1 Optional value. This example includes worker-perf-canary as an additional value. You can use a value in this way to configure members of an additional MachineConfig . Create the new worker-perf-canary by running the following command: USD oc create -f machineConfigPool-Canary.yaml Example output machineconfigpool.machineconfiguration.openshift.io/worker-perf-canary created Check if the MachineConfig is inherited in worker-perf-canary . Verify that no MCP is degraded by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2bf1379b39e22bae858ea1a3ff54b2ac True False False 3 3 3 0 5d16h worker rendered-worker-b9576d51e030413cfab12eb5b9841f34 True False False 0 0 0 0 5d16h worker-perf rendered-worker-perf-b98a1f62485fa702c4329d17d9364f6a True False False 2 2 2 0 56m worker-perf-canary rendered-worker-perf-canary-b98a1f62485fa702c4329d17d9364f6a True False False 1 1 1 0 44m Verify that the machines are inherited from worker-perf into worker-perf-canary . USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ... worker-a Ready worker,worker-perf-canary 5d15h v1.27.13+e709aa5 worker-b Ready worker,worker-perf 5d15h v1.27.13+e709aa5 worker-c Ready worker,worker-perf 5d15h v1.27.13+e709aa5 Verify that kdump service is enabled on worker-a by running the following command: USD systemctl status kdump.service Example output NAME STATUS ROLES AGE VERSION ... 
kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; preset: disabled) Active: active (exited) since Tue 2024-09-03 12:44:43 UTC; 10s ago Process: 4151139 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS) Main PID: 4151139 (code=exited, status=0/SUCCESS) Verify that the MCP has updated the crashkernel by running the following command: USD cat /proc/cmdline The output should include the updated crashkernel value, for example: Example output crashkernel=512M Optional: If you are satisfied with the upgrade, you can return worker-a to worker-perf . Return worker-a to worker-perf by running the following command: USD oc label node worker-a node-role.kubernetes.io/worker-perf='' Remove worker-a from the canary MCP by running the following command: USD oc label node worker-a node-role.kubernetes.io/worker-perf-canary- 3.4.6. Pausing the machine config pools After you create your custom machine config pools (MCPs), you then pause those MCPs. Pausing an MCP prevents the Machine Config Operator (MCO) from updating the nodes associated with that MCP. Procedure Patch the MCP that you want to pause by running the following command: USD oc patch mcp/<mcp_name> --patch '{"spec":{"paused":true}}' --type=merge For example: USD oc patch mcp/workerpool-canary --patch '{"spec":{"paused":true}}' --type=merge Example output machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched 3.4.7. Performing the cluster update After the machine config pools (MCP) enter a ready state, you can perform the cluster update. See one of the following update methods, as appropriate for your cluster: Updating a cluster using the web console Updating a cluster using the CLI After the cluster update is complete, you can begin to unpause the MCPs one at a time. 3.4.8. Unpausing the machine config pools After the OpenShift Container Platform update is complete, unpause your custom machine config pools (MCP) one at a time. Unpausing an MCP allows the Machine Config Operator (MCO) to update the nodes associated with that MCP. Procedure Patch the MCP that you want to unpause: USD oc patch mcp/<mcp_name> --patch '{"spec":{"paused":false}}' --type=merge For example: USD oc patch mcp/workerpool-canary --patch '{"spec":{"paused":false}}' --type=merge Example output machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched Optional: Check the progress of the update by using one of the following options: Check the progress from the web console by clicking Administration Cluster settings . Check the progress by running the following command: USD oc get machineconfigpools Test your applications on the updated nodes to ensure that they are working as expected. Repeat this process for any other paused MCPs, one at a time. Note In case of a failure, such as your applications not working on the updated nodes, you can cordon and drain the nodes in the pool, which moves the application pods to other nodes to help maintain the quality-of-service for the applications. This first MCP should be no larger than the excess capacity.
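If you have several paused custom MCPs, the unpause-and-verify cycle described above can also be scripted. The following is only a minimal sketch, not part of the documented procedure; it assumes the example pool name workerpool-canary used in this section and an oc session with cluster-admin privileges, and it relies on the Updated condition that the Machine Config Operator sets on each pool.
#!/bin/bash
# Unpause custom MCPs one at a time and wait for each pool to finish
# updating before moving on. Pool names below are assumptions; adjust
# the list to match the canary pools in your cluster.
set -euo pipefail

for mcp in workerpool-canary; do
  echo "Unpausing ${mcp}"
  oc patch "mcp/${mcp}" --type=merge --patch '{"spec":{"paused":false}}'

  # Block until the Machine Config Operator reports the pool as updated.
  oc wait "mcp/${mcp}" --for=condition=Updated=True --timeout=60m

  echo "${mcp} is updated; test your applications before continuing."
  read -r -p "Press Enter to continue with the next pool..."
done
Pausing works the same way with "paused": true, as shown in the procedure above.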
3.4.9. Moving a node to the original machine config pool After you update and verify applications on nodes in a custom machine config pool (MCP), move the nodes back to their original MCP by removing the custom label that you added to the nodes. Important A node must have a role to be properly functioning in the cluster. Procedure For each node in a custom MCP, remove the custom label from the node by running the following command: USD oc label node <node_name> node-role.kubernetes.io/<custom_label>- For example: USD oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary- Example output node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled The Machine Config Operator moves the nodes back to the original MCP and reconciles the node to the MCP configuration. To ensure that the node has been removed from the custom MCP, view the list of MCPs in the cluster and their current state by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m When the node is removed from the custom MCP and moved back to the original MCP, it can take several minutes to update the machine counts. In this example, one node was moved from the removed workerpool-canary MCP to the worker MCP. Optional: Delete the custom MCP by running the following command: USD oc delete mcp <mcp_name> 3.5. Updating a cluster that includes RHEL compute machines You can perform minor version and patch updates on an OpenShift Container Platform cluster. If your cluster contains Red Hat Enterprise Linux (RHEL) machines, you must take additional steps to update those machines. 3.5.1. Prerequisites Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . Have a recent etcd backup in case your update fails and you must restore your cluster to a previous state. Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. Additional resources Support policy for unmanaged Operators 3.5.2. Updating a cluster by using the web console If updates are available, you can update your cluster from the web console. You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal. Prerequisites Have access to the web console as a user with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Pause all MachineHealthCheck resources. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release.
Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next minor version during a cluster update. See "Updating installed Operators" in the "Additional resources" section for more information on how to check compatibility and, if necessary, update the installed Operators. Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. Procedure From the web console, click Administration Cluster Settings and review the contents of the Details tab. For production clusters, ensure that the Channel is set to the correct channel for the version that you want to update to, such as stable-4.15 . Important For production clusters, you must subscribe to a stable-* , eus-* or fast-* channel. Note When you are ready to move to the next minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time. If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the next minor version is available in the path. If the Update status is not Updates available , you cannot update your cluster. Select channel indicates the cluster version that your cluster is running or is updating to. Select a version to update to, and click Save . The Input channel Update status changes to Update to <product-version> in progress , and you can review the progress of the cluster update by watching the progress bars for the Operators and nodes. Note If you are updating your cluster to the next minor version, for example from version 4.10 to 4.11, confirm that your nodes are updated before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the Cluster Settings page. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel. If updates are available, continue to perform updates in the current channel until you can no longer update. If no updates are available, change the Channel to the stable-* , eus-* or fast-* channel for the next minor version, and update to the version that you want in that channel. You might need to perform several intermediate updates until you reach the version that you want. Important When you update a cluster that contains Red Hat Enterprise Linux (RHEL) worker machines, those workers temporarily become unavailable during the update process. You must run the update playbook against each RHEL machine as it enters the NotReady state for the cluster to finish updating.
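While the web console shows the overall progress, it can help to watch the RHEL workers from a terminal so that you know when to run the upgrade playbook against them. The following is a small sketch rather than a documented step; it uses the standard node-role.kubernetes.io/worker label and the clusterversion resource, and assumes an oc session with cluster-admin privileges.
# Watch worker nodes during the update; RHEL workers that enter the
# NotReady state are ready for the Ansible upgrade playbook.
oc get nodes -l node-role.kubernetes.io/worker -o wide --watch

# In a second terminal, follow the overall cluster update status.
oc get clusterversion version --watch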
Additional resources Updating installed Operators 3.5.3. Optional: Adding hooks to perform Ansible tasks on RHEL machines You can use hooks to run Ansible tasks on the RHEL compute machines during the OpenShift Container Platform update. 3.5.3.1. About Ansible hooks for updates When you update OpenShift Container Platform, you can run custom tasks on your Red Hat Enterprise Linux (RHEL) nodes during specific operations by using hooks . Hooks allow you to provide files that define tasks to run before or after specific update tasks. You can use hooks to validate or modify custom infrastructure when you update the RHEL compute nodes in your OpenShift Container Platform cluster. Because a failed hook causes the operation to fail, you must design hooks that are idempotent, meaning that they can run multiple times and provide the same results. Hooks have the following important limitations: - Hooks do not have a defined or versioned interface. They can use internal openshift-ansible variables, but it is possible that the variables will be modified or removed in future OpenShift Container Platform releases. - Hooks do not have error handling, so an error in a hook halts the update process. If you get an error, you must address the problem and then start the update again. 3.5.3.2. Configuring the Ansible inventory file to use hooks You define the hooks to use when you update the Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, in the hosts inventory file under the all:vars section. Prerequisites You have access to the machine that you used to add the RHEL compute machines to the cluster. You must have access to the hosts Ansible inventory file that defines your RHEL machines. Procedure After you design the hook, create a YAML file that defines the Ansible tasks for it. This file must be a set of tasks and cannot be a playbook, as shown in the following example: --- # Trivial example forcing an operator to acknowledge the start of an upgrade # file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: "Compute machine upgrade of {{ inventory_hostname }} is about to start" - name: require the user agree to start an upgrade pause: prompt: "Press Enter to start the compute machine update" Modify the hosts Ansible inventory file to specify the hook files. The hook files are specified as parameter values in the [all:vars] section, as shown: Example hook definitions in an inventory file To avoid ambiguity in the paths to the hook, use absolute paths instead of relative paths in their definitions. 3.5.3.3. Available hooks for RHEL compute machines You can use the following hooks when you update the Red Hat Enterprise Linux (RHEL) compute machines in your OpenShift Container Platform cluster. Hook name Description openshift_node_pre_cordon_hook Runs before each node is cordoned. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . openshift_node_pre_upgrade_hook Runs after each node is cordoned but before it is updated. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . openshift_node_pre_uncordon_hook Runs after each node is updated but before it is uncordoned. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . openshift_node_post_upgrade_hook Runs after each node is uncordoned. It is the last node update action.
This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . 3.5.4. Updating RHEL compute machines in your cluster After you update your cluster, you must update the Red Hat Enterprise Linux (RHEL) compute machines in your cluster. Important Red Hat Enterprise Linux (RHEL) versions 8.6 and later are supported for RHEL compute machines. You can also update your compute machines to another minor version of OpenShift Container Platform if you are using RHEL as the operating system. You do not need to exclude any RPM packages from RHEL when performing a minor version update. Important You cannot update RHEL 7 compute machines to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. Prerequisites You updated your cluster. Important Because the RHEL machines require assets that are generated by the cluster to complete the update process, you must update the cluster before you update the RHEL worker machines in it. You have access to the local machine that you used to add the RHEL compute machines to your cluster. You must have access to the hosts Ansible inventory file that defines your RHEL machines and the upgrade playbook. For updates to a minor version, the RPM repository is using the same version of OpenShift Container Platform that is running on your cluster. Procedure Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note By default, the base OS RHEL with "Minimal" installation option enables firewalld service. Having the firewalld service enabled on your host prevents you from accessing OpenShift Container Platform logs on the worker. Do not enable firewalld later if you wish to continue accessing OpenShift Container Platform logs on the worker. Enable the repositories that are required for OpenShift Container Platform 4.15: On the machine that you run the Ansible playbooks, update the required repositories: # subscription-manager repos --disable=rhocp-4.14-for-rhel-8-x86_64-rpms \ --enable=rhocp-4.15-for-rhel-8-x86_64-rpms Important As of OpenShift Container Platform 4.11, the Ansible playbooks are provided only for RHEL 8. If a RHEL 7 system was used as a host for the OpenShift Container Platform 4.10 Ansible playbooks, you must either update the Ansible host to RHEL 8, or create a new Ansible host on a RHEL 8 system and copy over the inventories from the old Ansible host. On the machine that you run the Ansible playbooks, update the Ansible package: # yum swap ansible ansible-core On the machine that you run the Ansible playbooks, update the required packages, including openshift-ansible : # yum update openshift-ansible openshift-clients On each RHEL compute node, update the required repositories: # subscription-manager repos --disable=rhocp-4.14-for-rhel-8-x86_64-rpms \ --enable=rhocp-4.15-for-rhel-8-x86_64-rpms Update a RHEL worker machine: Review your Ansible inventory file at /<path>/inventory/hosts and update its contents so that the RHEL 8 machines are listed in the [workers] section, as shown in the following example: Change to the openshift-ansible directory: USD cd /usr/share/ansible/openshift-ansible Run the upgrade playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. Note The upgrade playbook only updates the OpenShift Container Platform packages. It does not update the operating system packages. 
After you update all of the workers, confirm that all of your cluster nodes have updated to the new version: # oc get node Example output NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.28.5 mycluster-control-plane-1 Ready master 145m v1.28.5 mycluster-control-plane-2 Ready master 145m v1.28.5 mycluster-rhel8-0 Ready worker 98m v1.28.5 mycluster-rhel8-1 Ready worker 98m v1.28.5 mycluster-rhel8-2 Ready worker 98m v1.28.5 mycluster-rhel8-3 Ready worker 98m v1.28.5 Optional: Update the operating system packages that were not updated by the upgrade playbook. To update packages that are not on 4.15, use the following command: # yum update Note You do not need to exclude RPM packages if you are using the same RPM repository that you used when you installed 4.15. 3.6. Updating a cluster in a disconnected environment 3.6.1. About cluster updates in a disconnected environment A disconnected environment is one in which your cluster nodes cannot access the internet. For this reason, you must populate a registry with the installation images. If your registry host cannot access both the internet and the cluster, you can mirror the images to a file system that is disconnected from that environment and then bring that host or removable media across that gap. If the local container registry and the cluster are connected to the mirror registry's host, you can directly push the release images to the local registry. A single container image registry is sufficient to host mirrored images for several clusters in the disconnected network. 3.6.1.1. Mirroring OpenShift Container Platform images To update your cluster in a disconnected environment, your cluster environment must have access to a mirror registry that has the necessary images and resources for your targeted update. The following page has instructions for mirroring images onto a repository in your disconnected cluster: Mirroring OpenShift Container Platform images 3.6.1.2. Performing a cluster update in a disconnected environment You can use one of the following procedures to update a disconnected OpenShift Container Platform cluster: Updating a cluster in a disconnected environment using the OpenShift Update Service Updating a cluster in a disconnected environment without the OpenShift Update Service 3.6.1.3. Uninstalling the OpenShift Update Service from a cluster You can use the following procedure to uninstall a local copy of the OpenShift Update Service (OSUS) from your cluster: Uninstalling the OpenShift Update Service from a cluster 3.6.2. Mirroring OpenShift Container Platform images You must mirror container images onto a mirror registry before you can update a cluster in a disconnected environment. You can also use this procedure in connected environments to ensure your clusters run only approved container images that have satisfied your organizational controls for external content. Note Your mirror registry must be running at all times while the cluster is running. The following steps outline the high-level workflow on how to mirror images to a mirror registry: Install the OpenShift CLI ( oc ) on all devices being used to retrieve and push release images. Download the registry pull secret and add it to your cluster. If you use the oc-mirror OpenShift CLI ( oc ) plugin : Install the oc-mirror plugin on all devices being used to retrieve and push release images. Create an image set configuration file for the plugin to use when determining which release images to mirror. 
You can edit this configuration file later to change which release images that the plugin mirrors. Mirror your targeted release images directly to a mirror registry, or to removable media and then to a mirror registry. Configure your cluster to use the resources generated by the oc-mirror plugin. Repeat these steps as needed to update your mirror registry. If you use the oc adm release mirror command : Set environment variables that correspond to your environment and the release images you want to mirror. Mirror your targeted release images directly to a mirror registry, or to removable media and then to a mirror registry. Repeat these steps as needed to update your mirror registry. Compared to using the oc adm release mirror command, the oc-mirror plugin has the following advantages: It can mirror content other than container images. After mirroring images for the first time, it is easier to update images in the registry. The oc-mirror plugin provides an automated way to mirror the release payload from Quay, and also builds the latest graph data image for the OpenShift Update Service running in the disconnected environment. 3.6.2.1. Mirroring resources using the oc-mirror plugin You can use the oc-mirror OpenShift CLI ( oc ) plugin to mirror images to a mirror registry in your fully or partially disconnected environments. You must run oc-mirror from a system with internet connectivity to download the required images from the official Red Hat registries. See Mirroring images for a disconnected installation using the oc-mirror plugin for additional details. 3.6.2.2. Mirroring images using the oc adm release mirror command You can use the oc adm release mirror command to mirror images to your mirror registry. 3.6.2.2.1. Prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as Red Hat Quay. Note If you use Red Hat Quay, you must use version 3.6 or later with the oc-mirror plugin. If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support. If you do not have an existing solution for a container image registry, the mirror registry for Red Hat OpenShift is included in OpenShift Container Platform subscriptions. The mirror registry for Red Hat OpenShift is a small-scale container registry that you can use to mirror OpenShift Container Platform container images in disconnected installations and updates. 3.6.2.2.2. Preparing your mirror host Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location. 3.6.2.2.2.1. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . If you are updating a cluster in a disconnected environment, install the oc version that you plan to update to. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> Additional resources Installing and using CLI plugins 3.6.2.2.2.2. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install a cluster, all of the machines in the cluster will have write access to your mirror registry. Warning This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret. Prerequisites You configured a mirror registry to use in your disconnected environment. You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create.
The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Optional: If using the oc-mirror plugin, save the file as either ~/.docker/config.json or USDXDG_RUNTIME_DIR/containers/auth.json : If the .docker or USDXDG_RUNTIME_DIR/containers directories do not exist, create one by entering the following command: USD mkdir -p <directory_name> Where <directory_name> is either ~/.docker or USDXDG_RUNTIME_DIR/containers . Copy the pull secret to the appropriate directory by entering the following command: USD cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file> Where <directory_name> is either ~/.docker or USDXDG_RUNTIME_DIR/containers , and <auth_file> is either config.json or auth.json . Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 For <mirror_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 For <credentials> , specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.6.2.2.3. Mirroring images to a mirror registry Important To avoid excessive memory usage by the OpenShift Update Service application, you must mirror release images to a separate repository as described in the following procedure. Prerequisites You configured a mirror registry to use in your disconnected environment and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. If you use self-signed certificates, you have specified a Subject Alternative Name in the certificates. Procedure Use the Red Hat OpenShift Container Platform Update Graph visualizer and update planner to plan an update from one version to another. The OpenShift Update Graph provides channel graphs and a way to confirm that there is an update path between your current and intended cluster versions. Set the required environment variables: Export the release version: USD export OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to which you want to update, such as 4.5.4 . 
Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . If you are using the OpenShift Update Service, export an additional local repository name to contain the release images: USD LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>' For <local_release_images_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4-release-images . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Note If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Mirror the version images to the mirror registry. If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Mirror the images and configuration manifests to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Note This command also generates and saves the mirrored release image signature config map onto the removable media. Take the media to the disconnected environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. Use oc command-line interface (CLI) to log in to the cluster that you are updating. Apply the mirrored release image signature config map to the connected cluster: USD oc apply -f USD{REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1 1 For <image_signature_file> , specify the path and name of the file, for example, signature-sha256-81154f5c03294534.yaml . 
If you are using the OpenShift Update Service, mirror the release image to a separate repository: USD oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} If the local container registry and the cluster are connected to the mirror host, take the following actions: Directly push the release images to the local registry and apply the config map to the cluster by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --apply-release-image-signature Note If you include the --apply-release-image-signature option, do not create the config map for image signature verification. If you are using the OpenShift Update Service, mirror the release image to a separate repository: USD oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} 3.6.3. Updating a cluster in a disconnected environment using the OpenShift Update Service To get an update experience similar to connected clusters, you can use the following procedures to install and configure the OpenShift Update Service (OSUS) in a disconnected environment. The following steps outline the high-level workflow on how to update a cluster in a disconnected environment using OSUS: Configure access to a secured registry. Update the global cluster pull secret to access your mirror registry. Install the OSUS Operator. Create a graph data container image for the OpenShift Update Service. Install the OSUS application and configure your clusters to use the OpenShift Update Service in your environment. Perform a supported update procedure from the documentation as you would with a connected cluster. 3.6.3.1. Using the OpenShift Update Service in a disconnected environment The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform clusters. Red Hat publicly hosts the OpenShift Update Service, and clusters in a connected environment can connect to the service through public APIs to retrieve update recommendations. However, clusters in a disconnected environment cannot access these public APIs to retrieve update information. To have a similar update experience in a disconnected environment, you can install and configure the OpenShift Update Service so that it is available within the disconnected environment. A single OSUS instance is capable of serving recommendations to thousands of clusters. OSUS can be scaled horizontally to cater to more clusters by changing the replica value. So for most disconnected use cases, one OSUS instance is enough. For example, Red Hat hosts just one OSUS instance for the entire fleet of connected clusters. If you want to keep update recommendations separate in different environments, you can run one OSUS instance for each environment. For example, in a case where you have separate test and stage environments, you might not want a cluster in a stage environment to receive update recommendations to version A if that version has not been tested in the test environment yet. The following sections describe how to install an OSUS instance and configure it to provide update recommendations to a cluster. 
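As a point of reference for the horizontal scaling mentioned above: once the UpdateService application described in the following sections has been created, its replica count is controlled by the spec.replicas field of the UpdateService custom resource. The following sketch assumes the example namespace openshift-update-service and application name service that are used later in this section.
# Raise the number of OpenShift Update Service replicas to three.
# Namespace and application name are the examples used in this section.
oc -n openshift-update-service patch updateservice service \
  --type=merge --patch '{"spec":{"replicas":3}}'

# Confirm the new replica count.
oc -n openshift-update-service get updateservice service \
  -o jsonpath='{.spec.replicas}{"\n"}'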
Additional resources About the OpenShift Update Service Understanding update channels and releases 3.6.3.2. Prerequisites You must have the oc command-line interface (CLI) tool installed. You must provision a container image registry in your environment with the container images for your update, as described in Mirroring OpenShift Container Platform images . 3.6.3.3. Configuring access to a secured registry for the OpenShift Update Service If the release images are contained in a registry whose HTTPS X.509 certificate is signed by a custom certificate authority, complete the steps in Configuring additional trust stores for image registry access along with following changes for the update service. The OpenShift Update Service Operator needs the config map key name updateservice-registry in the registry CA cert. Image registry CA config map example for the update service apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: updateservice-registry: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 2 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 The OpenShift Update Service Operator requires the config map key name updateservice-registry in the registry CA cert. 2 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . 3.6.3.4. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a separate registry to store images than the registry used during installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 3.6.3.5. Installing the OpenShift Update Service Operator To install the OpenShift Update Service, you must first install the OpenShift Update Service Operator by using the OpenShift Container Platform web console or CLI. 
Note For clusters that are installed in disconnected environments, also known as disconnected clusters, Operator Lifecycle Manager by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity. For more information, see Using Operator Lifecycle Manager on restricted networks . 3.6.3.5.1. Installing the OpenShift Update Service Operator by using the web console You can use the web console to install the OpenShift Update Service Operator. Procedure In the web console, click Operators OperatorHub . Note Enter Update Service into the Filter by keyword... field to find the Operator faster. Choose OpenShift Update Service from the list of available Operators, and click Install . Select an Update channel . Select a Version . Select A specific namespace on the cluster under Installation Mode . Select a namespace for Installed Namespace or accept the recommended namespace openshift-update-service . Select an Update approval strategy: The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a cluster administrator to approve the Operator update. Click Install . Go to Operators Installed Operators and verify that the OpenShift Update Service Operator is installed. Ensure that OpenShift Update Service is listed in the correct namespace with a Status of Succeeded . 3.6.3.5.2. Installing the OpenShift Update Service Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the OpenShift Update Service Operator. Procedure Create a namespace for the OpenShift Update Service Operator: Create a Namespace object YAML file, for example, update-service-namespace.yaml , for the OpenShift Update Service Operator: apiVersion: v1 kind: Namespace metadata: name: openshift-update-service annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 1 1 Set the openshift.io/cluster-monitoring label to enable Operator-recommended cluster monitoring on this namespace. Create the namespace: USD oc create -f <filename>.yaml For example: USD oc create -f update-service-namespace.yaml Install the OpenShift Update Service Operator by creating the following objects: Create an OperatorGroup object YAML file, for example, update-service-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: update-service-operator-group namespace: openshift-update-service spec: targetNamespaces: - openshift-update-service Create an OperatorGroup object: USD oc -n openshift-update-service create -f <filename>.yaml For example: USD oc -n openshift-update-service create -f update-service-operator-group.yaml Create a Subscription object YAML file, for example, update-service-subscription.yaml : Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: update-service-subscription namespace: openshift-update-service spec: channel: v1 installPlanApproval: "Automatic" source: "redhat-operators" 1 sourceNamespace: "openshift-marketplace" name: "cincinnati-operator" 1 Specify the name of the catalog source that provides the Operator. For clusters that do not use a custom Operator Lifecycle Manager (OLM), specify redhat-operators . If your OpenShift Container Platform cluster is installed in a disconnected environment, specify the name of the CatalogSource object created when you configured Operator Lifecycle Manager (OLM). 
Create the Subscription object: USD oc create -f <filename>.yaml For example: USD oc -n openshift-update-service create -f update-service-subscription.yaml The OpenShift Update Service Operator is installed to the openshift-update-service namespace and targets the openshift-update-service namespace. Verify the Operator installation: USD oc -n openshift-update-service get clusterserviceversions Example output NAME DISPLAY VERSION REPLACES PHASE update-service-operator.v4.6.0 OpenShift Update Service 4.6.0 Succeeded ... If the OpenShift Update Service Operator is listed, the installation was successful. The version number might be different than shown. Additional resources Installing Operators in your namespace . 3.6.3.6. Creating the OpenShift Update Service graph data container image The OpenShift Update Service requires a graph data container image, from which the OpenShift Update Service retrieves information about channel membership and blocked update edges. Graph data is typically fetched directly from the update graph data repository. In environments where an internet connection is unavailable, loading this information from an init container is another way to make the graph data available to the OpenShift Update Service. The role of the init container is to provide a local copy of the graph data, and during pod initialization, the init container copies the data to a volume that is accessible by the service. Note The oc-mirror OpenShift CLI ( oc ) plugin creates this graph data container image in addition to mirroring release images. If you used the oc-mirror plugin to mirror your release images, you can skip this procedure. Procedure Create a Dockerfile, for example, ./Dockerfile , containing the following: FROM registry.access.redhat.com/ubi9/ubi:latest RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner CMD ["/bin/bash", "-c" ,"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data"] Use the docker file created in the above step to build a graph data container image, for example, registry.example.com/openshift/graph-data:latest : USD podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest Push the graph data container image created in the step to a repository that is accessible to the OpenShift Update Service, for example, registry.example.com/openshift/graph-data:latest : USD podman push registry.example.com/openshift/graph-data:latest Note To push a graph data image to a registry in a disconnected environment, copy the graph data container image created in the step to a repository that is accessible to the OpenShift Update Service. Run oc image mirror --help for available options. 3.6.3.7. Creating an OpenShift Update Service application You can create an OpenShift Update Service application by using the OpenShift Container Platform web console or CLI. 3.6.3.7.1. Creating an OpenShift Update Service application by using the web console You can use the OpenShift Container Platform web console to create an OpenShift Update Service application by using the OpenShift Update Service Operator. Prerequisites The OpenShift Update Service Operator has been installed. The OpenShift Update Service graph data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. 
The current release and update target releases have been mirrored to a registry in the disconnected environment. Procedure In the web console, click Operators Installed Operators . Choose OpenShift Update Service from the list of installed Operators. Click the Update Service tab. Click Create UpdateService . Enter a name in the Name field, for example, service . Enter the local pullspec in the Graph Data Image field to the graph data container image created in "Creating the OpenShift Update Service graph data container image", for example, registry.example.com/openshift/graph-data:latest . In the Releases field, enter the registry and repository created to contain the release images in "Mirroring the OpenShift Container Platform image repository", for example, registry.example.com/ocp4/openshift4-release-images . Enter 2 in the Replicas field. Click Create to create the OpenShift Update Service application. Verify the OpenShift Update Service application: From the UpdateServices list in the Update Service tab, click the Update Service application just created. Click the Resources tab. Verify each application resource has a status of Created . 3.6.3.7.2. Creating an OpenShift Update Service application by using the CLI You can use the OpenShift CLI ( oc ) to create an OpenShift Update Service application. Prerequisites The OpenShift Update Service Operator has been installed. The OpenShift Update Service graph data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. The current release and update target releases have been mirrored to a registry in the disconnected environment. Procedure Configure the OpenShift Update Service target namespace, for example, openshift-update-service : USD NAMESPACE=openshift-update-service The namespace must match the targetNamespaces value from the operator group. Configure the name of the OpenShift Update Service application, for example, service : USD NAME=service Configure the registry and repository for the release images as configured in "Mirroring the OpenShift Container Platform image repository", for example, registry.example.com/ocp4/openshift4-release-images : USD RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images Set the local pullspec for the graph data image to the graph data container image created in "Creating the OpenShift Update Service graph data container image", for example, registry.example.com/openshift/graph-data:latest : USD GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest Create an OpenShift Update Service application object: USD oc -n "USD{NAMESPACE}" create -f - <<EOF apiVersion: updateservice.operator.openshift.io/v1 kind: UpdateService metadata: name: USD{NAME} spec: replicas: 2 releases: USD{RELEASE_IMAGES} graphDataImage: USD{GRAPH_DATA_IMAGE} EOF Verify the OpenShift Update Service application: Use the following command to obtain a policy engine route: USD while sleep 1; do POLICY_ENGINE_GRAPH_URI="USD(oc -n "USD{NAMESPACE}" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{"\n"}' updateservice "USD{NAME}")"; SCHEME="USD{POLICY_ENGINE_GRAPH_URI%%:*}"; if test "USD{SCHEME}" = http -o "USD{SCHEME}" = https; then break; fi; done You might need to poll until the command succeeds. Retrieve a graph from the policy engine. Be sure to specify a valid version for channel . 
For example, if running in OpenShift Container Platform 4.15, use stable-4.15 : USD while sleep 10; do HTTP_CODE="USD(curl --header Accept:application/json --output /dev/stderr --write-out "%{http_code}" "USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.15")"; if test "USD{HTTP_CODE}" -eq 200; then break; fi; echo "USD{HTTP_CODE}"; done This polls until the graph request succeeds; however, the resulting graph might be empty depending on which release images you have mirrored. Note The policy engine route name must not be more than 63 characters based on RFC-1123. If you see ReconcileCompleted status as false with the reason CreateRouteFailed caused by host must conform to DNS 1123 naming convention and must be no more than 63 characters , try creating the Update Service with a shorter name. 3.6.3.8. Configuring the Cluster Version Operator (CVO) After the OpenShift Update Service Operator has been installed and the OpenShift Update Service application has been created, the Cluster Version Operator (CVO) can be updated to pull graph data from the OpenShift Update Service installed in your environment. Prerequisites The OpenShift Update Service Operator has been installed. The OpenShift Update Service graph data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. The current release and update target releases have been mirrored to a registry in the disconnected environment. The OpenShift Update Service application has been created. Procedure Set the OpenShift Update Service target namespace, for example, openshift-update-service : USD NAMESPACE=openshift-update-service Set the name of the OpenShift Update Service application, for example, service : USD NAME=service Obtain the policy engine route: USD POLICY_ENGINE_GRAPH_URI="USD(oc -n "USD{NAMESPACE}" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{"\n"}' updateservice "USD{NAME}")" Set the patch for the pull graph data: USD PATCH="{\"spec\":{\"upstream\":\"USD{POLICY_ENGINE_GRAPH_URI}\"}}" Patch the CVO to use the OpenShift Update Service in your environment: USD oc patch clusterversion version -p USDPATCH --type merge Note See Configuring the cluster-wide proxy to configure the CA to trust the update server. 3.6.3.9. Next steps Before updating your cluster, confirm that the following conditions are met: The Cluster Version Operator (CVO) is configured to use your installed OpenShift Update Service application. The release image signature config map for the new release is applied to your cluster. Note The Cluster Version Operator (CVO) uses release image signatures to ensure that release images have not been modified, by verifying that the release image signatures match the expected result. The current release and update target release images are mirrored to a registry in the disconnected environment. A recent graph data container image has been mirrored to your registry. A recent version of the OpenShift Update Service Operator is installed. Note If you have not recently installed or updated the OpenShift Update Service Operator, there might be a more recent version available. See Using Operator Lifecycle Manager on restricted networks for more information about how to update your OLM catalog in a disconnected environment.
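As a quick check of the first condition, you can compare the CVO upstream with the policy engine URI from the command line. This is an informal sketch rather than a documented verification step; it assumes the example namespace openshift-update-service and application name service used earlier in this section.
# The CVO upstream should point at your local policy engine route.
oc get clusterversion version -o jsonpath='{.spec.upstream}{"\n"}'

# The UpdateService application should report a policy engine URI.
oc -n openshift-update-service get updateservice service \
  -o jsonpath='{.status.policyEngineURI}{"\n"}'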
After you configure your cluster to use the installed OpenShift Update Service and local mirror registry, you can use any of the following update methods: Updating a cluster using the web console Updating a cluster using the CLI Performing a Control Plane Only update Performing a canary rollout update Updating a cluster that includes RHEL compute machines 3.6.4. Updating a cluster in a disconnected environment without the OpenShift Update Service Use the following procedures to update a cluster in a disconnected environment without access to the OpenShift Update Service. 3.6.4.1. Prerequisites You must have the oc command-line interface (CLI) tool installed. You must provision a local container image registry with the container images for your update, as described in Mirroring OpenShift Container Platform images . You must have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . You must have a recent etcd backup in case your update fails and you must restore your cluster to a previous state. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next minor version during a cluster update. See Updating installed Operators for more information on how to check compatibility and, if necessary, update the installed Operators. You must ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. 3.6.4.2. Pausing a MachineHealthCheck resource During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster. Prerequisites Install the OpenShift CLI ( oc ).
Procedure To list all the available MachineHealthCheck resources that you want to pause, run the following command: USD oc get machinehealthcheck -n openshift-machine-api To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to the MachineHealthCheck resource. Run the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused="" The annotated MachineHealthCheck resource resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: "" spec: selector: matchLabels: role: worker unhealthyConditions: - type: "Ready" status: "Unknown" timeout: "300s" - type: "Ready" status: "False" timeout: "300s" maxUnhealthy: "40%" status: currentHealthy: 5 expectedMachines: 5 Important Resume the machine health checks after updating the cluster. To resume the check, remove the pause annotation from the MachineHealthCheck resource by running the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused- 3.6.4.3. Retrieving a release image digest In order to update a cluster in a disconnected environment using the oc adm upgrade command with the --to-image option, you must reference the sha256 digest that corresponds to your targeted release image. Procedure Run the following command on a device that is connected to the internet: USD oc adm release info -o 'jsonpath={.digest}{"\n"}' quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_VERSION}-USD{ARCHITECTURE} For {OCP_RELEASE_VERSION} , specify the version of OpenShift Container Platform to which you want to update, such as 4.10.16 . For {ARCHITECTURE} , specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Example output sha256:a8bfba3b6dddd1a2fbbead7dac65fe4fb8335089e4e7cae327f3bad334add31d Copy the sha256 digest for use when updating your cluster. 3.6.4.4. Updating the disconnected cluster Update the disconnected cluster to the OpenShift Container Platform version that you downloaded the release images for. Note If you have a local OpenShift Update Service, you can update by using the connected web console or CLI instructions instead of this procedure. Prerequisites You mirrored the images for the new release to your registry. You applied the release image signature ConfigMap for the new release to your cluster. Note The release image signature config map allows the Cluster Version Operator (CVO) to ensure the integrity of release images by verifying that the actual image signatures match the expected signatures. You obtained the sha256 digest for your targeted release image. You installed the OpenShift CLI ( oc ). You paused all MachineHealthCheck resources. Procedure Update the cluster: USD oc adm upgrade --allow-explicit-upgrade --to-image <defined_registry>/<defined_repository>@<digest> Where: <defined_registry> Specifies the name of the mirror registry you mirrored your images to. <defined_repository> Specifies the name of the image repository you want to use on the mirror registry. <digest> Specifies the sha256 digest for the targeted release image, for example, sha256:81154f5c03294534e1eaf0319bef7a601134f891689ccede5d705ef659aa8c92 . Note See "Mirroring OpenShift Container Platform images" to review how your mirror registry and repository names are defined. 
If you used an ImageContentSourcePolicy or ImageDigestMirrorSet , you can use the canonical registry and repository names instead of the names you defined. The canonical registry name is quay.io and the canonical repository name is openshift-release-dev/ocp-release . You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy , ImageDigestMirrorSet , or ImageTagMirrorSet object. You cannot add a pull secret to a project. Additional resources Mirroring OpenShift Container Platform images 3.6.4.5. Understanding image registry repository mirroring Setting up container registry repository mirroring enables you to perform the following tasks: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. Repository mirroring in OpenShift Container Platform includes the following attributes: Image pulls are resilient to registry downtimes. Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a datacenter that is in a disconnected environment. After OpenShift Container Platform installation: If you did not configure mirroring during OpenShift Container Platform installation, you can do so postinstallation by using any of the following custom resource (CR) objects: ImageDigestMirrorSet (IDMS). This object allows you to pull images from a mirrored registry by using digest specifications. The IDMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageTagMirrorSet (ITMS). This object allows you to pull images from a mirrored registry by using image tags. The ITMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageContentSourcePolicy (ICSP). This object allows you to pull images from a mirrored registry by using digest specifications. The ICSP CR always falls back to the source registry if the mirrors do not work. Important Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 
If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. For more information, see "Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring" in the following section. Each of these custom resource objects identifies the following information: The source of the container image repository you want to mirror. A separate entry for each mirror repository you want to offer the content requested from the source repository. For new clusters, you can use IDMS, ITMS, and ICSP CR objects as desired. However, using IDMS and ITMS is recommended. If you upgraded a cluster, any existing ICSP objects remain stable, and both IDMS and ICSP objects are supported. Workloads using ICSP objects continue to function as expected. However, if you want to take advantage of the fallback policies introduced in the IDMS CRs, you can migrate current workloads to IDMS objects by using the oc adm migrate icsp command as shown in the Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring section that follows. Migrating to IDMS objects does not require a cluster reboot. Note If your cluster uses an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. 3.6.4.5.1. Configuring image registry repository mirroring You can create postinstallation mirror configuration custom resources (CR) to redirect image pull requests from a source image registry to a mirrored image registry. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source repository to the mirrored repository. For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy \ docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi9/ubi-minimal image from registry.access.redhat.com . After you create the mirrored registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Log in to your OpenShift Container Platform cluster.
Create a postinstallation mirror configuration CR, by using one of the following examples: Create an ImageDigestMirrorSet or ImageTagMirrorSet CR, as needed, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.example.com/redhat 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.example.com 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource 1 Indicates the API to use with this CR. This must be config.openshift.io/v1 . 2 Indicates the kind of object according to the pull type: ImageDigestMirrorSet : Pulls a digest reference image. ImageTagMirrorSet : Pulls a tag reference image. 3 Indicates the type of image pull method, either: imageDigestMirrors : Use for an ImageDigestMirrorSet CR. imageTagMirrors : Use for an ImageTagMirrorSet CR. 4 Indicates the name of the mirrored image registry and repository. 5 Optional: Indicates a secondary mirror repository for each target repository. If one mirror is down, the target repository can use another mirror. 6 Indicates the registry and repository source, which is the repository that is referred to in image pull specifications. 7 Optional: Indicates the fallback policy if the image pull fails: AllowContactingSource : Allows continued attempts to pull the image from the source repository. This is the default. NeverContactSource : Prevents continued attempts to pull the image from the source repository. 8 Optional: Indicates a namespace inside a registry, which allows you to use any image in that namespace. If you use a registry domain as a source, the object is applied to all repositories from the registry. 9 Optional: Indicates a registry, which allows you to use any image in that registry. If you specify a registry name, the object is applied to all repositories from a source registry to a mirror registry. 10 Pulls the image registry.example.com/example/myimage@sha256:... from the mirror mirror.example.net/image@sha256:.. . 11 Pulls the image registry.example.com/example/image@sha256:... in the source registry namespace from the mirror mirror.example.net/image@sha256:... . 12 Pulls the image registry.example.com/myimage@sha256 from the mirror registry example.net/registry-example-com/myimage@sha256:... . Create an ImageContentSourcePolicy custom resource, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 Specifies the name of the mirror image registry and repository. 
2 Specifies the online registry and repository containing the content that is mirrored. Create the new object: USD oc create -f registryrepomirror.yaml After the object is created, the Machine Config Operator (MCO) drains the nodes for ImageTagMirrorSet objects only. The MCO does not drain the nodes for ImageDigestMirrorSet and ImageContentSourcePolicy objects. To check that the mirrored configuration settings are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.28.5 ip-10-0-138-148.ec2.internal Ready master 11m v1.28.5 ip-10-0-139-122.ec2.internal Ready master 11m v1.28.5 ip-10-0-147-35.ec2.internal Ready worker 7m v1.28.5 ip-10-0-153-12.ec2.internal Ready worker 7m v1.28.5 ip-10-0-154-10.ec2.internal Ready master 11m v1.28.5 Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Change your root directory to /host : sh-4.2# chroot /host Check the /etc/containers/registries.conf file to make sure the changes were made: sh-4.2# cat /etc/containers/registries.conf The following output represents a registries.conf file where postinstallation mirror configuration CRs were applied. The final two entries are marked digest-only and tag-only respectively. Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] short-name-mode = "" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" 1 [[registry.mirror]] location = "example.io/example/ubi-minimal" 2 pull-from-mirror = "digest-only" 3 [[registry.mirror]] location = "example.com/example/ubi-minimal" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com" [[registry.mirror]] location = "mirror.example.net/registry-example-com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example" [[registry.mirror]] location = "mirror.example.net" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example/myimage" [[registry.mirror]] location = "mirror.example.net/image" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com" [[registry.mirror]] location = "mirror.example.com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/redhat" [[registry.mirror]] location = "mirror.example.com/redhat" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" blocked = true 4 [[registry.mirror]] location = "example.io/example/ubi-minimal-tag" pull-from-mirror = "tag-only" 5 1 Indicates the repository that is referred to in a pull spec. 2 Indicates the mirror for that repository. 3 Indicates that the image pull from the mirror is a digest reference image. 4 Indicates that the NeverContactSource parameter is set for this repository. 5 Indicates that the image pull from the mirror is a tag reference image. Pull an image to the node from the source and check if it is resolved by the mirror. sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf... Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. 
The first working mirror is used to supply the pulled image. The main registry is only used if no other mirror works. From the system context, the Insecure flags are used as fallback. The format of the /etc/containers/registries.conf file has changed recently. It is now version 2 and in TOML format. 3.6.4.5.2. Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. This functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. ICSP objects are being replaced by ImageDigestMirrorSet and ImageTagMirrorSet objects to configure repository mirroring. If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. The command updates the API to the current version, changes the kind value to ImageDigestMirrorSet , and changes spec.repositoryDigestMirrors to spec.imageDigestMirrors . The rest of the file is not changed. Because the migration does not change the registries.conf file, the cluster does not need to reboot. For more information about ImageDigestMirrorSet or ImageTagMirrorSet objects, see "Configuring image registry repository mirroring" in the preceding section. Prerequisites Access to the cluster as a user with the cluster-admin role. Ensure that you have ImageContentSourcePolicy objects on your cluster. Procedure Use the following command to convert one or more ImageContentSourcePolicy YAML files to an ImageDigestMirrorSet YAML file: USD oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory> where: <file_name> Specifies the name of the source ImageContentSourcePolicy YAML. You can list multiple file names. --dest-dir Optional: Specifies a directory for the output ImageDigestMirrorSet YAML. If unset, the file is written to the current directory. For example, the following command converts the icsp.yaml and icsp-2.yaml files and saves the new YAML files to the idms-files directory. USD oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files Example output wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml Create the CR object by running the following command: USD oc create -f <path_to_the_directory>/<file-name>.yaml where: <path_to_the_directory> Specifies the path to the directory, if you used the --dest-dir flag. <file_name> Specifies the name of the ImageDigestMirrorSet YAML. Remove the ICSP objects after the IDMS objects are rolled out. 3.6.4.6. Widening the scope of the mirror image catalog to reduce the frequency of cluster node reboots You can scope the mirrored image catalog at the repository level or the wider registry level. A widely scoped ImageContentSourcePolicy resource reduces the number of times the nodes need to reboot in response to changes to the resource. To widen the scope of the mirror image catalog in the ImageContentSourcePolicy resource, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Configure a mirrored image catalog for use in your disconnected cluster.
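Before widening the scope, it can be useful to list the mirror configuration objects that already exist on the cluster, including any ImageDigestMirrorSet objects produced by the migration described in the previous section. A minimal sketch; it assumes the ImageContentSourcePolicy, ImageDigestMirrorSet, and ImageTagMirrorSet resource types present on current clusters:
USD oc get imagecontentsourcepolicy,imagedigestmirrorset,imagetagmirrorset
Resource types with no objects simply return no entries.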
Procedure Run the following command, specifying values for <local_registry> , <pull_spec> , and <pull_secret_file> : USD oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry where: <local_registry> is the local registry you have configured for your disconnected cluster, for example, local.registry:5000 . <pull_spec> is the pull specification as configured in your disconnected registry, for example, redhat/redhat-operator-index:v4.15 <pull_secret_file> is the registry.redhat.io pull secret in .json file format. You can download the pull secret from Red Hat OpenShift Cluster Manager . The oc adm catalog mirror command creates a /redhat-operator-index-manifests directory and generates imageContentSourcePolicy.yaml , catalogSource.yaml , and mapping.txt files. Apply the new ImageContentSourcePolicy resource to the cluster: USD oc apply -f imageContentSourcePolicy.yaml Verification Verify that oc apply successfully applied the change to ImageContentSourcePolicy : USD oc get ImageContentSourcePolicy -o yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.openshift.io/v1alpha1","kind":"ImageContentSourcePolicy","metadata":{"annotations":{},"name":"redhat-operator-index"},"spec":{"repositoryDigestMirrors":[{"mirrors":["local.registry:5000"],"source":"registry.redhat.io"}]}} ... After you update the ImageContentSourcePolicy resource, OpenShift Container Platform deploys the new settings to each node and the cluster starts using the mirrored repository for requests to the source repository. 3.6.4.7. Additional resources Using Operator Lifecycle Manager on restricted networks Machine Config Overview 3.6.5. Uninstalling the OpenShift Update Service from a cluster To remove a local copy of the OpenShift Update Service (OSUS) from your cluster, you must first delete the OSUS application and then uninstall the OSUS Operator. 3.6.5.1. Deleting an OpenShift Update Service application You can delete an OpenShift Update Service application by using the OpenShift Container Platform web console or CLI. 3.6.5.1.1. Deleting an OpenShift Update Service application by using the web console You can use the OpenShift Container Platform web console to delete an OpenShift Update Service application by using the OpenShift Update Service Operator. Prerequisites The OpenShift Update Service Operator has been installed. Procedure In the web console, click Operators Installed Operators . Choose OpenShift Update Service from the list of installed Operators. Click the Update Service tab. From the list of installed OpenShift Update Service applications, select the application to be deleted and then click Delete UpdateService . From the Delete UpdateService? confirmation dialog, click Delete to confirm the deletion. 3.6.5.1.2. Deleting an OpenShift Update Service application by using the CLI You can use the OpenShift CLI ( oc ) to delete an OpenShift Update Service application. 
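If you are not sure which namespaces contain OpenShift Update Service applications, you can list the UpdateService objects across all namespaces first. A minimal sketch, assuming you have permission to list the resource cluster-wide:
USD oc get updateservice --all-namespaces
The following procedure then uses the reported name and namespace.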
Procedure Get the OpenShift Update Service application name using the namespace the OpenShift Update Service application was created in, for example, openshift-update-service : USD oc get updateservice -n openshift-update-service Example output NAME AGE service 6s Delete the OpenShift Update Service application using the NAME value from the previous step and the namespace the OpenShift Update Service application was created in, for example, openshift-update-service : USD oc delete updateservice service -n openshift-update-service Example output updateservice.updateservice.operator.openshift.io "service" deleted 3.6.5.2. Uninstalling the OpenShift Update Service Operator You can uninstall the OpenShift Update Service Operator by using the OpenShift Container Platform web console or CLI. 3.6.5.2.1. Uninstalling the OpenShift Update Service Operator by using the web console You can use the OpenShift Container Platform web console to uninstall the OpenShift Update Service Operator. Prerequisites All OpenShift Update Service applications have been deleted. Procedure In the web console, click Operators Installed Operators . Select OpenShift Update Service from the list of installed Operators and click Uninstall Operator . From the Uninstall Operator? confirmation dialog, click Uninstall to confirm the uninstallation. 3.6.5.2.2. Uninstalling the OpenShift Update Service Operator by using the CLI You can use the OpenShift CLI ( oc ) to uninstall the OpenShift Update Service Operator. Prerequisites All OpenShift Update Service applications have been deleted. Procedure Change to the project containing the OpenShift Update Service Operator, for example, openshift-update-service : USD oc project openshift-update-service Example output Now using project "openshift-update-service" on server "https://example.com:6443". Get the name of the OpenShift Update Service Operator operator group: USD oc get operatorgroup Example output NAME AGE openshift-update-service-fprx2 4m41s Delete the operator group, for example, openshift-update-service-fprx2 : USD oc delete operatorgroup openshift-update-service-fprx2 Example output operatorgroup.operators.coreos.com "openshift-update-service-fprx2" deleted Get the name of the OpenShift Update Service Operator subscription: USD oc get subscription Example output NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1 Using the Name value from the previous step, check the current version of the subscribed OpenShift Update Service Operator in the currentCSV field: USD oc get subscription update-service-operator -o yaml | grep " currentCSV" Example output currentCSV: update-service-operator.v0.0.1 Delete the subscription, for example, update-service-operator : USD oc delete subscription update-service-operator Example output subscription.operators.coreos.com "update-service-operator" deleted Delete the CSV for the OpenShift Update Service Operator using the currentCSV value from the previous step: USD oc delete clusterserviceversion update-service-operator.v0.0.1 Example output clusterserviceversion.operators.coreos.com "update-service-operator.v0.0.1" deleted 3.7. Updating hardware on nodes running on vSphere You must ensure that your nodes running in vSphere are running on the hardware version supported by OpenShift Container Platform. Currently, hardware version 15 or later is supported for vSphere virtual machines in a cluster. You can update your virtual hardware immediately or schedule an update in vCenter.
Important Version 4.15 of OpenShift Container Platform requires VMware virtual hardware version 15 or later. Before upgrading OpenShift 4.12 to OpenShift 4.13, you must update vSphere to v7.0.2 or later ; otherwise, the OpenShift 4.12 cluster is marked un-upgradeable . 3.7.1. Updating virtual hardware on vSphere To update the hardware of your virtual machines (VMs) on VMware vSphere, update your virtual machines separately to reduce the risk of downtime for your cluster. Important As of OpenShift Container Platform 4.13, VMware virtual hardware version 13 is no longer supported. You need to update to VMware version 15 or later for supporting functionality. 3.7.1.1. Updating the virtual hardware for control plane nodes on vSphere To reduce the risk of downtime, it is recommended that control plane nodes be updated serially. This ensures that the Kubernetes API remains available and etcd retains quorum. Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 7.0U2 or later. Procedure List the control plane nodes in your cluster. USD oc get nodes -l node-role.kubernetes.io/master Example output NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.28.5 control-plane-node-1 Ready master 75m v1.28.5 control-plane-node-2 Ready master 75m v1.28.5 Note the names of your control plane nodes. Mark the control plane node as unschedulable. USD oc adm cordon <control_plane_node> Shut down the virtual machine (VM) associated with the control plane node. Do this in the vSphere client by right-clicking the VM and selecting Power Shut Down Guest OS . Do not shut down the VM using Power Off because it might not shut down safely. Update the VM in the vSphere client. Follow Upgrade the Compatibility of a Virtual Machine Manually in the VMware documentation for more information. Power on the VM associated with the control plane node. Do this in the vSphere client by right-clicking the VM and selecting Power On . Wait for the node to report as Ready : USD oc wait --for=condition=Ready node/<control_plane_node> Mark the control plane node as schedulable again: USD oc adm uncordon <control_plane_node> Repeat this procedure for each control plane node in your cluster. 3.7.1.2. Updating the virtual hardware for compute nodes on vSphere To reduce the risk of downtime, it is recommended that compute nodes be updated serially. Note Multiple compute nodes can be updated in parallel given workloads are tolerant of having multiple nodes in a NotReady state. It is the responsibility of the administrator to ensure that the required compute nodes are available. Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 7.0U2 or later. Procedure List the compute nodes in your cluster. USD oc get nodes -l node-role.kubernetes.io/worker Example output NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.28.5 compute-node-1 Ready worker 30m v1.28.5 compute-node-2 Ready worker 30m v1.28.5 Note the names of your compute nodes. Mark the compute node as unschedulable: USD oc adm cordon <compute_node> Evacuate the pods from the compute node. There are several ways to do this. 
For example, you can evacuate all or selected pods on a node: USD oc adm drain <compute_node> [--pod-selector=<pod_selector>] See the "Understanding how to evacuate pods on nodes" section for other options to evacuate pods from a node. Shut down the virtual machine (VM) associated with the compute node. Do this in the vSphere client by right-clicking the VM and selecting Power Shut Down Guest OS . Do not shut down the VM using Power Off because it might not shut down safely. Update the VM in the vSphere client. Follow Upgrade the Compatibility of a Virtual Machine Manually in the VMware documentation for more information. Power on the VM associated with the compute node. Do this in the vSphere client by right-clicking the VM and selecting Power On . Wait for the node to report as Ready : USD oc wait --for=condition=Ready node/<compute_node> Mark the compute node as schedulable again: USD oc adm uncordon <compute_node> Repeat this procedure for each compute node in your cluster. 3.7.1.3. Updating the virtual hardware for template on vSphere Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 7.0U2 or later. Procedure If the RHCOS template is configured as a vSphere template, follow Convert a Template to a Virtual Machine in the VMware documentation prior to the next step. Note Once converted from a template, do not power on the virtual machine. Update the virtual machine (VM) in the VMware vSphere client. Complete the steps outlined in Upgrade the Compatibility of a Virtual Machine Manually (VMware vSphere documentation). Convert the VM in the vSphere client to a template by right-clicking on the VM and then selecting Template Convert to Template . Important The steps for converting a VM to a template might change in future vSphere documentation versions. Additional resources Understanding how to evacuate pods on nodes 3.7.2. Scheduling an update for virtual hardware on vSphere Virtual hardware updates can be scheduled to occur when a virtual machine is powered on or rebooted. You can schedule your virtual hardware updates exclusively in vCenter by following Schedule a Compatibility Upgrade for a Virtual Machine in the VMware documentation. When scheduling an update prior to performing an update of OpenShift Container Platform, the virtual hardware update occurs when the nodes are rebooted during the course of the OpenShift Container Platform update. 3.8. Migrating to a cluster with multi-architecture compute machines You can migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines by updating to a multi-architecture, manifest-listed payload. This allows you to add mixed architecture compute nodes to your cluster. For information about configuring your multi-architecture compute machines, see Configuring multi-architecture compute machines on an OpenShift Container Platform cluster . Important Migration from a multi-architecture payload to a single-architecture payload is not supported. Once a cluster has transitioned to using a multi-architecture payload, it can no longer accept a single-architecture update payload. 3.8.1. Migrating to a cluster with multi-architecture compute machines using the CLI Prerequisites You have access to the cluster as a user with the cluster-admin role. Your OpenShift Container Platform version is up to date to at least version 4.13.0.
For more information on how to update your cluster version, see Updating a cluster using the web console or Updating a cluster using the CLI . You have installed the OpenShift CLI ( oc ) that matches the version for your current cluster. Your oc client is updated to at least version 4.13.0. Your OpenShift Container Platform cluster is installed on AWS, Azure, GCP, bare metal, or IBM P/Z platforms. For more information on selecting a supported platform for your cluster installation, see Selecting a cluster installation type . Procedure Verify that the RetrievedUpdates condition is True in the Cluster Version Operator (CVO) by running the following command: USD oc get clusterversion/version -o=jsonpath="{.status.conditions[?(@.type=='RetrievedUpdates')].status}" If the RetrievedUpdates condition is False , you can find supplemental information regarding the failure by using the following command: USD oc adm upgrade For more information about cluster version condition types, see Understanding cluster version condition types . If the condition RetrievedUpdates is False , change the channel to stable-<4.y> or fast-<4.y> with the following command: USD oc adm upgrade channel <channel> After setting the channel, verify if RetrievedUpdates is True . For more information about channels, see Understanding update channels and releases . Migrate to the multi-architecture payload with the following command: USD oc adm upgrade --to-multi-arch Verification You can monitor the migration by running the following command: USD oc adm upgrade Important Machine launches may fail as the cluster settles into the new state. To notice and recover when machines fail to launch, we recommend deploying machine health checks. For more information about machine health checks and how to deploy them, see About machine health checks . The migrations must be complete and all the cluster operators must be stable before you can add compute machine sets with different architectures to your cluster. Additional resources Configuring multi-architecture compute machines on an OpenShift Container Platform cluster Updating a cluster using the web console Updating a cluster using the CLI Understanding cluster version condition types Understanding update channels and releases Selecting a cluster installation type About machine health checks 3.9. Updating hosted control planes On hosted control planes for OpenShift Container Platform, updates are decoupled between the control plane and the nodes. Your service cluster provider, which is the user that hosts the cluster control planes, can manage the updates as needed. The hosted cluster handles control plane updates, and node pools handle node updates. 3.9.1. Updates for the hosted cluster The spec.release value dictates the version of the control plane. The HostedCluster object transmits the intended spec.release value to the HostedControlPlane.spec.release value and runs the appropriate Control Plane Operator version. The hosted control plane manages the rollout of the new version of the control plane components along with any OpenShift Container Platform components through the new version of the Cluster Version Operator (CVO). Important In hosted control planes, the NodeHealthCheck resource cannot detect the status of the CVO. A cluster administrator must manually pause the remediation triggered by NodeHealthCheck , before performing critical operations, such as updating the cluster, to prevent new remediation actions from interfering with cluster updates.
To pause the remediation, enter the array of strings, for example, pause-test-cluster , as a value of the pauseRequests field in the NodeHealthCheck resource. For more information, see About the Node Health Check Operator . After the cluster update is complete, you can edit or delete the remediation. Navigate to the Compute NodeHealthCheck page, click your node health check, and then click Actions , which shows a drop-down list. 3.9.2. Updates for node pools With node pools, you can configure the software that is running in the nodes by exposing the spec.release and spec.config values. You can start a rolling node pool update in the following ways: Changing the spec.release or spec.config values. Changing any platform-specific field, such as the AWS instance type. The result is a set of new instances with the new type. Changing the cluster configuration, if the change propagates to the node. Node pools support replace updates and in-place updates. The nodepool.spec.release value dictates the version of any particular node pool. A NodePool object completes a replace or an in-place rolling update according to the .spec.management.upgradeType value. After you create a node pool, you cannot change the update type. If you want to change the update type, you must create a node pool and delete the other one. 3.9.2.1. Replace updates for node pools A replace update creates instances in the new version while it removes old instances from the version. This update type is effective in cloud environments where this level of immutability is cost effective. Replace updates do not preserve any manual changes because the node is entirely re-provisioned. 3.9.2.2. In place updates for node pools An in-place update directly updates the operating systems of the instances. This type is suitable for environments where the infrastructure constraints are higher, such as bare metal. In-place updates can preserve manual changes, but will report errors if you make manual changes to any file system or operating system configuration that the cluster directly manages, such as kubelet certificates. 3.9.3. Configuring node pools for hosted control planes On hosted control planes, you can configure node pools by creating a MachineConfig object inside of a config map in the management cluster. Procedure To create a MachineConfig object inside of a config map in the management cluster, enter the following information: apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:... mode: 420 overwrite: true path: USD{PATH} 1 1 Sets the path on the node where the MachineConfig object is stored. After you add the object to the config map, you can apply the config map to the node pool as follows: USD oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace> apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: # ... name: nodepool-1 namespace: clusters # ... spec: config: - name: <configmap_name> 1 # ... 1 Replace <configmap_name> with the name of your config map. 3.10. Updating the boot loader on RHCOS nodes using bootupd To update the boot loader on RHCOS nodes using bootupd , you must either run the bootupctl update command on RHCOS machines manually or provide a machine config with a systemd unit. 
Unlike grubby or other boot loader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. To configure kernel arguments, see Adding kernel arguments to nodes . Note You can use bootupd to update the boot loader to protect against the BootHole vulnerability. 3.10.1. Updating the boot loader manually You can manually inspect the status of the system and update the boot loader by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output for x86_64 Component EFI Installed: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 Update: At latest version Example output for aarch64 Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version OpenShift Container Platform clusters initially installed on version 4.4 and older require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 If an update is available, apply the update so that the changes take effect on the next reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 3.10.2. Updating the bootloader automatically via a machine config Another way to automatically update the boot loader with bootupd is to create a systemd service unit that will update the boot loader as needed on every boot. This unit will run the bootupctl update command during the boot process and will be installed on the nodes via a machine config. Note This configuration is not enabled by default as unexpected interruptions of the update operation may lead to unbootable nodes. If you enable this configuration, make sure to avoid interrupting nodes during the boot process while the bootloader update is in progress. The boot loader update operation generally completes quickly, so the risk is low. Create a Butane config file, 99-worker-bootupctl-update.bu , including the contents of the bootupctl-update.service systemd unit. Note See "Creating machine configs with Butane" for information about Butane. Example Butane config variant: openshift version: 4.15.0 metadata: name: 99-worker-bootupctl-update 1 labels: machineconfiguration.openshift.io/role: worker 2 systemd: units: - name: bootupctl-update.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 1 2 On control plane nodes, substitute master for worker in both of these locations. Use Butane to generate a MachineConfig object file, 99-worker-bootupctl-update.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-bootupctl-update.bu -o 99-worker-bootupctl-update.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-bootupctl-update.yaml
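After the machine config is rolled out and the affected nodes have rebooted, you can spot-check a node to confirm that the unit ran and that the boot loader is current. A minimal sketch using a debug pod; the node name is a placeholder:
USD oc debug node/<node_name> -- chroot /host systemctl status bootupctl-update.service --no-pager
USD oc debug node/<node_name> -- chroot /host bootupctl status
The second command should report the installed components as being at the latest version, as in the manual procedure above.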
"while sleep 1; do POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"; SCHEME=\"USD{POLICY_ENGINE_GRAPH_URI%%:*}\"; if test \"USD{SCHEME}\" = http -o \"USD{SCHEME}\" = https; then break; fi; done",
"while sleep 10; do HTTP_CODE=\"USD(curl --header Accept:application/json --output /dev/stderr --write-out \"%{http_code}\" \"USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.6\")\"; if test \"USD{HTTP_CODE}\" -eq 200; then break; fi; echo \"USD{HTTP_CODE}\"; done",
"NAMESPACE=openshift-update-service",
"NAME=service",
"POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"",
"PATCH=\"{\\\"spec\\\":{\\\"upstream\\\":\\\"USD{POLICY_ENGINE_GRAPH_URI}\\\"}}\"",
"oc patch clusterversion version -p USDPATCH --type merge",
"oc get machinehealthcheck -n openshift-machine-api",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-",
"oc adm release info -o 'jsonpath={.digest}{\"\\n\"}' quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_VERSION}-USD{ARCHITECTURE}",
"sha256:a8bfba3b6dddd1a2fbbead7dac65fe4fb8335089e4e7cae327f3bad334add31d",
"oc adm upgrade --allow-explicit-upgrade --to-image <defined_registry>/<defined_repository>@<digest>",
"skopeo copy docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... docker://example.io/example/ubi-minimal",
"apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.example.com/redhat 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.example.com 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.28.5 ip-10-0-138-148.ec2.internal Ready master 11m v1.28.5 ip-10-0-139-122.ec2.internal Ready master 11m v1.28.5 ip-10-0-147-35.ec2.internal Ready worker 7m v1.28.5 ip-10-0-153-12.ec2.internal Ready worker 7m v1.28.5 ip-10-0-154-10.ec2.internal Ready master 11m v1.28.5",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" 1 [[registry.mirror]] location = \"example.io/example/ubi-minimal\" 2 pull-from-mirror = \"digest-only\" 3 [[registry.mirror]] location = \"example.com/example/ubi-minimal\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" [[registry.mirror]] location = \"mirror.example.net\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" [[registry.mirror]] location = \"mirror.example.net/image\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/redhat\" [[registry.mirror]] location = \"mirror.example.com/redhat\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" blocked = true 4 [[registry.mirror]] location = \"example.io/example/ubi-minimal-tag\" pull-from-mirror = \"tag-only\" 5",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf",
"oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>",
"oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files",
"wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml",
"oc create -f <path_to_the_directory>/<file-name>.yaml",
"oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry",
"oc apply -f imageContentSourcePolicy.yaml",
"oc get ImageContentSourcePolicy -o yaml",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.openshift.io/v1alpha1\",\"kind\":\"ImageContentSourcePolicy\",\"metadata\":{\"annotations\":{},\"name\":\"redhat-operator-index\"},\"spec\":{\"repositoryDigestMirrors\":[{\"mirrors\":[\"local.registry:5000\"],\"source\":\"registry.redhat.io\"}]}}",
"oc get updateservice -n openshift-update-service",
"NAME AGE service 6s",
"oc delete updateservice service -n openshift-update-service",
"updateservice.updateservice.operator.openshift.io \"service\" deleted",
"oc project openshift-update-service",
"Now using project \"openshift-update-service\" on server \"https://example.com:6443\".",
"oc get operatorgroup",
"NAME AGE openshift-update-service-fprx2 4m41s",
"oc delete operatorgroup openshift-update-service-fprx2",
"operatorgroup.operators.coreos.com \"openshift-update-service-fprx2\" deleted",
"oc get subscription",
"NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1",
"oc get subscription update-service-operator -o yaml | grep \" currentCSV\"",
"currentCSV: update-service-operator.v0.0.1",
"oc delete subscription update-service-operator",
"subscription.operators.coreos.com \"update-service-operator\" deleted",
"oc delete clusterserviceversion update-service-operator.v0.0.1",
"clusterserviceversion.operators.coreos.com \"update-service-operator.v0.0.1\" deleted",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.28.5 control-plane-node-1 Ready master 75m v1.28.5 control-plane-node-2 Ready master 75m v1.28.5",
"oc adm cordon <control_plane_node>",
"oc wait --for=condition=Ready node/<control_plane_node>",
"oc adm uncordon <control_plane_node>",
"oc get nodes -l node-role.kubernetes.io/worker",
"NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.28.5 compute-node-1 Ready worker 30m v1.28.5 compute-node-2 Ready worker 30m v1.28.5",
"oc adm cordon <compute_node>",
"oc adm drain <compute_node> [--pod-selector=<pod_selector>]",
"oc wait --for=condition=Ready node/<compute_node>",
"oc adm uncordon <compute_node>",
"oc get clusterversion/version -o=jsonpath=\"{.status.conditions[?(.type=='RetrievedUpdates')].status}\"",
"oc adm upgrade",
"oc adm upgrade channel <channel>",
"oc adm upgrade --to-multi-arch",
"oc adm upgrade",
"apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data: mode: 420 overwrite: true path: USD{PATH} 1",
"oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: config: - name: <configmap_name> 1",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 Update: At latest version",
"Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64",
"variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 systemd: units: - name: bootupctl-update.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"butane 99-worker-bootupctl-update.bu -o 99-worker-bootupctl-update.yaml",
"oc apply -f ./99-worker-bootupctl-update.yaml"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/updating_clusters/performing-a-cluster-update
|
Chapter 101. KafkaUserTlsClientAuthentication schema reference
|
Chapter 101. KafkaUserTlsClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserTlsClientAuthentication type from KafkaUserTlsExternalClientAuthentication and KafkaUserScramSha512ClientAuthentication. It must have the value tls for the type KafkaUserTlsClientAuthentication. Property Description type Must be tls. string
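For illustration, a minimal KafkaUser resource that selects this authentication type could look like the following sketch; the API version, user name, and cluster label are assumptions that depend on your Streams for Apache Kafka deployment.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user                     # illustrative user name
  labels:
    strimzi.io/cluster: my-cluster  # assumed to match the name of your Kafka cluster
spec:
  authentication:
    type: tls                       # selects KafkaUserTlsClientAuthentication

With tls authentication, the User Operator typically issues a client certificate for the user and stores it in a secret named after the user.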
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaUserTlsClientAuthentication-reference
|
Chapter 51. Securing passwords with a keystore
|
Chapter 51. Securing passwords with a keystore You can use a keystore to encrypt passwords that are used for communication between Business Central and KIE Server. You should encrypt both controller and KIE Server passwords. If Business Central and KIE Server are deployed to different application servers, then both application servers should use the keystore. Use Java Cryptography Extension KeyStore (JCEKS) for your keystore because it supports symmetric keys. Note If KIE Server is not configured with JCEKS, KIE Server passwords are stored in system properties in plain text form. Prerequisites KIE Server is installed in IBM WebSphere Application Server. A KIE Server user with the kie-server role has been created, as described in Section 48.1, "Creating the KIE Server group and role" . Java 8 or higher is installed. Procedure Create a JCEKS keystore. When prompted, enter the password for the KIE Server user that you created. Set the system properties listed in the following table: Table 51.1. System properties used to load a KIE Server JCEKS System property Placeholder Description kie.keystore.keyStoreURL <KEYSTORE_URL> URL for the JCEKS that you want to use, for example file:///home/kie/keystores/keystore.jceks kie.keystore.keyStorePwd <KEYSTORE_PWD> Password for the JCEKS kie.keystore.key.server.alias <KEY_SERVER_ALIAS> Alias of the key for REST services where the password is stored kie.keystore.key.server.pwd <KEY_SERVER_PWD> Password of the alias for REST services with the stored password kie.keystore.key.ctrl.alias <KEY_CONTROL_ALIAS> Alias of the key for default REST Process Automation Controller where the password is stored kie.keystore.key.ctrl.pwd <KEY_CONTROL_PWD> Password of the alias for default REST Process Automation Controller with the stored password Start KIE Server to verify the configuration.
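As a sketch of this procedure, the keystore can be created with keytool and the properties passed to the application server as JVM options; all paths, aliases, and passwords below are placeholders, and the -importpass subcommand shown for storing a password entry may vary between JDK versions.

# Store the KIE Server and controller passwords in a JCEKS keystore (placeholder values)
keytool -importpass -alias kieserver -keypass kieServerAliasPwd -keystore /home/kie/keystores/keystore.jceks -storepass keystorePwd -storetype JCEKS
keytool -importpass -alias kiectrl -keypass kieCtrlAliasPwd -keystore /home/kie/keystores/keystore.jceks -storepass keystorePwd -storetype JCEKS

# Pass the corresponding system properties to the server JVM
-Dkie.keystore.keyStoreURL=file:///home/kie/keystores/keystore.jceks
-Dkie.keystore.keyStorePwd=keystorePwd
-Dkie.keystore.key.server.alias=kieserver
-Dkie.keystore.key.server.pwd=kieServerAliasPwd
-Dkie.keystore.key.ctrl.alias=kiectrl
-Dkie.keystore.key.ctrl.pwd=kieCtrlAliasPwd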
| null |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/securing-passwords-was-proc_kie-server-on-was
|
Providing feedback on JBoss EAP documentation
|
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you will be prompted to create one. Procedure Click the following link to create a ticket. Include the document URL, the section number, and a description of the issue. Enter a brief description of the issue in the Summary field. Provide a detailed description of the issue or enhancement in the Description field, and include a URL to where the issue occurs in the documentation. Clicking Submit creates the issue and routes it to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/proc_providing-feedback-on-red-hat-documentation_default
|
5.7. Logging HAProxy Messages to rsyslog
|
5.7. Logging HAProxy Messages to rsyslog You can configure your system to log HAProxy messages to rsyslog by writing to the /dev/log socket. Alternatively, you can target the TCP loopback address; however, this results in slower performance. The following procedure configures HAProxy to log messages to rsyslog. In the global section of the HAProxy configuration file, use the log directive to target the /dev/log socket. Update the frontend, backend, and listen proxies to send messages to the rsyslog service you configured in the global section of the HAProxy configuration file. To do this, add a log global directive to the defaults section of the configuration file, as shown. If you are running HAProxy within a chrooted environment, or you let HAProxy create a chroot directory for you by using the chroot configuration directive, the socket must be made available within that chroot directory. You can do this by modifying the rsyslog configuration to create a new listening socket within the chroot filesystem; to do this, add the following lines to your rsyslog configuration file. To customize what and where HAProxy log messages appear, you can use rsyslog filters as described in Basic Configuration of Rsyslog in the System Administrator's Guide.
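Putting these steps together, a minimal haproxy.cfg that logs through the local rsyslog socket could look like the following sketch; the frontend name, bind port, timeouts, and backend address are illustrative.

global
    log /dev/log local0        # send HAProxy messages to the local rsyslog socket

defaults
    log global                 # proxies inherit the global log target
    option httplog
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend www
    bind *:80                  # illustrative listener
    default_backend app

backend app
    server app1 127.0.0.1:8080 check   # illustrative backend server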
|
[
"log /dev/log local0",
"defaults log global option httplog",
"USDModLoad imuxsock USDAddUnixListenSocket PATH_TO_CHROOT /dev/log"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-haproxy-logging
|
Chapter 33. InlineLogging schema reference
|
Chapter 33. InlineLogging schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the InlineLogging type from ExternalLogging . It must have the value inline for the type InlineLogging . Property Property type Description type string Must be inline . loggers map A Map from logger name to logger level.
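For example, inline logging might be enabled on the Kafka cluster section of a Kafka resource as in the fragment below; the logger names and levels are illustrative, and the loggers that are valid depend on the component being configured.

spec:
  kafka:
    # ... other Kafka configuration ...
    logging:
      type: inline                                 # selects InlineLogging
      loggers:
        kafka.root.logger.level: "INFO"            # example root logger level
        log4j.logger.kafka.request.logger: "DEBUG" # example per-logger override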
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-InlineLogging-reference
|
probe::nfsd.open
|
probe::nfsd.open Name probe::nfsd.open - NFS server opening a file for client Synopsis nfsd.open Values fh file handle (the first part is the length of the file handle) type type of file (regular file or dir) access indicates the type of open (read/write/commit/readdir...) client_ip the ip address of client
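A small SystemTap script using this probe might look like the following sketch; it assumes fh is a string and type and access are numeric codes, which matches common versions of the nfsd tapset but may need adjusting, and client_ip is omitted because its representation differs between tapset versions.

# nfsd_open.stp -- print each file open handled by the NFS server (illustrative)
probe nfsd.open {
  printf("nfsd.open: fh=%s type=%d access=%d\n", fh, type, access)
}

Run it with stap nfsd_open.stp on the NFS server and generate client activity to see output.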
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfsd-open
|