title | content | commands | url
---|---|---|---|
2.3. Checking the Status of NetworkManager | 2.3. Checking the Status of NetworkManager To check whether NetworkManager is running, use the systemctl status NetworkManager command (a scripted variant is sketched after this row). Note that the systemctl status command displays Active: inactive (dead) when NetworkManager is not running. | [
"~]USD systemctl status NetworkManager NetworkManager.service - Network Manager Loaded: loaded (/lib/systemd/system/NetworkManager.service; enabled) Active: active (running) since Fri, 08 Mar 2013 12:50:04 +0100; 3 days ago"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-checking_the_status_of_networkmanager |
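For a non-interactive check (for example, from a monitoring script), the same state can be read with systemctl is-active. This is a minimal sketch added for illustration, not part of the Red Hat guide above; it only assumes the NetworkManager unit name shown in the commands column.

```bash
#!/bin/bash
# Minimal sketch: report whether NetworkManager is running.
# "systemctl is-active --quiet" exits 0 when the unit is active.
if systemctl is-active --quiet NetworkManager; then
    echo "NetworkManager is running"
else
    echo "NetworkManager is not running; run 'systemctl status NetworkManager' for details"
    exit 1
fi
```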
Chapter 4. Using Machine Deletion Remediation | Chapter 4. Using Machine Deletion Remediation You can use the Machine Deletion Remediation Operator to reprovision unhealthy nodes using the Machine API. You can use the Machine Deletion Remediation Operator in conjunction with the Node Health Check Operator. 4.1. About the Machine Deletion Remediation Operator The Machine Deletion Remediation (MDR) operator works with the NodeHealthCheck controller, to reprovision unhealthy nodes using the Machine API. MDR follows the annotation on the node to the associated machine object, confirms that it has an owning controller (for example, MachineSetController ), and deletes it. Once the machine CR is deleted, the owning controller creates a replacement. The prerequisites for MDR include: a Machine API-based cluster that is able to programmatically destroy and create cluster nodes, nodes that are associated with machines, and declaratively managed machines. You can then modify the NodeHealthCheck CR to use MDR as its remediator. An example MDR template object and NodeHealthCheck configuration are provided in the documentation. The MDR process works as follows: the Node Health Check Operator detects an unhealthy node and creates a MDR CR. the MDR Operator watches for the MDR CR associated with the unhealthy node and deletes it, if the machine has an owning controller. when the node is healthy again, the MDR CR is deleted by the NodeHealthCheck controller. 4.2. Installing the Machine Deletion Remediation Operator by using the web console You can use the Red Hat OpenShift web console to install the Machine Deletion Remediation Operator. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the Red Hat OpenShift web console, navigate to Operators OperatorHub . Select the Machine Deletion Remediation Operator, or MDR, from the list of available Operators, and then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator is installed to the openshift-workload-availability namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-workload-availability namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the log of the pod in the openshift-workload-availability project for any reported issues. 4.3. Installing the Machine Deletion Remediation Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the Machine Deletion Remediation Operator. You can install the Machine Deletion Remediation Operator in your own namespace or in the openshift-workload-availability namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Create a Namespace custom resource (CR) for the Machine Deletion Remediation Operator: Define the Namespace CR and save the YAML file, for example, workload-availability-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-workload-availability To create the Namespace CR, run the following command: USD oc create -f workload-availability-namespace.yaml Create an OperatorGroup CR: Define the OperatorGroup CR and save the YAML file, for example, workload-availability-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: workload-availability-operator-group namespace: openshift-workload-availability To create the OperatorGroup CR, run the following command: USD oc create -f workload-availability-operator-group.yaml Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, machine-deletion-remediation-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: machine-deletion-remediation-operator namespace: openshift-workload-availability 1 spec: channel: stable name: machine-deletion-remediation-operator source: redhat-operators sourceNamespace: openshift-marketplace package: machine-deletion-remediation 1 Specify the Namespace where you want to install the Machine Deletion Remediation Operator. When installing the Machine Deletion Remediation Operator in the openshift-workload-availability Subscription CR, the Namespace and OperatorGroup CRs will already exist. To create the Subscription CR, run the following command: USD oc create -f machine-deletion-remediation-subscription.yaml Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n openshift-workload-availability Example output NAME DISPLAY VERSION REPLACES PHASE machine-deletion-remediation.v0.3.0 Machine Deletion Remediation Operator 0.3.0 machine-deletion-remediation.v0.2.1 Succeeded 4.4. Configuring the Machine Deletion Remediation Operator You can use the Machine Deletion Remediation Operator, with the Node Health Check Operator, to create the MachineDeletionRemediationTemplate Custom Resource (CR). This CR defines the remediation strategy for the nodes. The MachineDeletionRemediationTemplate CR resembles the following YAML file: apiVersion: machine-deletion-remediation.medik8s.io/v1alpha1 kind: MachineDeletionRemediationTemplate metadata: name: machinedeletionremediationtemplate-sample namespace: openshift-workload-availability spec: template: spec: {} 4.5. Troubleshooting the Machine Deletion Remediation Operator 4.5.1. General troubleshooting Issue You want to troubleshoot issues with the Machine Deletion Remediation Operator. Resolution Check the Operator logs. USD oc logs <machine-deletion-remediation-controller-manager-name> -c manager -n <namespace-name> 4.5.2. Unsuccessful remediation Issue An unhealthy node was not remediated. Resolution Verify that the MachineDeletionRemediation CR was created by running the following command: USD oc get mdr -A If the NodeHealthCheck controller did not create the MachineDeletionRemediation CR when the node turned unhealthy, check the logs of the NodeHealthCheck controller. Additionally, ensure that the NodeHealthCheck CR includes the required specification to use the remediation template. If the MachineDeletionRemediation CR was created, ensure that its name matches the unhealthy node object. 4.5.3. 
Machine Deletion Remediation Operator resources exist even after uninstalling the Operator Issue The Machine Deletion Remediation Operator resources, such as the remediation CR and the remediation template CR, exist even after uninstalling the Operator. Resolution To remove the Machine Deletion Remediation Operator resources, you can delete the resources by selecting the Delete all operand instances for this operator checkbox before uninstalling. This checkbox is available only in Red Hat OpenShift version 4.13 and later. For all versions of Red Hat OpenShift, you can delete the resources by running the following relevant command for each resource type: USD oc delete mdr <machine-deletion-remediation> -n <namespace> USD oc delete mdrt <machine-deletion-remediation-template> -n <namespace> The remediation CR mdr must be created and deleted by the same entity, for example, NHC. If the remediation CR mdr is still present when the MDR Operator is uninstalled, it is deleted together with the Operator. The remediation template CR mdrt only exists if you use MDR with NHC. When the MDR Operator is deleted using the web console, the remediation template CR mdrt is also deleted. (The verification commands from section 4.5 are collected into a short script after this row.) 4.6. Gathering data about the Machine Deletion Remediation Operator To collect debugging information about the Machine Deletion Remediation Operator, use the must-gather tool. For information about the must-gather image for the Machine Deletion Remediation Operator, see Gathering data about specific features . 4.7. Additional resources Using Operator Lifecycle Manager on restricted networks . Deleting Operators from a cluster | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-workload-availability",
"oc create -f workload-availability-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: workload-availability-operator-group namespace: openshift-workload-availability",
"oc create -f workload-availability-operator-group.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: machine-deletion-remediation-operator namespace: openshift-workload-availability 1 spec: channel: stable name: machine-deletion-remediation-operator source: redhat-operators sourceNamespace: openshift-marketplace package: machine-deletion-remediation",
"oc create -f machine-deletion-remediation-subscription.yaml",
"oc get csv -n openshift-workload-availability",
"NAME DISPLAY VERSION REPLACES PHASE machine-deletion-remediation.v0.3.0 Machine Deletion Remediation Operator 0.3.0 machine-deletion-remediation.v0.2.1 Succeeded",
"apiVersion: machine-deletion-remediation.medik8s.io/v1alpha1 kind: MachineDeletionRemediationTemplate metadata: name: machinedeletionremediationtemplate-sample namespace: openshift-workload-availability spec: template: spec: {}",
"oc logs <machine-deletion-remediation-controller-manager-name> -c manager -n <namespace-name>",
"oc get mdr -A",
"oc delete mdr <machine-deletion-remediation> -n <namespace>",
"oc delete mdrt <machine-deletion-remediation-template> -n <namespace>"
] | https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.3/html/remediation_fencing_and_maintenance/machine-deletion-remediation-operator-remediate-nodes |
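The troubleshooting commands from section 4.5 above can be strung together into one quick check, as referenced at the end of that row. This is a hedged sketch rather than a documented procedure: it reuses only the oc commands shown above, and the controller pod name is a placeholder you must substitute for your cluster.

```bash
#!/bin/bash
# Sketch of the MDR troubleshooting flow from section 4.5 (assumes the Operator
# is installed in the openshift-workload-availability namespace).
set -euo pipefail

NAMESPACE=openshift-workload-availability
# Placeholder taken from the documentation; replace with the real controller pod name.
CONTROLLER_POD="<machine-deletion-remediation-controller-manager-name>"

# 1. Confirm the Operator CSV reached the Succeeded phase.
oc get csv -n "$NAMESPACE"

# 2. Check whether a MachineDeletionRemediation CR was created for the unhealthy node.
oc get mdr -A

# 3. Inspect the Operator logs for errors.
oc logs "$CONTROLLER_POD" -c manager -n "$NAMESPACE"
```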
14.19. Setting Schedule Parameters | 14.19. Setting Schedule Parameters The schedinfo command allows scheduler parameters to be passed to guest virtual machines. The following command format should be used: Each parameter is explained below: domain - the guest virtual machine domain --set - the string placed here is the controller or action that is to be called. Additional parameters or values should be added if required. --current - when used with --set , uses the specified set string as the current scheduler information. When used without --set , displays the current scheduler information. --config - when used with --set , uses the specified set string on the next reboot. When used without --set , displays the scheduler information that is saved in the configuration file. --live - when used with --set , uses the specified set string on a guest virtual machine that is currently running. When used without --set , displays the configuration setting currently used by the running virtual machine. The scheduler can be set with any of the following parameters: cpu_shares , vcpu_period , and vcpu_quota . Example 14.5. schedinfo show This example shows the shell guest virtual machine's schedule information. Example 14.6. schedinfo set In this example, cpu_shares is changed to 2046. This affects the current state and not the configuration file. (A combined --live and --config example is sketched after this row.) | [
"virsh schedinfo domain --set --weight --cap --current --config --live",
"virsh schedinfo shell Scheduler : posix cpu_shares : 1024 vcpu_period : 100000 vcpu_quota : -1",
"virsh schedinfo --set cpu_shares=2046 shell Scheduler : posix cpu_shares : 2046 vcpu_period : 100000 vcpu_quota : -1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-managing_guest_virtual_machines_with_virsh-setting_schedule_parameters |
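Building on Examples 14.5 and 14.6, the sketch below shows how --set combines with the --live and --config flags described in this section. It is illustrative only and reuses the shell domain and the cpu_shares value from the examples above.

```bash
# Apply cpu_shares=2046 to the running guest only (not persisted across reboots).
virsh schedinfo shell --set cpu_shares=2046 --live

# Write the same value to the configuration file so it takes effect on the next reboot.
virsh schedinfo shell --set cpu_shares=2046 --config

# Display the scheduler information saved in the configuration file.
virsh schedinfo shell --config
```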
Chapter 2. Metrics solution components | Chapter 2. Metrics solution components Red Hat recommends using Performance Co-Pilot to collect and archive Satellite metrics. Performance Co-Pilot (PCP) Performance Co-Pilot is a suite of tools and libraries for acquiring, storing, and analyzing system-level performance measurements. You can use PCP to analyze live and historical metrics in the CLI. Performance Metric Domain Agents (PMDA) A Performance Metric Domain Agent is a PCP add-on that enables access to the metrics of an application or service. To gather all metrics relevant to Satellite, you have to install the PMDAs for the Apache HTTP Server and PostgreSQL. Grafana A web application that visualizes metrics collected by PCP. To analyze metrics in the web UI, you have to install Grafana and the Grafana PCP plugin. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/monitoring_satellite_performance/metrics-solution-components_monitoring |
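The chapter above lists the components but not the setup commands. The following is a rough sketch only; the package names (pcp, pcp-pmda-apache, pcp-pmda-postgresql, grafana, grafana-pcp), service names, and PMDA install paths are assumptions based on standard RHEL packaging rather than this Satellite chapter, so verify them against the Satellite documentation before use.

```bash
# Install PCP and enable its collector and archive logger (assumed service names).
dnf install -y pcp
systemctl enable --now pmcd pmlogger

# Install and activate the PMDAs for the Apache HTTP Server and PostgreSQL
# (assumed package names and install script locations).
dnf install -y pcp-pmda-apache pcp-pmda-postgresql
(cd /var/lib/pcp/pmdas/apache && ./Install)
(cd /var/lib/pcp/pmdas/postgresql && ./Install)

# Install Grafana and the Grafana PCP plugin for the web UI (assumed package names).
dnf install -y grafana grafana-pcp
systemctl enable --now grafana-server

# Quick check that PCP is collecting system-level metrics.
pcp
```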
Chapter 9. Image configuration resources | Chapter 9. Image configuration resources Use the following procedure to configure image registries. 9.1. Image controller configuration parameters The image.config.openshift.io/cluster resource holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . Its spec offers the following configuration parameters. Note Parameters such as DisableScheduledImport , MaxImagesBulkImportedPerRepository , MaxScheduledImportsPerMinute , ScheduledImageImportMinimumIntervalSeconds , InternalRegistryHostname are not configurable. Parameter Description allowedRegistriesForImport Limits the container image registries from which normal users can import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. Every element of this list contains a location of the registry specified by the registry domain name. domainName : Specifies a domain name for the registry. If the registry uses a non-standard 80 or 443 port, the port should be included in the domain name as well. insecure : Insecure indicates whether the registry is secure or insecure. By default, if not otherwise specified, the registry is assumed to be secure. additionalTrustedCA A reference to a config map containing additional CAs that should be trusted during image stream import , pod image pull , openshift-image-registry pullthrough , and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM-encoded certificate as the value, for each additional registry CA to trust. externalRegistryHostnames Provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The value must be in hostname[:port] format. registrySources Contains configuration that determines how the container runtime should treat individual registries when accessing images for builds and pods. For instance, whether or not to allow insecure access. It does not contain configuration for the internal cluster registry. insecureRegistries : Registries which do not have a valid TLS certificate or only support HTTP connections. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . blockedRegistries : Registries for which image pull and push actions are denied. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are allowed. allowedRegistries : Registries for which image pull and push actions are allowed. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are blocked. 
containerRuntimeSearchRegistries : Registries for which image pull and push actions are allowed using image short names. All other registries are blocked. Either blockedRegistries or allowedRegistries can be set, but not both. Warning When the allowedRegistries parameter is defined, all registries, including registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. When using the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The status field of the image.config.openshift.io/cluster resource holds observed values from the cluster. Parameter Description internalRegistryHostname Set by the Image Registry Operator, which controls the internalRegistryHostname . It sets the hostname for the default OpenShift image registry. The value must be in hostname[:port] format. For backward compatibility, you can still use the OPENSHIFT_DEFAULT_REGISTRY environment variable, but this setting overrides the environment variable. externalRegistryHostnames Set by the Image Registry Operator, provides the external hostnames for the image registry when it is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The values must be in hostname[:port] format. 9.2. Configuring image registry settings You can configure image registry settings by editing the image.config.openshift.io/cluster custom resource (CR). When changes to the registry are applied to the image.config.openshift.io/cluster CR, the Machine Config Operator (MCO) performs the following sequential actions: Cordons the node Applies changes by restarting CRI-O Uncordons the node Note The MCO does not restart nodes when it detects changes. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Image : Holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . 2 allowedRegistriesForImport : Limits the container image registries from which normal users may import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. 
3 additionalTrustedCA : A reference to a config map containing additional certificate authorities (CA) that are trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust. 4 registrySources : Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. You can also define whether or not to allow access to insecure registries or registries that allow registries that use image short names. This example uses the allowedRegistries parameter, which defines the registries that are allowed to be used. The insecure registry insecure.com is also allowed. The registrySources parameter does not contain configuration for the internal cluster registry. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, you must add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. Do not add the registry.redhat.io and quay.io registries to the blockedRegistries list. When using the allowedRegistries , blockedRegistries , or insecureRegistries parameter, you can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . Insecure external registries should be avoided to reduce possible security risks. To check that the changes are applied, list your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.27.3 ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.27.3 ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.27.3 ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.27.3 ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.27.3 ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.27.3 9.2.1. Adding specific registries You can add a list of registries, and optionally an individual repository within a registry, that are permitted for image pull and push actions by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the allowedRegistries parameter, the container runtime searches only those registries. Registries not in the list are blocked. Warning When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. 
If you use the parameter, to prevent pod failure, add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with an allowed list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify registries, and optionally a repository in that registry, to use for image pull and push actions. All other registries are blocked. Note Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it triggers a rollout on nodes in machine config pool (MCP). The allowed registries list is used to update the image signature policy in the /etc/containers/policy.json file on each node. Changes to the /etc/containers/policy.json file do not require the node to drain. Verification Enter the following command to obtain a list of your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b Run the following command to enter debug mode on the node: USD oc debug node/<node_name> When prompted, enter chroot /host into the terminal: sh-4.4# chroot /host Enter the following command to check that the registries have been added to the policy file: sh-5.1# cat /etc/containers/policy.json | jq '.' The following policy indicates that only images from the example.com, quay.io, and registry.redhat.io registries are permitted for image pulls and pushes: Example 9.1. 
Example image signature policy file { "default":[ { "type":"reject" } ], "transports":{ "atomic":{ "example.com":[ { "type":"insecureAcceptAnything" } ], "image-registry.openshift-image-registry.svc:5000":[ { "type":"insecureAcceptAnything" } ], "insecure.com":[ { "type":"insecureAcceptAnything" } ], "quay.io":[ { "type":"insecureAcceptAnything" } ], "reg4.io/myrepo/myapp:latest":[ { "type":"insecureAcceptAnything" } ], "registry.redhat.io":[ { "type":"insecureAcceptAnything" } ] }, "docker":{ "example.com":[ { "type":"insecureAcceptAnything" } ], "image-registry.openshift-image-registry.svc:5000":[ { "type":"insecureAcceptAnything" } ], "insecure.com":[ { "type":"insecureAcceptAnything" } ], "quay.io":[ { "type":"insecureAcceptAnything" } ], "reg4.io/myrepo/myapp:latest":[ { "type":"insecureAcceptAnything" } ], "registry.redhat.io":[ { "type":"insecureAcceptAnything" } ] }, "docker-daemon":{ "":[ { "type":"insecureAcceptAnything" } ] } } } Note If your cluster uses the registrySources.insecureRegistries parameter, ensure that any insecure registries are included in the allowed list. For example: spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000 9.2.2. Blocking specific registries You can block any registry, and optionally an individual repository within a registry, by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the blockedRegistries parameter, the container runtime does not search those registries. All other registries are allowed. Warning To prevent pod failure, do not add the registry.redhat.io and quay.io registries to the blockedRegistries list, as they are required by payload images within your environment. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with a blocked list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify registries, and optionally a repository in that registry, that should not be used for image pull and push actions. All other registries are allowed. Note Either the blockedRegistries registry or the allowedRegistries registry can be set, but not both. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. 
After the nodes return to the Ready state, changes to the blocked registries appear in the /etc/containers/registries.conf file on each node. Verification Enter the following command to obtain a list of your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b Run the following command to enter debug mode on the node: USD oc debug node/<node_name> When prompted, enter chroot /host into the terminal: sh-4.4# chroot /host Enter the following command to check that the registries have been added to the registries.conf file: sh-5.1# cat /etc/containers/registries.conf The following example indicates that images from the untrusted.com registry are prevented for image pulls and pushes: Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "untrusted.com" blocked = true 9.2.2.1. Blocking a payload registry In a mirroring configuration, you can block upstream payload registries in a disconnected environment using an ImageContentSourcePolicy (ICSP) object. The following example procedure demonstrates how to block the quay.io/openshift-payload payload registry. Procedure Create the mirror configuration using an ImageContentSourcePolicy (ICSP) object to mirror the payload to a registry in your instance. The following example ICSP file mirrors the payload internal-mirror.io/openshift-payload : apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: my-icsp spec: repositoryDigestMirrors: - mirrors: - internal-mirror.io/openshift-payload source: quay.io/openshift-payload After the object deploys onto your nodes, verify that the mirror configuration is set by checking the /etc/containers/registries.conf file: Example output [[registry]] prefix = "" location = "quay.io/openshift-payload" mirror-by-digest-only = true [[registry.mirror]] location = "internal-mirror.io/openshift-payload" Use the following command to edit the image.config.openshift.io custom resource file: USD oc edit image.config.openshift.io cluster To block the payload registry, add the following configuration to the image.config.openshift.io custom resource file: spec: registrySources: blockedRegistries: - quay.io/openshift-payload Verification Verify that the upstream payload registry is blocked by checking the /etc/containers/registries.conf file on the node. Example output [[registry]] prefix = "" location = "quay.io/openshift-payload" blocked = true mirror-by-digest-only = true [[registry.mirror]] location = "internal-mirror.io/openshift-payload" 9.2.3. Allowing insecure registries You can add insecure registries, and optionally an individual repository within a registry, by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. Registries that do not use valid SSL certificates or do not require HTTPS connections are considered insecure. Warning Insecure external registries should be avoided to reduce possible security risks. 
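The verification steps in this chapter repeatedly open an interactive debug shell on a node. As a convenience, the same files can be read non-interactively; this is a hedged sketch (it assumes the one-shot form of oc debug node/<name> -- <command> is available in your oc version) rather than a documented procedure.

```bash
# Pick a node name from "oc get nodes"; here the first node is used for illustration.
NODE=$(oc get nodes -o jsonpath='{.items[0].metadata.name}')

# Registry allow/block/insecure/mirror configuration rendered on the node:
oc debug node/"$NODE" -- chroot /host cat /etc/containers/registries.conf

# Image signature policy generated from the allowedRegistries list:
oc debug node/"$NODE" -- chroot /host cat /etc/containers/policy.json
```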
Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with an insecure registries list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify an insecure registry. You can specify a repository in that registry. 3 Ensure that any insecure registries are included in the allowedRegistries list. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster CR for any changes to the registries, then drains and uncordons the nodes when it detects changes. After the nodes return to the Ready state, changes to the insecure and blocked registries appear in the /etc/containers/registries.conf file on each node. Verification To check that the registries have been added to the policy file, use the following command on a node: USD cat /etc/containers/registries.conf The following example indicates that images from the insecure.com registry is insecure and is allowed for image pulls and pushes. Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "insecure.com" insecure = true 9.2.4. Adding registries that allow image short names You can add registries to search for an image short name by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. An image short name enables you to search for images without including the fully qualified domain name in the pull spec. For example, you could use rhel7/etcd instead of registry.access.redhat.com/rhe7/etcd . You might use short names in situations where using the full path is not practical. For example, if your cluster references multiple internal registries whose DNS changes frequently, you would need to update the fully qualified domain names in your pull specs with each change. In this case, using an image short name might be beneficial. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. 
If you created a list of registries under the containerRuntimeSearchRegistries parameter, when pulling an image with a short name, the container runtime searches those registries. Warning Using image short names with public registries is strongly discouraged because the image might not deploy if the public registry requires authentication. Use fully-qualified image names with public registries. Red Hat internal or private registries typically support the use of image short names. If you list public registries under the containerRuntimeSearchRegistries parameter (including the registry.redhat.io , docker.io , and quay.io registries), you expose your credentials to all the registries on the list, and you risk network and registry attacks. Because you can only have one pull secret for pulling images, as defined by the global pull secret, that secret is used to authenticate against every registry in that list. Therefore, if you include public registries in the list, you introduce a security risk. You cannot list multiple public registries under the containerRuntimeSearchRegistries parameter if each public registry requires different credentials and a cluster does not list the public registry in the global pull secret. For a public registry that requires authentication, you can use an image short name only if the registry has its credentials stored in the global pull secret. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, if the containerRuntimeSearchRegistries parameter is added, the MCO creates a file in the /etc/containers/registries.conf.d directory on each node with the listed registries. The file overrides the default list of unqualified search registries in the /etc/containers/registries.conf file. There is no way to fall back to the default list of unqualified search registries. The containerRuntimeSearchRegistries parameter works only with the Podman and CRI-O container engines. The registries in the list can be used only in pod specs, not in builds and image streams. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 ... status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Specify registries to use with image short names. You should use image short names with only internal or private registries to reduce possible security risks. 2 Ensure that any registries listed under containerRuntimeSearchRegistries are included in the allowedRegistries list. 
Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use this parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. Verification Enter the following command to obtain a list of your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b Run the following command to enter debug mode on the node: USD oc debug node/<node_name> When prompted, enter chroot /host into the terminal: sh-4.4# chroot /host Enter the following command to check that the registries have been added to the policy file: sh-5.1# cat /etc/containers/registries.conf.d/01-image-searchRegistries.conf Example output unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io'] 9.2.5. Configuring additional trust stores for image registry access The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access. Prerequisites The certificate authorities (CA) must be PEM-encoded. Procedure You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries. The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the PEM certificate content is the value, for each additional registry CA to trust. Image registry CA config map example apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . You can configure additional CAs with the following procedure. To configure an additional CA: USD oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config USD oc edit image.config.openshift.io cluster spec: additionalTrustedCA: name: registry-config 9.3. Understanding image registry repository mirroring Setting up container registry repository mirroring enables you to perform the following tasks: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. Repository mirroring in OpenShift Container Platform includes the following attributes: Image pulls are resilient to registry downtimes. Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. 
The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a datacenter that is in a disconnected environment. After OpenShift Container Platform installation: If you did not configure mirroring during OpenShift Container Platform installation, you can do so postinstallation by using any of the following custom resource (CR) objects: ImageDigestMirrorSet (IDMS). This object allows you to pull images from a mirrored registry by using digest specifications. The IDMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageTagMirrorSet (ITMS). This object allows you to pull images from a mirrored registry by using image tags. The ITMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageContentSourcePolicy (ICSP). This object allows you to pull images from a mirrored registry by using digest specifications. An ICSP always falls back to the source registry if the mirrors do not work. Important Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. For more information, see "Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring" in the following section. Each of these custom resource objects identify the following information: The source of the container image repository you want to mirror. A separate entry for each mirror repository you want to offer the content requested from the source repository. For new clusters, you can use IDMS, ITMS, and ICSP CRs objects as desired. However, using IDMS and ITMS is recommended. If you upgraded a cluster, any existing ICSP objects remain stable, and both IDMS and ICSP objects are supported. Workloads using ICSP objects continue to function as expected. However, if you want to take advantage of the fallback policies introduced in the IDMS CRs, you can migrate current workloads to IDMS objects by using the oc adm migrate icsp command as shown in the Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring section that follows. Migrating to IDMS objects does not require a cluster reboot. Note If your cluster uses an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. 
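Section 9.3.1 below shows a complete ImageDigestMirrorSet example, but the ImageTagMirrorSet object described above is not shown in full. The following minimal ITMS is constructed by analogy with that IDMS example, so treat the exact field layout as an assumption and check it against your cluster's API before applying it. Note that, as stated later in this section, the Machine Config Operator drains nodes when ImageTagMirrorSet objects are applied.

```yaml
apiVersion: config.openshift.io/v1
kind: ImageTagMirrorSet
metadata:
  name: ubi9repo-tags
spec:
  imageTagMirrors:                          # tag-based pulls, unlike imageDigestMirrors
  - mirrors:
    - example.io/example/ubi-minimal        # mirror tried before the source registry
    source: registry.access.redhat.com/ubi9/ubi-minimal
    mirrorSourcePolicy: AllowContactingSource  # fall back to the source if mirrors fail
```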
Additional resources For more information about global pull secrets, see Updating the global cluster pull secret . 9.3.1. Configuring image registry repository mirroring You can create postinstallation mirror configuration custom resources (CR) to redirect image pull requests from a source image registry to a mirrored image registry. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source repository to the mirrored repository. For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy \ docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi9/ubi-minimal image from registry.access.redhat.com . After you create the mirrored registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Log in to your OpenShift Container Platform cluster. Create a postinstallation mirror configuration CR, by using one of the following examples: Create an ImageDigestMirrorSet or ImageTagMirrorSet CR, as needed, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.redhat.io 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource 1 Indicates the API to use with this CR. This must be config.openshift.io/v1 . 2 Indicates the kind of object according to the pull type: ImageDigestMirrorSet : Pulls a digest reference image. ImageTagMirrorSet : Pulls a tag reference image. 3 Indicates the type of image pull method, either: imageDigestMirrors : Use for an ImageDigestMirrorSet CR. imageTagMirrors : Use for an ImageTagMirrorSet CR. 4 Indicates the name of the mirrored image registry and repository. 5 Optional: Indicates a secondary mirror repository for each target repository. If one mirror is down, the target repository can use another mirror. 6 Indicates the registry and repository source, which is the repository that is referred to in image pull specifications. 
7 Optional: Indicates the fallback policy if the image pull fails: AllowContactingSource : Allows continued attempts to pull the image from the source repository. This is the default. NeverContactSource : Prevents continued attempts to pull the image from the source repository. 8 Optional: Indicates a namespace inside a registry, which allows you to use any image in that namespace. If you use a registry domain as a source, the object is applied to all repositories from the registry. 9 Optional: Indicates a registry, which allows you to use any image in that registry. If you specify a registry name, the object is applied to all repositories from a source registry to a mirror registry. 10 Pulls the image registry.example.com/example/myimage@sha256:... from the mirror mirror.example.net/image@sha256:.. . 11 Pulls the image registry.example.com/example/image@sha256:... in the source registry namespace from the mirror mirror.example.net/image@sha256:... . 12 Pulls the image registry.example.com/myimage@sha256 from the mirror registry example.net/registry-example-com/myimage@sha256:... . Create an ImageContentSourcePolicy custom resource, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 Specifies the name of the mirror image registry and repository. 2 Specifies the online registry and repository containing the content that is mirrored. Create the new object: USD oc create -f registryrepomirror.yaml After the object is created, the Machine Config Operator (MCO) drains the nodes for ImageTagMirrorSet objects only. The MCO does not drain the nodes for ImageDigestMirrorSet and ImageContentSourcePolicy objects. To check that the mirrored configuration settings are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.28.5 ip-10-0-138-148.ec2.internal Ready master 11m v1.28.5 ip-10-0-139-122.ec2.internal Ready master 11m v1.28.5 ip-10-0-147-35.ec2.internal Ready worker 7m v1.28.5 ip-10-0-153-12.ec2.internal Ready worker 7m v1.28.5 ip-10-0-154-10.ec2.internal Ready master 11m v1.28.5 Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Change your root directory to /host : sh-4.2# chroot /host Check the /etc/containers/registries.conf file to make sure the changes were made: sh-4.2# cat /etc/containers/registries.conf The following output represents a registries.conf file where postinstallation mirror configuration CRs were applied. The final two entries are marked digest-only and tag-only respectively. 
Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] short-name-mode = "" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" 1 [[registry.mirror]] location = "example.io/example/ubi-minimal" 2 pull-from-mirror = "digest-only" 3 [[registry.mirror]] location = "example.com/example/ubi-minimal" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com" [[registry.mirror]] location = "mirror.example.net/registry-example-com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example" [[registry.mirror]] location = "mirror.example.net" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example/myimage" [[registry.mirror]] location = "mirror.example.net/image" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.redhat.io" [[registry.mirror]] location = "mirror.example.com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.redhat.io/openshift4" [[registry.mirror]] location = "mirror.example.com/redhat" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" blocked = true 4 [[registry.mirror]] location = "example.io/example/ubi-minimal-tag" pull-from-mirror = "tag-only" 5 1 Indicates the repository that is referred to in a pull spec. 2 Indicates the mirror for that repository. 3 Indicates that the image pull from the mirror is a digest reference image. 4 Indicates that the NeverContactSource parameter is set for this repository. 5 Indicates that the image pull from the mirror is a tag reference image. Pull an image to the node from the source and check if it is resolved by the mirror. sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf... Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. The first working mirror is used to supply the pulled image. The main registry is only used if no other mirror works. From the system context, the Insecure flags are used as fallback. The format of the /etc/containers/registries.conf file has changed recently. It is now version 2 and in TOML format. 9.3.2. Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. This functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. ICSP objects are being replaced by ImageDigestMirrorSet and ImageTagMirrorSet objects to configure repository mirroring. If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. The command updates the API to the current version, changes the kind value to ImageDigestMirrorSet , and changes spec.repositoryDigestMirrors to spec.imageDigestMirrors . The rest of the file is not changed. Because the migration does not change the registries.conf file, the cluster does not need to reboot. 
For more information about ImageDigestMirrorSet or ImageTagMirrorSet objects, see "Configuring image registry repository mirroring" in the section. Prerequisites Access to the cluster as a user with the cluster-admin role. Ensure that you have ImageContentSourcePolicy objects on your cluster. Procedure Use the following command to convert one or more ImageContentSourcePolicy YAML files to an ImageDigestMirrorSet YAML file: USD oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory> where: <file_name> Specifies the name of the source ImageContentSourcePolicy YAML. You can list multiple file names. --dest-dir Optional: Specifies a directory for the output ImageDigestMirrorSet YAML. If unset, the file is written to the current directory. For example, the following command converts the icsp.yaml and icsp-2.yaml file and saves the new YAML files to the idms-files directory. USD oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files Example output wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml Create the CR object by running the following command: USD oc create -f <path_to_the_directory>/<file-name>.yaml where: <path_to_the_directory> Specifies the path to the directory, if you used the --dest-dir flag. <file_name> Specifies the name of the ImageDigestMirrorSet YAML. Remove the ICSP objects after the IDMS objects are rolled out. | [
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.27.3 ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.27.3 ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.27.3 ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.27.3 ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.27.3 ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.27.3",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-5.1# cat /etc/containers/policy.json | jq '.'",
"{ \"default\":[ { \"type\":\"reject\" } ], \"transports\":{ \"atomic\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker-daemon\":{ \"\":[ { \"type\":\"insecureAcceptAnything\" } ] } } }",
"spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-5.1# cat etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"untrusted.com\" blocked = true",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: my-icsp spec: repositoryDigestMirrors: - mirrors: - internal-mirror.io/openshift-payload source: quay.io/openshift-payload",
"[[registry]] prefix = \"\" location = \"quay.io/openshift-payload\" mirror-by-digest-only = true [[registry.mirror]] location = \"internal-mirror.io/openshift-payload\"",
"oc edit image.config.openshift.io cluster",
"spec: registrySources: blockedRegistries: - quay.io/openshift-payload",
"[[registry]] prefix = \"\" location = \"quay.io/openshift-payload\" blocked = true mirror-by-digest-only = true [[registry.mirror]] location = \"internal-mirror.io/openshift-payload\"",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"insecure.com\" insecure = true",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-5.1# cat /etc/containers/registries.conf.d/01-image-searchRegistries.conf",
"unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io']",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"skopeo copy docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... docker://example.io/example/ubi-minimal",
"apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.redhat.io 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.28.5 ip-10-0-138-148.ec2.internal Ready master 11m v1.28.5 ip-10-0-139-122.ec2.internal Ready master 11m v1.28.5 ip-10-0-147-35.ec2.internal Ready worker 7m v1.28.5 ip-10-0-153-12.ec2.internal Ready worker 7m v1.28.5 ip-10-0-154-10.ec2.internal Ready master 11m v1.28.5",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" 1 [[registry.mirror]] location = \"example.io/example/ubi-minimal\" 2 pull-from-mirror = \"digest-only\" 3 [[registry.mirror]] location = \"example.com/example/ubi-minimal\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" [[registry.mirror]] location = \"mirror.example.net\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" [[registry.mirror]] location = \"mirror.example.net/image\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" [[registry.mirror]] location = \"mirror.example.com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" [[registry.mirror]] location = \"mirror.example.com/redhat\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" blocked = true 4 [[registry.mirror]] location = \"example.io/example/ubi-minimal-tag\" pull-from-mirror = \"tag-only\" 5",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf",
"oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>",
"oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files",
"wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml",
"oc create -f <path_to_the_directory>/<file-name>.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/images/image-configuration |
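To close out the ICSP-to-IDMS migration described in this chapter, the commands below are a hedged sketch of the final verification and cleanup; the ICSP object name mirror-ocp is an assumption taken from the earlier example, and you should confirm that the replacement ImageDigestMirrorSet objects exist before deleting anything.

# List the ImageDigestMirrorSet objects created from the converted files
oc get imagedigestmirrorset

# List any ImageContentSourcePolicy objects that are still present
oc get imagecontentsourcepolicy

# Remove an old ICSP object after its IDMS replacement is rolled out (object name is illustrative)
oc delete imagecontentsourcepolicy mirror-ocp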
Chapter 6. Installation configuration parameters for IBM Z and IBM LinuxONE | Chapter 6. Installation configuration parameters for IBM Z and IBM LinuxONE Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 6.1. Available installation configuration parameters for IBM Z The following tables specify the required, optional, and IBM Z-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. 
networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . 
String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode.
If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_z_and_ibm_linuxone/installation-config-parameters-ibm-z |
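To show how the parameters described in this chapter fit together, the following is a minimal, hedged sketch of an install-config.yaml for IBM Z. The base domain, cluster name, network ranges, pull secret, and SSH key are placeholder assumptions; the top-level platform of none: {} and the compute replica count of 0 are assumptions reflecting the common user-provisioned pattern where worker machines are created separately. Adjust every value to your environment.

cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
compute:
- architecture: s390x
  hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  architecture: s390x
  hyperthreading: Enabled
  name: master
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16
platform:
  none: {}
fips: false
pullSecret: '{"auths":{"cloud.openshift.com":{"auth":"b3Blb=","email":"[email protected]"}}}'
sshKey: 'ssh-ed25519 AAAA...'
EOF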
Chapter 12. Additional Installation Options | Chapter 12. Additional Installation Options All Red Hat Certificate System instances created with pkispawn make certain assumptions about the instances being installed, such as the default signing algorithm to use for CA signing certificates and whether to allow IPv6 addresses for hosts. This chapter describes additional configuration options that impact the installation and configuration for new instances, so many of these procedures occur before the instance is created. 12.1. Lightweight Sub-CAs Using the default settings, you are able to create lightweight sub-CAs. They enable you to configure services, like virtual private network (VPN) gateways, to accept only certificates issued by one sub-CA. At the same time, you can configure other services to accept only certificates issued by a different sub-CA or the root CA. If you revoke the intermediate certificate of a sub-CA, all certificates issued by this sub-CA are automatically invalid. If you set up the CA subsystem in Certificate System, it is automatically the root CA. All sub-CAs you create, are subordinated to this root CA. 12.1.1. Setting up a Lightweight Sub-CA Depending on your environment, the installation of a sub-CA differs between Internal CAs and External CAs. For more information, see Section 7.8, "Setting up Subsystems with an External CA" . 12.1.2. Disabling the Creation of Lightweight Sub-CAs In certain situations, administrators want to disable lightweight sub-CAs. To prevent adding, modifying, or removing sub-CAs, enter the following command on the Directory Server instance used by Certificate System: This command removes the default Access Control List (ACL) entries, which grant the permissions to manage sub-CAs. Note If any ACLs related to lightweight sub-CA creation have been modified or added, remove the relevant values. 12.1.3. Re-enabling the Creation of Lightweight Sub-CAs If you previously disabled the creation of lightweight sub-CAs, you can re-enable the feature by entering the following command on the Directory Server instance used by Certificate System: This command adds the Access Control List (ACL) entries, which grant the permissions to manage sub-CAs. | [
"ldapmodify -D \"cn=Directory Manager\" -W -x -h server.example.com dn: cn=aclResources,o= instance_name changetype: modify delete: resourceACLS resourceACLS: certServer.ca.authorities:create,modify:allow (create,modify) group=\"Administrators\":Administrators may create and modify lightweight authorities delete: resourceACLS resourceACLS: certServer.ca.authorities:delete:allow (delete) group=\"Administrators\":Administrators may delete lightweight authorities",
"ldapmodify -D \"cn=Directory Manager\" -W -x -h server.example.com dn: cn=aclResources,o= instance_name changetype: modify add: resourceACLS resourceACLS: certServer.ca.authorities:create,modify:allow (create,modify) group=\"Administrators\":Administrators may create and modify lightweight authorities resourceACLS: certServer.ca.authorities:delete:allow (delete) group=\"Administrators\":Administrators may delete lightweight authorities"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/additionalinstalloptions |
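Before disabling or re-enabling lightweight sub-CA creation as described above, you may want to check which ACL entries are currently present. The following is a hedged sketch of one way to query them; the host name server.example.com and the instance_name placeholder mirror the examples in this chapter and must be replaced with your own values.

# Query the ACL resource entry and filter for the lightweight sub-CA permissions
ldapsearch -D "cn=Directory Manager" -W -x -h server.example.com \
    -b "cn=aclResources,o=instance_name" -s base resourceACLS | grep certServer.ca.authorities

If the certServer.ca.authorities entries are listed, sub-CA creation is currently enabled; if they are absent, the feature has been disabled.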
6.3. Rebooting a Virtual Machine | 6.3. Rebooting a Virtual Machine Rebooting a Virtual Machine Click Compute Virtual Machines and select a running virtual machine. Click Reboot . Click OK in the Reboot Virtual Machine(s) confirmation window. The Status of the virtual machine changes to Reboot In Progress before returning to Up . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/rebooting_a_virtual_machine |
Chapter 3. Working in code-server | Chapter 3. Working in code-server Code-server is a web-based interactive development environment supporting multiple programming languages, including Python, for working with Jupyter notebooks. With the code-server workbench image, you can customize your workbench environment to meet your needs using a variety of extensions to add new languages, themes, debuggers, and connect to additional services. For more information, see code-server in GitHub . Note Elyra-based pipelines are not available with the code-server workbench image. 3.1. Creating code-server workbenches You can create a blank Jupyter notebook or import a Jupyter notebook in code-server from several different sources. 3.1.1. Creating a workbench When you create a workbench, you specify an image (an IDE, packages, and other dependencies). You can also configure connections, cluster storage, and add container storage. Prerequisites You have logged in to Red Hat OpenShift AI. If you use OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You created a project. If you created a Simple Storage Service (S3) account outside of Red Hat OpenShift AI and you want to create connections to your existing S3 storage buckets, you have the following credential information for the storage buckets: Endpoint URL Access key Secret key Region Bucket name For more information, see Working with data in an S3-compatible object store . Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that you want to add the workbench to. A project details page opens. Click the Workbenches tab. Click Create workbench . The Create workbench page opens. In the Name field, enter a unique name for your workbench. Optional: If you want to change the default resource name for your workbench, click Edit resource name . The resource name is what your resource is labeled in OpenShift. Valid characters include lowercase letters, numbers, and hyphens (-). The resource name cannot exceed 30 characters, and it must start with a letter and end with a letter or number. Note: You cannot change the resource name after the workbench is created. You can edit only the display name and the description. Optional: In the Description field, enter a description for your workbench. In the Notebook image section, complete the fields to specify the workbench image to use with your workbench. From the Image selection list, select a workbench image that suits your use case. A workbench image includes an IDE and Python packages (reusable code). Optionally, click View package information to view a list of packages that are included in the image that you selected. If the workbench image has multiple versions available, select the workbench image version to use from the Version selection list. To use the latest package versions, Red Hat recommends that you use the most recently added image. Note You can change the workbench image after you create the workbench. In the Deployment size section, from the Container size list, select a container size for your server. The container size specifies the number of CPUs and the amount of memory allocated to the container, setting the guaranteed minimum (request) and maximum (limit) for both. Optional: In the Environment variables section, select and specify values for any environment variables. 
Setting environment variables during the workbench configuration helps you save time later because you do not need to define them in the body of your notebooks, or with the IDE command line interface. If you are using S3-compatible storage, add these recommended environment variables: AWS_ACCESS_KEY_ID specifies your Access Key ID for Amazon Web Services. AWS_SECRET_ACCESS_KEY specifies your Secret access key for the account specified in AWS_ACCESS_KEY_ID . OpenShift AI stores the credentials as Kubernetes secrets in a protected namespace if you select Secret when you add the variable. In the Cluster storage section, configure the storage for your workbench. Select one of the following options: Create new persistent storage to create storage that is retained after you shut down your workbench. Complete the relevant fields to define the storage: Enter a name for the cluster storage. Enter a description for the cluster storage. Select a storage class for the cluster storage. Note You cannot change the storage class after you add the cluster storage to the workbench. Under Persistent storage size , enter a new size in gibibytes or mebibytes. Use existing persistent storage to reuse existing storage and select the storage from the Persistent storage list. Optional: You can add a connection to your workbench. A connection is a resource that contains the configuration parameters needed to connect to a data source or sink, such as an object storage bucket. You can use storage buckets for storing data, models, and pipeline artifacts. You can also use a connection to specify the location of a model that you want to deploy. In the Connections section, use an existing connection or create a new connection: Use an existing connection as follows: Click Attach existing connections . From the Connection list, select a connection that you previously defined. Create a new connection as follows: Click Create connection . The Add connection dialog appears. From the Connection type drop-down list, select the type of connection. The Connection details section appears. If you selected S3 compatible object storage in the preceding step, configure the connection details: In the Connection name field, enter a unique name for the connection. Optional: In the Description field, enter a description for the connection. In the Access key field, enter the access key ID for the S3-compatible object storage provider. In the Secret key field, enter the secret access key for the S3-compatible object storage account that you specified. In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket. In the Region field, enter the default region of your S3-compatible object storage account. In the Bucket field, enter the name of your S3-compatible object storage bucket. Click Create . If you selected URI in the preceding step, configure the connection details: In the Connection name field, enter a unique name for the connection. Optional: In the Description field, enter a description for the connection. In the URI field, enter the Uniform Resource Identifier (URI). Click Create . Click Create workbench . Verification The workbench that you created appears on the Workbenches tab for the project. Any cluster storage that you associated with the workbench during the creation process appears on the Cluster storage tab for the project. The Status column on the Workbenches tab displays a status of Starting when the workbench server is starting, and Running when the workbench has successfully started. 
Optional: Click the Open link to open the IDE in a new window. 3.1.2. Uploading an existing notebook file to code-server from local storage You can load an existing notebook from local storage into code-server to continue work, or adapt a project for a new use case. Prerequisites You have a running code-server workbench. You have a notebook file in your local storage. Procedure In your code-server window, from the Activity Bar, select the menu icon ( ) File Open File . In the Open File dialog, click the Show Local button. Locate and select the notebook file and then click Open . The file is displayed in the code-server window. Save the file and then push the changes to your repository. Verification The notebook file appears in the code-server Explorer view. You can open the notebook file in the code-server window. 3.2. Collaborating on workbenches in code-server by using Git If your notebooks or other files are stored in Git version control, you can clone a Git repository to work with them in code-server. When you are ready, you can push your changes back to the Git repository so that others can review or use your models. 3.2.1. Uploading an existing notebook file from a Git repository by using code-server You can use the code-server user interface to clone a Git repository into your workspace to continue your work or integrate files from an external project. Prerequisites You have a running code-server workbench. You have read access for the Git repository you want to clone. Procedure Copy the HTTPS URL for the Git repository. In GitHub, click β€ Code HTTPS and then click the Copy URL to clipboard icon. In GitLab, click Code and then click the Copy URL icon under Clone with HTTPS . In your code-server window, from the Activity Bar, select the menu icon ( ) View Command Palette . In the Command Palette, enter Git: Clone , and then select Git: Clone from the list. Paste the HTTPS URL of the repository that contains your notebook, and then press Enter. If prompted, enter your username and password for the Git repository. Select a folder to clone the repository into, and then click OK . When the repository is cloned, a dialog appears asking if you want to open the cloned repository. Click Open in the dialog. Verification Check that the contents of the repository are visible in the code-server Explorer view, or run the ls command in the terminal to verify that the repository shows as a directory. 3.2.2. Uploading an existing notebook file to code-server from a Git repository by using the CLI You can use the command line interface to clone a Git repository into your workspace to continue your work or integrate files from an external project. Prerequisites You have a running code-server workbench. Procedure Copy the HTTPS URL for the Git repository. In GitHub, click β€ Code HTTPS and then click the Copy URL to clipboard icon. In GitLab, click Code and then click the Copy URL icon under Clone with HTTPS . In your code-server window, from the Activity Bar, select the menu icon ( ) Terminal New Terminal to open a terminal window. Enter the git clone command: Replace <git-clone-URL> with the HTTPS URL, for example: Verification Check that the contents of the repository are visible in the code-server Explorer view, or run the ls command in the terminal to verify that the repository shows as a directory. 3.2.3. Updating your project in code-server with changes from a remote Git repository You can pull changes made by other users into your workbench from a remote Git repository. 
Prerequisites You have configured the remote Git repository. You have imported the Git repository into code-server, and the contents of the repository are visible in the Explorer view in code-server. You have permissions to pull files from the remote Git repository to your local repository. You have a running code-server workbench. Procedure In your code-server window, from the Activity Bar, click the Source Control icon ( ). Click the Views and More Actions button ( ... ), and then select Pull . Verification You can view the changes pulled from the remote repository in the Source Control pane. 3.2.4. Pushing project changes in code-server to a Git repository To build and deploy your application in a production environment, upload your work to a remote Git repository. Prerequisites You have a running code-server workbench. You have added the relevant Git repository in code-server. You have permission to push changes to the relevant Git repository. You have installed the Git version control extension. Procedure In your code-server window, from the Activity Bar, select the menu icon ( ) File Save All to save any unsaved changes. Click the Source Control icon ( ) to open the Source Control pane. Confirm that your changed files appear under Changes . Next to the Changes heading, click the Stage All Changes button (+). The staged files move to the Staged Changes section. In the Message field, enter a brief description of the changes you made. Next to the Commit button, click the More Actions... button, and then click Commit & Sync . If prompted, enter your Git credentials and click OK . Verification Your most recently pushed changes are visible in the remote Git repository. 3.3. Managing Python packages in code-server In code-server, you can view the Python packages that are installed on your workbench image and install additional packages. 3.3.1. Viewing Python packages installed on your code-server workbench You can check which Python packages are installed on your workbench and which version of the package you have by running the pip tool in a terminal window. Prerequisites You have a running code-server workbench. Procedure In your code-server window, from the Activity Bar, select the menu icon ( ) Terminal New Terminal to open a terminal window. Enter the pip list command. Verification The output shows an alphabetical list of all installed Python packages and their versions. For example, if you use the pip list command immediately after creating a notebook server that uses the Minimal image, the first packages shown are similar to the following: 3.3.2. Installing Python packages on your code-server workbench You can install Python packages that are not part of the default workbench image by adding the package and the version to a requirements.txt file and then running the pip install command in a terminal window. Note Although you can install packages directly, it is recommended that you use a requirements.txt file so that the packages stated in the file can be easily re-used across different notebooks. Prerequisites You have a running code-server workbench. Procedure In your code-server window, from the Activity Bar, select the menu icon ( ) File New Text File to create a new text file. Add the packages to install to the text file. You can specify the exact version to install by using the == (equal to) operator, for example: Note Red Hat recommends specifying exact package versions to enhance the stability of your workbench over time.
New package versions can introduce undesirable or unexpected changes in your environment's behavior. To install multiple packages at the same time, place each package on a separate line. Save the text file as requirements.txt . From the Activity Bar, select the menu icon ( ) Terminal New Terminal to open a terminal window. Install the packages in requirements.txt to your server by using the following command: Important The pip install command installs the package on your workbench. However, you must run the import statement to use the package in your code. Verification Confirm that the packages in the requirements.txt file appear in the list of packages installed on the workbench. See Viewing Python packages installed on your code-server workbench for details. 3.4. Installing extensions with code-server With the code-server workbench image, you can customize your code-server environment by using extensions to add new languages, themes, and debuggers, and to connect to additional services. You can also enhance the efficiency of your data science work with extensions for syntax highlighting, auto-indentation, and bracket matching. For details about the third-party extensions that you can install with code-server, see the Open VSX Registry . Prerequisites You are logged in to Red Hat OpenShift AI. You have created a data science project that has a code-server workbench. Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project containing the code-server workbench you want to start. A project details page opens. Click the Workbenches tab. If the status of the workbench that you want to use is Running , skip to the step. If the status of the workbench is Stopped , in the Status column for the workbench, click Start . The Status column changes from Stopped to Starting when the workbench server is starting, and then to Running when the workbench has successfully started. Click the Open link to the workbench. The code-server window opens. In the Activity Bar, click the Extensions icon ( ). Search for the name of the extension you want to install. Click Install to add the extension to your code-server environment. Verification In the Browser - Installed list on the Extensions panel, confirm that the extension you installed is listed. | [
"git clone <git-clone-URL>",
"git clone https://github.com/example/myrepo.git Cloning into myrepo remote: Enumerating objects: 11, done. remote: Counting objects: 100% (11/11), done. remote: Compressing objects: 100% (10/10), done. remote: Total 2821 (delta 1), reused 5 (delta 1), pack-reused 2810 Receiving objects: 100% (2821/2821), 39.17 MiB | 23.89 MiB/s, done. Resolving deltas: 100% (1416/1416), done.",
"pip list",
"Package Version ------------------------ ---------- asttokens 2.4.1 boto3 1.34.162 botocore 1.34.162 cachetools 5.5.0 certifi 2024.8.30 charset-normalizer 3.4.0 comm 0.2.2 contourpy 1.3.0 cycler 0.12.1 debugpy 1.8.7",
"altair",
"altair==4.1.0",
"pip install -r requirements.txt",
"import altair"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_in_your_data_science_ide/working_in_code_server |
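The pull and push procedures in sections 3.2.3 and 3.2.4 use the code-server Source Control pane, but the same workflow can be run from the built-in terminal. The following is a hedged sketch using plain git commands; the branch name main and the commit message are placeholder assumptions, and your repository may use a different default branch.

# Pull the latest changes from the remote repository into the current branch
git pull

# Stage all local changes, record them in a commit, and push the commit to the remote
git add -A
git commit -m "Describe your change here"
git push origin main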
Preface | Preface Red Hat Developer Hub (Developer Hub) 1.4 is now generally available. Developer Hub is a fully supported, enterprise-grade productized version of upstream Backstage v1.32.6. You can access and download the Red Hat Developer Hub application from the Red Hat Customer Portal or from the Ecosystem Catalog . | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/release_notes/pr01 |
Chapter 14. Idling applications | Chapter 14. Idling applications Cluster administrators can idle applications to reduce resource consumption. This is useful when the cluster is deployed on a public cloud where cost is related to resource consumption. If any scalable resources are not in use, OpenShift Dedicated discovers and idles them by scaling their replicas to 0 . The next time network traffic is directed to the resources, the resources are unidled by scaling up the replicas, and normal operation continues. Applications are made of services, as well as other scalable resources, such as deployment configs. The action of idling an application involves idling all associated resources. 14.1. Idling applications Idling an application involves finding the scalable resources (deployment configurations, replication controllers, and others) associated with a service. Idling an application finds the service and marks it as idled, scaling down the resources to zero replicas. You can use the oc idle command to idle a single service, or use the --resource-names-file option to idle multiple services. 14.1.1. Idling a single service Procedure To idle a single service, run: USD oc idle <service> 14.1.2. Idling multiple services Idling multiple services is helpful if an application spans across a set of services within a project, or when idling multiple services in conjunction with a script to idle multiple applications in bulk within the same project. Procedure Create a file containing a list of the services, each on their own line. Idle the services using the --resource-names-file option: USD oc idle --resource-names-file <filename> Note The idle command is limited to a single project. For idling applications across a cluster, run the idle command for each project individually. 14.2. Unidling applications Application services become active again when they receive network traffic and are scaled back up to their previous state. This includes both traffic to the services and traffic passing through routes. Applications can also be manually unidled by scaling up the resources. Procedure To scale up a DeploymentConfig, run: USD oc scale --replicas=1 dc <dc_name> Note Automatic unidling by a router is currently only supported by the default HAProxy router. | [
"oc idle <service>",
"oc idle --resource-names-file <filename>",
"oc scale --replicas=1 dc <dc_name>"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/building_applications/idling-applications |
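Building on the idling procedure above, the following is a hedged sketch of one way to generate the service list file and idle every service in a project; the project name my-project and the file name services.txt are assumptions, and you should review the generated list before idling it.

# Write the name of every service in the project to a file, one name per line
oc get services -n my-project -o custom-columns=NAME:.metadata.name --no-headers > services.txt

# Review the list, then idle all of the listed services
cat services.txt
oc idle --resource-names-file services.txt -n my-project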
3.2. Is Your Hardware Compatible? | 3.2. Is Your Hardware Compatible? Hardware compatibility is particularly important if you have an older system or a system that you built yourself. Red Hat Enterprise Linux 6.9 should be compatible with most hardware in systems that were factory built within the last two years. However, hardware specifications change almost daily, so it is difficult to guarantee that your hardware is 100% compatible. One consistent requirement is your processor. Red Hat Enterprise Linux 6.9 supports, at minimum, all 32-bit and 64-bit implementations of Intel microarchitecture from P6 and onwards and AMD microarchitecture from Athlon and onwards. The most recent list of supported hardware can be found at: | [
"https://hardware.redhat.com/"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-is_your_hardware_compatible-x86 |
Preface | Preface The Package manifest document provides a package listing for Red Hat Enterprise Linux 9.0. Capabilities and limits of RHEL 9 as compared to other versions of the system are available in the Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits . Information regarding the RHEL life cycle is provided in the Red Hat Enterprise Linux Life Cycle document. Detailed changes in each minor release of RHEL are documented in the Release notes . Changes to packages between RHEL 8 and RHEL 9, as well as changes between minor releases of RHEL 9, are listed in Considerations in adopting RHEL 9 . Application compatibility levels are explained in the Red Hat Enterprise Linux 9: Application Compatibility Guide document. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/package_manifest/preface |
Chapter 8. Multicloud Object Gateway bucket and bucket class replication | Chapter 8. Multicloud Object Gateway bucket and bucket class replication Data replication between buckets provides higher resiliency and better collaboration options. These buckets can be either data buckets backed by any supported storage solution (S3, Azure, and so on), or namespace buckets (where PV Pool and GCP are not supported). For more information on how to create a backingstore, see Adding storage resources for hybrid or Multicloud using the MCG command line interface . For more information on how to create a namespacestore, see Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML . A bucket replication policy is composed of a list of replication rules. Each rule defines the destination bucket, and can specify a filter based on an object key prefix. Configuring a complementing replication policy on a second bucket results in bi-directional replication. Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface. Important Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Power use the following command: Alternatively, you can install the mcg package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Important Choose the correct Product Variant according to your architecture. Note Certain MCG features are only available in certain MCG versions, and the appropriate MCG CLI tool version must be used to fully utilize MCG's features. To replicate a bucket, see Replicating a bucket to another bucket . To set a bucket class replication policy, see Setting a bucket class replication policy . 8.1. Replicating a bucket to another bucket You can set the bucket replication policy in two ways: Replicating a bucket to another bucket using the MCG command-line interface . Replicating a bucket to another bucket using a YAML . 8.1.1. Replicating a bucket to another bucket using the MCG command-line interface Applications that require a Multicloud Object Gateway (MCG) bucket to have a specific replication policy can create an Object Bucket Claim (OBC) and define the replication policy parameter in a JSON file. Procedure From the MCG command-line interface, run the following command to create an OBC with a specific replication policy: <bucket-claim-name> Specify the name of the bucket claim. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Example 8.1. Example 8.1.2. Replicating a bucket to another bucket using a YAML Applications that require a Multicloud Object Gateway (MCG) data bucket to have a specific replication policy can create an Object Bucket Claim (OBC) and add the spec.additionalConfig.replication-policy parameter to the OBC. Procedure Apply the following YAML: <desired-bucket-claim> Specify the name of the bucket claim. <desired-namespace> Specify the namespace. <desired-bucket-name> Specify the prefix of the bucket name. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . 
"destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Additional information For more information about OBCs, see Object Bucket Claim . 8.2. Setting a bucket class replication policy It is possible to set up a replication policy that automatically applies to all the buckets created under a certain bucket class. You can do this in two ways: Setting a bucket class replication policy using the MCG command-line interface . Setting a bucket class replication policy using a YAML . 8.2.1. Setting a bucket class replication policy using the MCG command-line interface Applications that require a Multicloud Object Gateway (MCG) bucket class to have a specific replication policy can create a bucketclass and define the replication-policy parameter in a JSON file. It is possible to set a bucket class replication policy for two types of bucket classes: Placement Namespace Procedure From the MCG command-line interface, run the following command: <bucketclass-name> Specify the name of the bucket class. <backingstores> Specify the name of a backingstore. It is possible to pass several backingstores separated by commas. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Example 8.2. Example This example creates a placement bucket class with a specific replication policy defined in the JSON file. 8.2.2. Setting a bucket class replication policy using a YAML Applications that require a Multicloud Object Gateway (MCG) bucket class to have a specific replication policy can create a bucket class using the spec.replicationPolicy field. Procedure Apply the following YAML: This YAML is an example that creates a placement bucket class. Each Object bucket claim (OBC) object that is uploaded to the bucket is filtered based on the prefix and is replicated to first.bucket . <desired-app-label> Specify a label for the app. <desired-bucketclass-name> Specify the bucket class name. <desired-namespace> Specify the namespace in which the bucket class gets created. <backingstore> Specify the name of a backingstore. It is possible to pass several backingstores. "rule_id" Specify the ID number of the rule, for example, `{"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . | [
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms",
"yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"noobaa obc create <bucket-claim-name> -n openshift-storage --replication-policy /path/to/json-file.json",
"[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]",
"noobaa obc create my-bucket-claim -n openshift-storage --replication-policy /path/to/json-file.json",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <desired-bucket-claim> namespace: <desired-namespace> spec: generateBucketName: <desired-bucket-name> storageClassName: openshift-storage.noobaa.io additionalConfig: replication-policy: [{ \"rule_id\": \" <rule id> \", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \" <object name prefix> \"}}]",
"noobaa -n openshift-storage bucketclass create placement-bucketclass <bucketclass-name> --backingstores <backingstores> --replication-policy=/path/to/json-file.json",
"[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]",
"noobaa -n openshift-storage bucketclass create placement-bucketclass bc --backingstores azure-blob-ns --replication-policy=/path/to/json-file.json",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: <desired-app-label> name: <desired-bucketclass-name> namespace: <desired-namespace> spec: placementPolicy: tiers: - backingstores: - <backingstore> placement: Spread replicationPolicy: [{ \"rule_id\": \" <rule id> \", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \" <object name prefix> \"}}]"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_hybrid_and_multicloud_resources/multicloud_object_gateway_bucket_and_bucket_class_replication |
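A minimal sketch of the bi-directional replication mentioned in the chapter above, assuming two existing buckets named first.bucket and second.bucket with one claim each; the claim names, rule IDs, and file paths are placeholders, not values from the product documentation:

# to-second.json - replicates objects from first.bucket to second.bucket
[{ "rule_id": "first-to-second", "destination_bucket": "second.bucket", "filter": {"prefix": ""}}]

# to-first.json - the complementing policy, replicating second.bucket back to first.bucket
[{ "rule_id": "second-to-first", "destination_bucket": "first.bucket", "filter": {"prefix": ""}}]

noobaa obc create first-bucket-claim -n openshift-storage --replication-policy to-second.json
noobaa obc create second-bucket-claim -n openshift-storage --replication-policy to-first.json

Each claim's policy names the other bucket as its destination, which is what produces the bi-directional behaviour; with an empty prefix every object key is replicated.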
Chapter 4. Managing build output | Chapter 4. Managing build output Use the following sections for an overview of and instructions for managing build output. 4.1. Build output Builds that use the docker or source-to-image (S2I) strategy result in the creation of a new container image. The image is then pushed to the container image registry specified in the output section of the Build specification. If the output kind is ImageStreamTag , then the image will be pushed to the integrated OpenShift image registry and tagged in the specified imagestream. If the output is of type DockerImage , then the name of the output reference will be used as a docker push specification. The specification may contain a registry or will default to DockerHub if no registry is specified. If the output section of the build specification is empty, then the image will not be pushed at the end of the build. Output to an ImageStreamTag spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest" Output to a docker Push Specification spec: output: to: kind: "DockerImage" name: "my-registry.mycompany.com:5000/myimages/myimage:tag" 4.2. Output image environment variables docker and source-to-image (S2I) strategy builds set the following environment variables on output images: Variable Description OPENSHIFT_BUILD_NAME Name of the build OPENSHIFT_BUILD_NAMESPACE Namespace of the build OPENSHIFT_BUILD_SOURCE The source URL of the build OPENSHIFT_BUILD_REFERENCE The Git reference used in the build OPENSHIFT_BUILD_COMMIT Source commit used in the build Additionally, any user-defined environment variable, for example those configured with S2I or docker strategy options, will also be part of the output image environment variable list. 4.3. Output image labels docker and source-to-image (S2I) builds set the following labels on output images: Label Description io.openshift.build.commit.author Author of the source commit used in the build io.openshift.build.commit.date Date of the source commit used in the build io.openshift.build.commit.id Hash of the source commit used in the build io.openshift.build.commit.message Message of the source commit used in the build io.openshift.build.commit.ref Branch or reference specified in the source io.openshift.build.source-location Source URL for the build You can also use the BuildConfig.spec.output.imageLabels field to specify a list of custom labels that will be applied to each image built from the build configuration. Custom labels for built images spec: output: to: kind: "ImageStreamTag" name: "my-image:latest" imageLabels: - name: "vendor" value: "MyCompany" - name: "authoritative-source-url" value: "registry.mycompany.com" | [
"spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\"",
"spec: output: to: kind: \"DockerImage\" name: \"my-registry.mycompany.com:5000/myimages/myimage:tag\"",
"spec: output: to: kind: \"ImageStreamTag\" name: \"my-image:latest\" imageLabels: - name: \"vendor\" value: \"MyCompany\" - name: \"authoritative-source-url\" value: \"registry.mycompany.com\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/builds_using_buildconfig/managing-build-output |
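To verify the variables and labels described above after a build completes, they can be read back from a running pod or from the image stream; the deployment and project names below are placeholders, not values from this document:

oc rsh deployment/myapp env | grep OPENSHIFT_BUILD_
oc describe istag sample-image:latest -n myproject

The first command lists the build-related environment variables baked into the output image, and the second shows the image metadata recorded for the tag, where labels applied through BuildConfig.spec.output.imageLabels can be checked.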
Chapter 42. limits | Chapter 42. limits This chapter describes the commands under the limits command. 42.1. limits show Show compute and block storage limits Usage: Table 42.1. Command arguments Value Summary -h, --help Show this help message and exit --absolute Show absolute limits --rate Show rate limits --reserved Include reservations count [only valid with --absolute] --project <project> Show limits for a specific project (name or id) [only valid with --absolute] --domain <domain> Domain the project belongs to (name or id) [only valid with --absolute] Table 42.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 42.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 42.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 42.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack limits show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] (--absolute | --rate) [--reserved] [--project <project>] [--domain <domain>]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/limits |
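For example, to show the absolute limits of a single project as JSON, the reserved counts, or the rate limits using the options documented above (the project name demo is a placeholder):

openstack limits show --absolute --project demo -f json
openstack limits show --absolute --reserved
openstack limits show --rate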
Chapter 17. Authentication and Interoperability | Chapter 17. Authentication and Interoperability SSSD fails to manage sudo rules from the IdM LDAP tree The System Security Services Daemon (SSSD) currently uses the IdM LDAP tree by default. As a consequence, it is not possible to assign sudo rules to non-POSIX groups. To work around this problem, modify the /etc/sssd/sssd.conf file to set your domain to use the compat tree again: As a result, SSSD will load sudo rules from the compat tree and you will be able to assign rules to non-POSIX groups. Note that Red Hat recommends to configure groups referenced in sudo rules as POSIX groups. (BZ#1336548) winbindd crashes when installing a new AD trust When configuring a new Active Directory (AD) trust on a newly installed system, the ipa-adtrust-install utility might report that the winbindd service terminated unexpectedly. Otherwise, ipa-adtrust-install completes successfully. If this problem occurs, restart the IdM services by using the ipactl restart command after running ipa-adtrust-install . This also restarts winbindd . Note that the full extent of the functional impact of this problem is still unknown. Some trust functionality might not work until winbindd is restarted. (BZ# 1399058 ) nslcd fails to resolve user or group identities when it is started before the network connection is fully up When nslcd , the local LDAP name service daemon, is started before the network connection is fully up, the daemon fails to connect to an LDAP server. As a consequence, resolving user or group identities does not work. To work around this problem, start nslcd after the network connection is up. (BZ# 1401632 ) | [
"[domain/EXAMPLE] ldap_sudo_search_base = ou=sudoers,dc=example,dc=com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/known_issues_authentication_and_interoperability |
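A short sketch of applying the workarounds above with the RHEL 6 init scripts; these are illustrative commands, not additional steps from the release notes:

# reload sudo rules after pointing ldap_sudo_search_base at the compat tree
service sssd restart

# restart IdM services (which also restarts winbindd) after ipa-adtrust-install
ipactl restart

# start nslcd only after the network connection is fully up
service nslcd restart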
6.4. Additional Resources | 6.4. Additional Resources For more information on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see the resources listed below. Installed Documentation subscription-manager (8) - the manual page for Red Hat Subscription Management provides a complete list of supported options and commands. Related Books Red Hat Subscription Management collection of guides - These guides contain detailed information how to use Red Hat Subscription Management. Installation Guide - see the Firstboot chapter for detailed information on how to register during the firstboot process. Online Resources Red Hat Access Labs - The Red Hat Access Labs includes a " Registration Assistant " . See Also Chapter 4, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands. Chapter 8, Yum provides information about using the yum packages manager to install and update software. Chapter 9, PackageKit provides information about using the PackageKit package manager to install and update software. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-subscription_and_support-registering_a_system_and_managing_subscriptions-references |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_ipv6_networking_for_the_overcloud/proc_providing-feedback-on-red-hat-documentation
7.63. gdb | 7.63. gdb 7.63.1. RHSA-2013:0522 - Moderate: gdb security and bug fix update Updated gdb packages that fix one security issue and three bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The GNU Debugger (GDB) allows debugging of programs written in C, C++, Java, and other languages by executing them in a controlled fashion and then printing out their data. Security Fix CVE-2011-4355 GDB tried to auto-load certain files (such as GDB scripts, Python scripts, and a thread debugging library) from the current working directory when debugging programs. This could result in the execution of arbitrary code with the user's privileges when GDB was run in a directory that has untrusted content. Note With this update, GDB no longer auto-loads files from the current directory and only trusts certain system directories by default. The list of trusted directories can be viewed and modified using the "show auto-load safe-path" and "set auto-load safe-path" GDB commands. Refer to the GDB manual for further information: http://sourceware.org/gdb/current/onlinedocs/gdb/Auto_002dloading-safe-path.html#Auto_002dloading-safe-path http://sourceware.org/gdb/current/onlinedocs/gdb/Auto_002dloading.html#Auto_002dloading Bug Fixes BZ#795424 When a struct member was at an offset greater than 256 MB, the resulting bit position within the struct overflowed and caused an invalid memory access by GDB. With this update, the code has been modified to ensure that GDB can access such positions. BZ# 811648 When a thread list of the core file became corrupted, GDB did not print this list but displayed the "Cannot find new threads: generic error" error message instead. With this update, GDB has been modified and it now prints the thread list of the core file as expected. BZ# 836966 GDB did not properly handle debugging of multiple binaries with the same build ID. This update modifies GDB to use symbolic links created for particular binaries so that debugging of binaries that share a build ID now proceeds as expected. Debugging of live programs and core files is now more user-friendly. All users of gdb are advised to upgrade to these updated packages, which contain backported patches to correct these issues. 7.63.2. RHBA-2013:0811 - gdb bug fix update Updated gdb packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The GNU Debugger (GDB) allows debugging of programs written in C, C++, Java, and other languages by executing them in a controlled fashion and then printing out their data. Bug Fixes BZ# 952090 When users tried to execute the "maintenance set python print-stack" command, gdb did not recognize it and issued an error stating the command was undefined. With this update, gdb now correctly recognizes and executes the command. BZ# 952100 When debugging a C++ program which declared a local static variable inside a class, gdb was unable to locate the local static variable. This caused problems when debugging some issues that required examining these kinds of variables. With this update, gdb now correctly identifies that the variable exists, and the debugging process functions normally. 
BZ# 954300 Previously, users experienced an internal error in the debugger when using a Thread Local Storage (TLS) modifier in a static variable declared inside a class on a C++ program, and asking gdb to print its value. This caused the debugging session to be compromised. With this update, gdb is now able to correctly deal with a static variable declared as a TLS inside a class and errors no longer occur in the described scenario. Users of gdb are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/gdb |
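To illustrate the auto-load behaviour introduced by the security fix above, the trusted directories can be inspected and extended from within GDB; the project path is a placeholder:

(gdb) show auto-load safe-path
(gdb) set auto-load safe-path /usr/share/gdb/auto-load:/home/user/trusted-project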
4.3. Converting a virtual machine | 4.3. Converting a virtual machine virt-v2v converts virtual machines from a foreign hypervisor to run on Red Hat Enterprise Virtualization. It automatically packages the virtual machine images and metadata, then uploads them to a Red Hat Enterprise Virtualization export storage domain. For more information on export storage domains, see Section 4.2, "Attaching an export storage domain" . virt-v2v always makes a copy of storage before conversion. Figure 4.2. Converting a virtual machine From the export storage domain, the virtual machine images can be imported into Red Hat Enterprise Virtualization using the Administration Portal. Figure 4.3. Importing a virtual machine 4.3.1. Preparing to convert a virtual machine Before a virtual machine can be converted, ensure that the following steps are completed: Procedure 4.2. Preparing to convert a virtual machine Create an NFS export domain. virt-v2v can transfer the converted virtual machine directly to an NFS export storage domain. From the export storage domain, the virtual machine can be imported into a Red Hat Enterprise Virtualization data center. The storage domain must be mountable by the machine running virt-v2v . When exporting to a Red Hat Enterprise Virtualization export domain, virt-v2v must run as root. Note The export storage domain is accessed as an NFS share. By default, Red Hat Enterprise Linux 6 uses NFSv4, which does not require further configuration. However, for NFSv2 and NFSv3 clients, the rpcbind and nfslock services must be running on the host used to run virt-v2v . The network must also be configured to allow NFS access to the storage server. For more details refer to the Red Hat Enterprise Linux Storage Administration Guide . Specify network mappings in virt-v2v.conf . This step is optional , and is not required for most use cases. If your virtual machine has multiple network interfaces, /etc/virt-v2v.conf must be edited to specify the network mapping for all interfaces. You can specify an alternative virt-v2v.conf file with the -f parameter. If you are converting to a virtual machine for output to both libvirt and Red Hat Enterprise Virtualization, separate virt-v2v.conf files should be used for each conversion. This is because a converted bridge will require different configuration depending on the output type (libvirt or Red Hat Enterprise Virtualization). If your virtual machine only has a single network interface, it is simpler to use the --network or --bridge parameters, rather than modifying virt-v2v.conf . Create a profile for the conversion in virt-v2v.conf . This step is optional . Profiles specify a conversion method, storage location, output format and allocation policy. When a profile is defined, it can be called using --profile rather than individually providing the -o , -os , -of and -oa parameters. See virt-v2v.conf (5) for details. 4.3.1.1. Preparing to convert a virtual machine running Linux The following is required when converting virtual machines which run Linux, regardless of which hypervisor they are being converted from. Procedure 4.3. Preparing to convert a virtual machine running Linux Obtain the software. As part of the conversion process, virt-v2v may install a new kernel and drivers on the virtual machine. If the virtual machine being converted is registered to Red Hat Subscription Management (RHSM), the required packages will be automatically downloaded. 
For environments where Red Hat Subscription Management (RHSM) is not available, the virt-v2v.conf file references a list of RPMs used for this purpose. The RPMs relevant to your virtual machine must be downloaded manually from the Red Hat Customer Portal and made available in the directory specified by the path-root configuration element, which by default is /var/lib/virt-v2v/software/ . virt-v2v will display an error similar to Example 3.1, "Missing Package error" if the software it depends upon for a particular conversion is not available. To obtain the relevant RPMs for your environment, repeat these steps for each missing package: Log in to the Red Hat Customer Portal: https://access.redhat.com/ . In the Red Hat Customer Portal, select Downloads > Product Downloads > Red Hat Enterprise Linux . Select the desired Product Variant , Version , and select the Packages tab. In the Filter field, type the package name exactly matching the one shown in the error message. For the example shown in Example 3.1, "Missing Package error" , the first package is kernel-2.6.32-128.el6.x86_64 A list of packages displays. Select the package name identical to the one in the error message. This opens the details page, which contains a detailed description of the package. Alternatively, to download the most recent version of a package, select Download Latest to the desired package. Save the downloaded package to the appropriate directory in /var/lib/virt-v2v/software . For Red Hat Enterprise Linux 6, the directory is /var/lib/virt-v2v/software/rhel/6 . 4.3.1.2. Preparing to convert a virtual machine running Windows Important virt-v2v does not support conversion of the Windows Recovery Console. If a virtual machine has a recovery console installed and VirtIO was enabled during conversion, attempting to boot the recovery console will result in a stop error. Windows XP x86 does not support the Windows Recovery Console on VirtIO systems, so there is no resolution to this. However, on Windows XP AMD64 and Windows 2003 (x86 and AMD64), the recovery console can be reinstalled after conversion. The re-installation procedure is the same as the initial installation procedure. It is not necessary to remove the recovery console first. Following re-installation, the recovery console will work as intended. Important When converting a virtual machine running Windows with multiple drives, for output to Red Hat Enterprise Virtualization, it is possible in certain circumstances that additional drives will not be displayed by default. Red Hat Enterprise Virtualization will always add a CD-ROM device to a converted virtual machine. If the virtual machine did not have a CD-ROM device before conversion, the new CD-ROM device may be assigned a drive letter which clashes with an existing drive on the virtual machine. This will render the existing device inaccessible. The occluded disk device can still be accessed by manually assigning it a new drive letter. It is also possible to maintain drive letter assignment by manually changing the drive letter assigned to the new CD-ROM device, then rebooting the virtual machine. The following is required when converting virtual machines running Windows, regardless of which hypervisor they are being converted from. The conversion procedure depends on post-processing by the Red Hat Enterprise Virtualization Manager for completion. See Section 7.2.2, "Configuration changes for Windows virtual machines" for details of the process. Procedure 4.4. 
Preparing to convert a virtual machine running Windows Before a virtual machine running Windows can be converted, ensure that the following steps are completed. Install the libguestfs-winsupport package on the host running virt-v2v . This package provides support for NTFS, which is used by many Windows systems. The libguestfs-winsupport package is provided by the RHEL V2VWIN (v. 6 for 64-bit x86_64) channel. Ensure your system is subscribed to this channel, then run the following command as root: If you attempt to convert a virtual machine using NTFS without the libguestfs-winsupport package installed, the conversion will fail. An error message similar to Example 4.1, "Error message when converting a Windows virtual machine without libguestfs-winsupport installed" will be shown: Example 4.1. Error message when converting a Windows virtual machine without libguestfs-winsupport installed Install the virtio-win package on the host running virt-v2v . This package provides paravirtualized block and network drivers for Windows guests. The virtio-win package is provided by the RHEL Server Supplementary (v. 6 64-bit x86_64) channel. Ensure your system is subscribed to this channel, then run the following command as root: If you attempt to convert a virtual machine running Windows without the virtio-win package installed, the conversion will fail. An error message similar to Example 3.3, "Error message when converting a Windows virtual machine without virtio-win installed" will be shown. Upload the guest tools ISO to the ISO Storage Domain. Note that the guest tools ISO is not required for the conversion process to succeed. However, it is recommended for all Windows virtual machines running on Red Hat Enterprise Virtualization. The Red Hat Enterprise Virtualization Manager installs Red Hat's Windows drivers on the guest virtual machine using the guest tools ISO after the virtual machines have been converted. See Section 7.2.2, "Configuration changes for Windows virtual machines" for details. Locate and upload the guest tools ISO as follows: Locate the guest tools ISO. The guest tools ISO is distributed using the Red Hat Customer Portal as rhev-guest-tools-iso.rpm , an RPM file installed on the Red Hat Enterprise Virtualization Manager. After installing the Red Hat Enterprise Virtualization Manager, the guest tools ISO can be found at /usr/share/rhev-guest-tools-iso/rhev-tools-setup.iso . Upload the guest tools ISO. Upload the guest tools ISO to the ISO Storage Domain using the ISO uploader. Refer to the Red Hat Enterprise Virtualization Administration Guide for more information on uploading ISO files, and installing guest agents and drivers. 4.3.1.3. Preparing to convert a local Xen virtual machine The following is required when converting virtual machines on a host which used to run Xen, but has been updated to run KVM. It is not required when converting a Xen virtual machine imported directly from a running libvirt/Xen instance. Procedure 4.5. Preparing to convert a local Xen virtual machine Obtain the XML for the virtual machine. virt-v2v uses a libvirt domain description to determine the current configuration of the virtual machine, including the location of its storage. Before starting the conversion, obtain this from the host running the virtual machine with the following command: This will require booting into a Xen kernel to obtain the XML, as libvirt needs to connect to a running Xen hypervisor to obtain its metadata. 
The conversion process is optimized for KVM, so obtaining domain data while running a Xen kernel, then performing the conversion using a KVM kernel will be more efficient than running the conversion on a Xen kernel. | [
"install libguestfs-winsupport",
"No operating system could be detected inside this disk image. This may be because the file is not a disk image, or is not a virtual machine image, or because the OS type is not understood by virt-inspector. If you feel this is an error, please file a bug report including as much information about the disk image as possible.",
"install virtio-win",
"virsh dumpxml guest_name > guest_name.xml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-rhev_converting_a_virtual_machine |
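With the preparation steps above completed, a conversion of the local Xen guest from Procedure 4.5 might look like the following sketch; the export storage domain path and the rhevm network name are assumptions, not values from this guide:

virt-v2v -i libvirtxml -o rhev -os storage.example.com:/exports/export_domain \
    --network rhevm guest_name.xml

The -os value is the NFS export storage domain prepared in Procedure 4.2, and --network maps the guest's single network interface; guests with multiple interfaces need the mappings in virt-v2v.conf instead.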
Evaluating AMQ Streams on OpenShift | Evaluating AMQ Streams on OpenShift Red Hat AMQ 2020.Q4 For use with AMQ Streams 1.6 on OpenShift Container Platform | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/evaluating_amq_streams_on_openshift/index |
A.13. Workaround for Creating External Snapshots with libvirt | A.13. Workaround for Creating External Snapshots with libvirt There are two classes of snapshots for KVM guests: Internal snapshots are contained completely within a qcow2 file, and fully supported by libvirt , allowing for creating, deleting, and reverting of snapshots. This is the default setting used by libvirt when creating a snapshot, especially when no option is specified. This file type takes slightly longer than others for creating the snapshot, and has the drawback of requiring qcow2 disks. Important Internal snapshots are not being actively developed, and Red Hat discourages their use. External snapshots work with any type of original disk image, can be taken with no guest downtime, and are more stable and reliable. As such, external snapshots are recommended for use on KVM guest virtual machines. However, external snapshots are currently not fully implemented on Red Hat Enterprise Linux 7, and are not available when using virt-manager . To create an external snapshot, use the snapshot-create-as command with the --diskspec vda,snapshot=external option, or use the following disk line in the snapshot XML file: At the moment, external snapshots are a one-way operation as libvirt can create them but cannot do anything further with them. A workaround is described on libvirt upstream pages . | [
"<disk name='vda' snapshot='external'> <source file='/path/to,new'/> </disk>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-workaround_for_creating_external_snapshots_with_libvirt |
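A fuller form of the snapshot-create-as invocation referred to above might look like this; the guest name and image path are placeholders:

virsh snapshot-create-as guest01 snap1 "disk-only external snapshot" \
    --disk-only --atomic \
    --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/guest01-snap1.qcow2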
Chapter 17. Managing Subsystem Certificates | Chapter 17. Managing Subsystem Certificates This chapter gives an overview of using certificates: what types and formats are available, how to request and create them through the HTML end-entity forms and through the Certificate System Console, and how to install certificates in the Certificate System and on different clients. Additionally, there is information on managing certificates through the Console and configuring the servers to use them. 17.1. Required Subsystem Certificates Each subsystem has a defined set of certificates which must be issued to the subsystem instance for it to perform its operations. There are certain details of the certificate contents that are set during the Certificate Manager configuration, with different considerations for constraints, settings, and attributes depending on the types of certificates; planning the formats of certificates is covered in the Red Hat Certificate System Planning, Installation, and Deployment Guide . 17.1.1. Certificate Manager Certificates When a Certificate Manager is installed, the keys and requests for the CA signing certificate, SSL server certificate, and OCSP signing certificate are generated. The certificates are created before the configuration can be completed. The CA certificate request is either submitted as a self-signing request to the CA, which then issues the certificate and finishes creating the self-signed root CA, or is submitted to a third-party public CA or another Certificate System CA. When the external CA returns the certificate, the certificate is installed, and installation of the subordinate CA is completed. Section 17.1.1.1, "CA Signing Key Pair and Certificate" Section 17.1.1.2, "OCSP Signing Key Pair and Certificate" Section 17.1.1.3, "Subsystem Certificate" Section 17.1.1.4, "SSL Server Key Pair and Certificate" Section 17.1.1.5, "Audit Log Signing Key Pair and Certificate" 17.1.1.1. CA Signing Key Pair and Certificate Every Certificate Manager has a CA signing certificate with a public key corresponding to the private key the Certificate Manager uses to sign the certificates and CRLs it issues. This certificate is created and installed when the Certificate Manager is installed. The default nickname for the certificate is caSigningCert cert- instance_ID CA , where instance_ID identifies the Certificate Manager instance. The default validity period for the certificate is five years. The subject name of the CA signing certificate reflects the name of the CA that was set during installation. All certificates signed or issued by the Certificate Manager include this name to identify the issuer of the certificate. The Certificate Manager's status as a root or subordinate CA is determined by whether its CA signing certificate is self-signed or is signed by another CA, which affects the subject name on the certificates. If the Certificate Manager is a root CA, its CA signing certificate is self-signed, meaning the subject name and issuer name of the certificate are the same. If the Certificate Manager is a subordinate CA, its CA signing certificate is signed by another CA, usually the one that is a level above in the CA hierarchy (which may or may not be a root CA). The root CA's signing certificate must be imported into individual clients and servers before the Certificate Manager can be used to issue certificates to them. Note The CA name cannot be changed or all previously-issued certificates are invalidated. 
Similarly, reissuing a CA signing certificate with a new key pair invalidates all certificates that were signed by the old key pair. 17.1.1.2. OCSP Signing Key Pair and Certificate The subject name of the OCSP signing certificate is in the form cn=OCSP cert- instance_ID CA , and it contains extensions, such as OCSPSigning and OCSPNoCheck , required for signing OCSP responses. The default nickname for the OCSP signing certificate is ocspSigningCert cert- instance_ID , where instance_ID CA identifies the Certificate Manager instance. The OCSP private key, corresponding to the OCSP signing certificate's public key, is used by the Certificate Manager to sign the OCSP responses to the OCSP-compliant clients when queried about certificate revocation status. 17.1.1.3. Subsystem Certificate Every member of the security domain is issued a server certificate to use for communications among other domain members, which is separate from the server SSL certificate. This certificate is signed by the security domain CA; for the security domain CA itself, its subsystem certificate is signed by itself. The default nickname for the certificate is subsystemCert cert- instance_ID . 17.1.1.4. SSL Server Key Pair and Certificate Every Certificate Manager has at least one SSL server certificate that was first generated when the Certificate Manager was installed. The default nickname for the certificate is Server-Cert cert- instance_ID , where instance_ID identifies the Certificate Manager instance. By default, the Certificate Manager uses a single SSL server certificate for authentication. However, additional server certificates can be requested to use for different operations, such as configuring the Certificate Manager to use separate server certificates for authenticating to the end-entity services interface and agent services interface. If the Certificate Manager is configured for SSL-enabled communication with a publishing directory, it uses its SSL server certificate for client authentication to the publishing directory by default. The Certificate Manager can also be configured to use a different certificate for SSL client authentication. 17.1.1.5. Audit Log Signing Key Pair and Certificate The CA keeps a secure audit log of all events which occurred on the server. To guarantee that the audit log has not been tampered with, the log file is signed by a special log signing certificate. The audit log signing certificate is issued when the server is first configured. Note While other certificates can use ECC keys, the audit signing certificate must always use an RSA key. 17.1.2. Online Certificate Status Manager Certificates When the Online Certificate Status Manager is first configured, the keys for all required certificates are created, and the certificate requests for the OCSP signing, SSL server, audit log signing, and subsystem certificates are made. These certificate requests are submitted to a CA (either a Certificate System CA or a third-party CA) and must be installed in the Online Certificate Status Manager database to complete the configuration process. Section 17.1.2.2, "SSL Server Key Pair and Certificate" Section 17.1.2.3, "Subsystem Certificate" Section 17.1.2.4, "Audit Log Signing Key Pair and Certificate" Section 17.1.2.5, "Recognizing Online Certificate Status Manager Certificates" 17.1.2.1. 
OCSP Signing Key Pair and Certificate Every Online Certificate Status Manager has a certificate, the OCSP signing certificate, which has a public key corresponding to the private key the Online Certificate Status Manager uses to sign OCSP responses. The Online Certificate Status Manager's signature provides persistent proof that the Online Certificate Status Manager has processed the request. This certificate is generated when the Online Certificate Status Manager is configured. The default nickname for the certificate is ocspSigningCert cert- instance_ID , where instance_ID OSCP is the Online Certificate Status Manager instance name. 17.1.2.2. SSL Server Key Pair and Certificate Every Online Certificate Status Manager has at least one SSL server certificate which was generated when the Online Certificate Status Manager was configured. The default nickname for the certificate is Server-Cert cert- instance_ID , where instance_ID identifies the Online Certificate Status Manager instance name. The Online Certificate Status Manager uses its server certificate for server-side authentication for the Online Certificate Status Manager agent services page. The Online Certificate Status Manager uses a single server certificate for authentication purposes. Additional server certificates can be installed and used for different purposes. 17.1.2.3. Subsystem Certificate Every member of the security domain is issued a server certificate to use for communications among other domain members, which is separate from the server SSL certificate. This certificate is signed by the security domain CA. The default nickname for the certificate is subsystemCert cert- instance_ID . 17.1.2.4. Audit Log Signing Key Pair and Certificate The OCSP keeps a secure audit log of all events which occurred on the server. To guarantee that the audit log has not been tampered with, the log file is signed by a special log signing certificate. The audit log signing certificate is issued when the server is first configured. Note While other certificates can use ECC keys, the audit signing certificate must always use an RSA key. 17.1.2.5. Recognizing Online Certificate Status Manager Certificates Depending on the CA which signed the Online Certificate Status Manager's SSL server certificate, it may be necessary to get the certificate and issuing CA recognized by the Certificate Manager. If the Online Certificate Status Manager's server certificate is signed by the CA that is publishing CRLs, then nothing needs to be done. If the Online Certificate Status Manager's server certificate is signed by the same root CA that signed the subordinate Certificate Manager's certificates, then the root CA must be marked as a trusted CA in the subordinate Certificate Manager's certificate database. If the Online Certificate Status Manager's SSL server certificate is signed by a different root CA, then the root CA certificate must be imported into the subordinate Certificate Manager's certificate database and marked as a trusted CA. If the Online Certificate Status Manager's server certificate is signed by a CA within the selected security domain, the certificate chain is imported and marked when the Online Certificate Status Manager is configured. No other configuration is required. However, if the server certificate is signed by an external CA, the certificate chain has to be imported for the configuration to be completed. Note Not every CA within the security domain is automatically trusted by the OCSP Manager when it is configured. 
Every CA in the certificate chain of the CA configured in the CA panel is, however, trusted automatically by the OCSP Manager. Other CAs within the security domain but not in the certificate chain must be added manually. 17.1.3. Key Recovery Authority Certificates The KRA uses the following key pairs and certificates: Section 17.1.3.1, "Transport Key Pair and Certificate" Section 17.1.3.2, "Storage Key Pair" Section 17.1.3.3, "SSL Server Certificate" Section 17.1.3.4, "Subsystem Certificate" Section 17.1.3.5, "Audit Log Signing Key Pair and Certificate" 17.1.3.1. Transport Key Pair and Certificate Every KRA has a transport certificate. The public key of the key pair that is used to generate the transport certificate is used by the client software to encrypt an end entity's private encryption key before it is sent to the KRA for archival; only those clients capable of generating dual-key pairs use the transport certificate. 17.1.3.2. Storage Key Pair Every KRA has a storage key pair. The KRA uses the public component of this key pair to encrypt (or wrap) private encryption keys when archiving the keys. It uses the private component to decrypt (or unwrap) the archived key during recovery. For more information on how this key pair is used, see Chapter 4, Setting up Key Archival and Recovery . Keys encrypted with the storage key can be retrieved only by authorized key recovery agents. 17.1.3.3. SSL Server Certificate Every Certificate System KRA has at least one SSL server certificate. The first SSL server certificate is generated when the KRA is configured. The default nickname for the certificate is Server-Cert cert- instance_ID , where instance_id identifies the KRA instance is installed. The KRA's SSL server certificate was issued by the CA to which the certificate request was submitted, which can be a Certificate System CA or a third-party CA. To view the issuer name, open the certificate details in the System Keys and Certificates option in the KRA Console. The KRA uses its SSL server certificate for server-side authentication to the KRA agent services interface. By default, the Key Recovery Authority uses a single SSL server certificate for authentication. However, additional SSL server certificates can be requested and installed for the KRA. 17.1.3.4. Subsystem Certificate Every member of the security domain is issued a server certificate to use for communications among other domain members, which is separate from the server SSL certificate. This certificate is signed by the security domain CA. The default nickname for the certificate is subsystemCert cert- instance_ID . 17.1.3.5. Audit Log Signing Key Pair and Certificate The KRA keeps a secure audit log of all events which occurred on the server. To guarantee that the audit log has not been tampered with, the log file is signed by a special log signing certificate. The audit log signing certificate is issued when the server is first configured. Note While other certificates can use ECC keys, the audit signing certificate must always use an RSA key. 17.1.4. TKS Certificates The TKS has three certificates. The SSL server and subsystem certificates are used for standard operations. An additional signing certificate is used to protect audit logs. Section 17.1.4.1, "SSL Server Certificate" Section 17.1.4.2, "Subsystem Certificate" Section 17.1.4.3, "Audit Log Signing Key Pair and Certificate" 17.1.4.1. SSL Server Certificate Every Certificate System TKS has at least one SSL server certificate. 
The first SSL server certificate is generated when the TKS is configured. The default nickname for the certificate is Server-Cert cert- instance_ID . 17.1.4.2. Subsystem Certificate Every member of the security domain is issued a server certificate to use for communications among other domain members, which is separate from the server SSL certificate. This certificate is signed by the security domain CA. The default nickname for the certificate is subsystemCert cert- instance_ID . 17.1.4.3. Audit Log Signing Key Pair and Certificate The TKS keeps a secure audit log of all events which occurred on the server. To guarantee that the audit log has not been tampered with, the log file is signed by a special log signing certificate. The audit log signing certificate is issued when the server is first configured. Note While other certificates can use ECC keys, the audit signing certificate must always use an RSA key. 17.1.5. TPS Certificates The TPS only uses three certificates: a server certificate, subsystem certificate, and audit log signing certificate. Section 17.1.5.1, "SSL Server Certificate" Section 17.1.5.2, "Subsystem Certificate" Section 17.1.5.3, "Audit Log Signing Key Pair and Certificate" 17.1.5.1. SSL Server Certificate Every Certificate System TPS has at least one SSL server certificate. The first SSL server certificate is generated when the TPS is configured. The default nickname for the certificate is Server-Cert cert- instance_ID . 17.1.5.2. Subsystem Certificate Every member of the security domain is issued a server certificate to use for communications among other domain members, which is separate from the server SSL certificate. This certificate is signed by the security domain CA. The default nickname for the certificate is subsystemCert cert- instance_ID . 17.1.5.3. Audit Log Signing Key Pair and Certificate The TPS keeps a secure audit log of all events which occurred on the server. To guarantee that the audit log has not been tampered with, the log file is signed by a special log signing certificate. The audit log signing certificate is issued when the server is first configured. 17.1.6. About Subsystem Certificate Key Types When you create a new instance, you can specify the key type and key size in the configuration file passed to the pkispawn utility. Example 17.1. Key Type-related Configuration Parameters for a CA The following are key type-related parameters including example values. You can set these parameters in the configuration file which you pass to pkispawn when creating a new CA. Note The values in the example are for a CA. Other subsystems require different parameters. For further details, see: The Understanding the pkispawn Utility section in the Red Hat Certificate System Planning, Installation, and Deployment Guide . The pki_default.cfg (5) man page for descriptions of the parameters and examples. 17.1.7. Using an HSM to Store Subsystem Certificates By default, keys and certificates are stored in locally-managed databases, key4.db and cert9.db , respectively, in the /var/lib/pki/ instance_name /alias directory. However, Red Hat Certificate System also supports hardware security modules (HSM), external devices which can store keys and certificates in a centralized place on the network. Using an HSM can make some functions, like cloning, easier because the keys and certificates for the instance are readily accessible. 
When an HSM is used to store certificates, then the HSM name is prepended to the certificate nickname, and the full name is used in the subsystem configuration, such as the server.xml file. For example: Note A single HSM can be used to store certificates and keys for mulitple subsystem instances, which may be installed on multiple hosts. When an HSM is used, any certificate nickname for a subsystem must be unique for every subsystem instance managed on the HSM. Certificate System supports two types of HSM, nCipher netHSM and Chrysalis LunaSA. | [
"pki_ocsp_signing_key_algorithm= SHA256withRSA pki_ocsp_signing_key_size= 2048 pki_ocsp_signing_key_type= rsa pki_ca_signing_key_algorithm= SHA256withRSA pki_ca_signing_key_size= 2048 pki_ca_signing_key_type= rsa pki_sslserver_key_algorithm= SHA256withRSA pki_sslserver_key_size= 2048 pki_sslserver_key_type= rsa pki_subsystem_key_algorithm= SHA256withRSA pki_subsystem_key_size= 2048 pki_subsystem_key_type= rsa pki_admin_keysize= 2048 pki_admin_key_size= 2048 pki_admin_key_type= rsa pki_audit_signing_key_algorithm= SHA256withRSA pki_audit_signing_key_size= 2048 pki_audit_signing_key_type= rsa",
"serverCert=\"nethsm:Server-Cert cert- instance_ID"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/certificatedatabase |
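As a sketch of how the key-type parameters in Example 17.1 are consumed, they are placed in a pkispawn configuration file and passed with -f; the file name is a placeholder, and listing the certificates held on an HSM token named nethsm is shown only as an illustration:

pkispawn -s CA -f /root/ca.cfg
certutil -L -d /var/lib/pki/instance_name/alias -h nethsm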
function::euid | function::euid Name function::euid - Return the effective uid of a target process Synopsis Arguments None Description Returns the effective user ID of the target process. | [
"euid:long()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-euid |
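A minimal SystemTap script using euid(), shown only as an illustration:

probe syscall.open {
  if (euid() == 0)
    printf("%s (pid %d) opened a file with euid 0\n", execname(), pid())
}

Save it as euid-watch.stp and run it with stap -v euid-watch.stp; only processes whose effective UID is 0 are reported.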
Chapter 3. Configuring certificates | Chapter 3. Configuring certificates 3.1. Replacing the default ingress certificate 3.1.1. Understanding the default ingress certificate By default, OpenShift Container Platform uses the Ingress Operator to create an internal CA and issue a wildcard certificate that is valid for applications under the .apps sub-domain. Both the web console and CLI use this certificate as well. The internal infrastructure CA certificates are self-signed. While this process might be perceived as bad practice by some security or PKI teams, any risk here is minimal. The only clients that implicitly trust these certificates are other components within the cluster. Replacing the default wildcard certificate with one that is issued by a public CA already included in the CA bundle as provided by the container userspace allows external clients to connect securely to applications running under the .apps sub-domain. 3.1.2. Replacing the default ingress certificate You can replace the default ingress certificate for all applications under the .apps subdomain. After you replace the certificate, all applications, including the web console and CLI, will have encryption provided by specified certificate. Prerequisites You must have a wildcard certificate for the fully qualified .apps subdomain and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing *.apps.<clustername>.<domain> . The certificate file can contain one or more certificates in a chain. The wildcard certificate must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Copy the root CA certificate into an additional PEM format file. Procedure Create a config map that includes only the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the root CA certificate file on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Create a secret that contains the wildcard certificate chain and key: USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-ingress 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the Ingress Controller configuration with the newly created secret: USD oc patch ingresscontroller.operator default \ --type=merge -p \ '{"spec":{"defaultCertificate": {"name": "<secret>"}}}' \ 1 -n openshift-ingress-operator 1 Replace <secret> with the name used for the secret in the step. Additional resources Replacing the CA Bundle certificate Proxy certificate customization 3.2. Adding API server certificates The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. Clients outside of the cluster will not be able to verify the API server's certificate by default. 
This certificate can be replaced by one that is issued by a CA that clients trust. 3.2.1. Add an API server named certificate The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. You can add one or more alternative certificates that the API server will return based on the fully qualified domain name (FQDN) requested by the client, for example when a reverse proxy or load balancer is used. Prerequisites You must have a certificate for the FQDN and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing the FQDN. The certificate file can contain one or more certificates in a chain. The certificate for the API server FQDN must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Warning Do not provide a named certificate for the internal load balancer (host name api-int.<cluster_name>.<base_domain> ). Doing so will leave your cluster in a degraded state. Procedure Login to the new API as the kubeadmin user. USD oc login -u kubeadmin -p <password> https://FQDN:6443 Get the kubeconfig file. USD oc config view --flatten > kubeconfig-newapi Create a secret that contains the certificate chain and private key in the openshift-config namespace. USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-config 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the API server to reference the created secret. USD oc patch apiserver cluster \ --type=merge -p \ '{"spec":{"servingCerts": {"namedCertificates": [{"names": ["<FQDN>"], 1 "servingCertificate": {"name": "<secret>"}}]}}}' 2 1 Replace <FQDN> with the FQDN that the API server should provide the certificate for. 2 Replace <secret> with the name used for the secret in the step. Examine the apiserver/cluster object and confirm the secret is now referenced. USD oc get apiserver cluster -o yaml Example output ... spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret> ... Check the kube-apiserver operator, and verify that a new revision of the Kubernetes API server rolls out. It may take a minute for the operator to detect the configuration change and trigger a new deployment. While the new revision is rolling out, PROGRESSING will report True . USD oc get clusteroperators kube-apiserver Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.10.0 True False False 145m If PROGRESSING is showing True , wait a few minutes and try again. Note A new revision of the Kubernetes API server only rolls out if the API server named certificate is added for the first time. When the API server named certificate is renewed, a new revision of the Kubernetes API server does not roll out because the kube-apiserver pods dynamically reload the updated certificate. 3.3. Securing service traffic using service serving certificate secrets 3.3.1. 
Understanding service serving certificates Service serving certificates are intended to support complex middleware applications that require encryption. These certificates are issued as TLS web server certificates. The service-ca controller uses the x509.SHA256WithRSA signature algorithm to generate service certificates. The generated certificate and key are in PEM format, stored in tls.crt and tls.key respectively, within a created secret. The certificate and key are automatically replaced when they get close to expiration. The service CA certificate, which issues the service certificates, is valid for 26 months and is automatically rotated when there is less than 13 months validity left. After rotation, the service CA configuration is still trusted until its expiration. This allows a grace period for all affected services to refresh their key material before the expiration. If you do not upgrade your cluster during this grace period, which restarts services and refreshes their key material, you might need to manually restart services to avoid failures after the service CA expires. Note You can use the following command to manually restart all pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done 3.3.2. Add a service certificate To secure communication to your service, generate a signed serving certificate and key pair into a secret in the same namespace as the service. The generated certificate is only valid for the internal service DNS name <service.name>.<service.namespace>.svc , and is only valid for internal communications. If your service is a headless service (no clusterIP value set), the generated certificate also contains a wildcard subject in the format of *.<service.name>.<service.namespace>.svc . Important Because the generated certificates contain wildcard subjects for headless services, you must not use the service CA if your client must differentiate between individual pods. In this case: Generate individual TLS certificates by using a different CA. Do not accept the service CA as a trusted CA for connections that are directed to individual pods and must not be impersonated by other pods. These connections must be configured to trust the CA that was used to generate the individual TLS certificates. Prerequisites: You must have a service defined. Procedure Annotate the service with service.beta.openshift.io/serving-cert-secret-name : USD oc annotate service <service_name> \ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2 1 Replace <service_name> with the name of the service to secure. 2 <secret_name> will be the name of the generated secret containing the certificate and key pair. For convenience, it is recommended that this be the same as <service_name> . For example, use the following command to annotate the service test1 : USD oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1 Examine the service to confirm that the annotations are present: USD oc describe service <service_name> Example output ... Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837 ... 
After the cluster generates a secret for your service, your Pod spec can mount it, and the pod will run after it becomes available. Additional resources You can use a service certificate to configure a secure route using reencrypt TLS termination. For more information, see Creating a re-encrypt route with a custom certificate . 3.3.3. Add the service CA bundle to a config map A pod can access the service CA certificate by mounting a ConfigMap object that is annotated with service.beta.openshift.io/inject-cabundle=true . Once annotated, the cluster automatically injects the service CA certificate into the service-ca.crt key on the config map. Access to this CA certificate allows TLS clients to verify connections to services using service serving certificates. Important After adding this annotation to a config map all existing data in it is deleted. It is recommended to use a separate config map to contain the service-ca.crt , instead of using the same config map that stores your pod configuration. Procedure Annotate the config map with service.beta.openshift.io/inject-cabundle=true : USD oc annotate configmap <config_map_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <config_map_name> with the name of the config map to annotate. Note Explicitly referencing the service-ca.crt key in a volume mount will prevent a pod from starting until the config map has been injected with the CA bundle. This behavior can be overridden by setting the optional field to true for the volume's serving certificate configuration. For example, use the following command to annotate the config map test1 : USD oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true View the config map to ensure that the service CA bundle has been injected: USD oc get configmap <config_map_name> -o yaml The CA bundle is displayed as the value of the service-ca.crt key in the YAML output: apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE----- ... 3.3.4. Add the service CA bundle to an API service You can annotate an APIService object with service.beta.openshift.io/inject-cabundle=true to have its spec.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Procedure Annotate the API service with service.beta.openshift.io/inject-cabundle=true : USD oc annotate apiservice <api_service_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <api_service_name> with the name of the API service to annotate. For example, use the following command to annotate the API service test1 : USD oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true View the API service to ensure that the service CA bundle has been injected: USD oc get apiservice <api_service_name> -o yaml The CA bundle is displayed in the spec.caBundle field in the YAML output: apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: caBundle: <CA_BUNDLE> ... 3.3.5. Add the service CA bundle to a custom resource definition You can annotate a CustomResourceDefinition (CRD) object with service.beta.openshift.io/inject-cabundle=true to have its spec.conversion.webhook.clientConfig.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. 
Note The service CA bundle will only be injected into the CRD if the CRD is configured to use a webhook for conversion. It is only useful to inject the service CA bundle if a CRD's webhook is secured with a service CA certificate. Procedure Annotate the CRD with service.beta.openshift.io/inject-cabundle=true : USD oc annotate crd <crd_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <crd_name> with the name of the CRD to annotate. For example, use the following command to annotate the CRD test1 : USD oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true View the CRD to ensure that the service CA bundle has been injected: USD oc get crd <crd_name> -o yaml The CA bundle is displayed in the spec.conversion.webhook.clientConfig.caBundle field in the YAML output: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE> ... 3.3.6. Add the service CA bundle to a mutating webhook configuration You can annotate a MutatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the mutating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <mutating_webhook_name> with the name of the mutating webhook configuration to annotate. For example, use the following command to annotate the mutating webhook configuration test1 : USD oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the mutating webhook configuration to ensure that the service CA bundle has been injected: USD oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 3.3.7. Add the service CA bundle to a validating webhook configuration You can annotate a ValidatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. 
Procedure Annotate the validating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate validatingwebhookconfigurations <validating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <validating_webhook_name> with the name of the validating webhook configuration to annotate. For example, use the following command to annotate the validating webhook configuration test1 : USD oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the validating webhook configuration to ensure that the service CA bundle has been injected: USD oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 3.3.8. Manually rotate the generated service certificate You can rotate the service certificate by deleting the associated secret. Deleting the secret results in a new one being automatically created, resulting in a new certificate. Prerequisites A secret containing the certificate and key pair must have been generated for the service. Procedure Examine the service to determine the secret containing the certificate. This is found in the serving-cert-secret-name annotation, as seen below. USD oc describe service <service_name> Example output ... service.beta.openshift.io/serving-cert-secret-name: <secret> ... Delete the generated secret for the service. This process will automatically recreate the secret. USD oc delete secret <secret> 1 1 Replace <secret> with the name of the secret from the step. Confirm that the certificate has been recreated by obtaining the new secret and examining the AGE . USD oc get secret <service_name> Example output NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s 3.3.9. Manually rotate the service CA certificate The service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA by using the following procedure. Warning A manually-rotated service CA does not maintain trust with the service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA. Prerequisites You must be logged in as a cluster admin. Procedure View the expiration date of the current service CA certificate by using the following command. USD oc get secrets/signing-key -n openshift-service-ca \ -o template='{{index .data "tls.crt"}}' \ | base64 --decode \ | openssl x509 -noout -enddate Manually rotate the service CA. This process generates a new service CA which will be used to sign the new service certificates. USD oc delete secret/signing-key -n openshift-service-ca To apply the new certificates to all services, restart all the pods in your cluster. This command ensures that all services use the updated certificates. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Warning This command will cause a service interruption, as it goes through and deletes every running pod in every namespace. 
These pods will automatically restart after they are deleted. 3.4. Updating the CA bundle 3.4.1. Understanding the CA Bundle certificate Proxy certificates allow users to specify one or more custom certificate authority (CA) used by platform components when making egress connections. The trustedCA field of the Proxy object is a reference to a config map that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, image-registry-operator calls an external image registry to download images. If trustedCA is not specified, only the RHCOS trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the RHCOS trust bundle if you want to use your own certificate infrastructure. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from required key ca-bundle.crt and copying it to a config map named trusted-ca-bundle in the openshift-config-managed namespace. The namespace for the config map referenced by trustedCA is openshift-config : apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 3.4.2. Replacing the CA Bundle certificate Procedure Create a config map that includes the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the CA certificate bundle on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Additional resources Replacing the default ingress certificate Enabling the cluster-wide proxy Proxy certificate customization | [
"oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config",
"oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'",
"oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-ingress",
"oc patch ingresscontroller.operator default --type=merge -p '{\"spec\":{\"defaultCertificate\": {\"name\": \"<secret>\"}}}' \\ 1 -n openshift-ingress-operator",
"oc login -u kubeadmin -p <password> https://FQDN:6443",
"oc config view --flatten > kubeconfig-newapi",
"oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-config",
"oc patch apiserver cluster --type=merge -p '{\"spec\":{\"servingCerts\": {\"namedCertificates\": [{\"names\": [\"<FQDN>\"], 1 \"servingCertificate\": {\"name\": \"<secret>\"}}]}}}' 2",
"oc get apiserver cluster -o yaml",
"spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret>",
"oc get clusteroperators kube-apiserver",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.10.0 True False False 145m",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"oc annotate service <service_name> \\ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2",
"oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1",
"oc describe service <service_name>",
"Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837",
"oc annotate configmap <config_map_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true",
"oc get configmap <config_map_name> -o yaml",
"apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE-----",
"oc annotate apiservice <api_service_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true",
"oc get apiservice <api_service_name> -o yaml",
"apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: caBundle: <CA_BUNDLE>",
"oc annotate crd <crd_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true",
"oc get crd <crd_name> -o yaml",
"apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE>",
"oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true",
"oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml",
"apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>",
"oc annotate validatingwebhookconfigurations <validating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true",
"oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml",
"apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>",
"oc describe service <service_name>",
"service.beta.openshift.io/serving-cert-secret-name: <secret>",
"oc delete secret <secret> 1",
"oc get secret <service_name>",
"NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s",
"oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data \"tls.crt\"}}' | base64 --decode | openssl x509 -noout -enddate",
"oc delete secret/signing-key -n openshift-service-ca",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----",
"oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config",
"oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/security_and_compliance/configuring-certificates |
Chapter 6. Technology Previews | Chapter 6. Technology Previews Technology Preview features included with Streams for Apache Kafka 2.9. Important Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Technology Preview Features Support Scope . There are no technology previews for Streams for Apache Kafka 2.9 on RHEL. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_rhel/tech-preview-str |
Chapter 5. View OpenShift Data Foundation Topology | Chapter 5. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you to interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the model's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/viewing-odf-topology_rhodf |
Troubleshooting OpenShift Data Foundation | Troubleshooting OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.17 Instructions on troubleshooting OpenShift Data Foundation Red Hat Storage Documentation Team Abstract Read this document for instructions on troubleshooting Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. Overview Troubleshooting OpenShift Data Foundation is written to help administrators understand how to troubleshoot and fix their Red Hat OpenShift Data Foundation cluster. Most troubleshooting tasks focus on either a fix or a workaround. This document is divided into chapters based on the errors that an administrator may encounter: Chapter 2, Downloading log files and diagnostic information using must-gather shows you how to use the must-gather utility in OpenShift Data Foundation. Chapter 4, Commonly required logs for troubleshooting shows you how to obtain commonly required log files for OpenShift Data Foundation. Chapter 7, Troubleshooting alerts and errors in OpenShift Data Foundation shows you how to identify the encountered error and perform required actions. Warning Red Hat does not support running Ceph commands in OpenShift Data Foundation clusters (unless indicated by Red Hat support or Red Hat documentation) as it can cause data loss if you run the wrong commands. In that case, the Red Hat support team is only able to provide commercially reasonable effort and may not be able to restore all the data in case of any data loss. Chapter 2. Downloading log files and diagnostic information using must-gather If Red Hat OpenShift Data Foundation is unable to automatically resolve a problem, use the must-gather tool to collect log files and diagnostic information so that you or Red Hat support can review the problem and determine a solution. Important When Red Hat OpenShift Data Foundation is deployed in external mode, must-gather only collects logs from the OpenShift Data Foundation cluster and does not collect debug data and logs from the external Red Hat Ceph Storage cluster. To collect debug logs from the external Red Hat Ceph Storage cluster, see Red Hat Ceph Storage Troubleshooting guide and contact your Red Hat Ceph Storage Administrator. Prerequisites Optional: If OpenShift Data Foundation is deployed in a disconnected environment, ensure that you mirror the individual must-gather image to the mirror registry available from the disconnected environment. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. <path-to-the-registry-config> Is the path to your registry credentials, by default it is ~/.docker/config.json . --insecure Add this flag only if the mirror registry is insecure. 
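For illustration, a hedged sketch of the mirroring step; treat the image name and tag as assumptions and substitute the must-gather image that matches your OpenShift Data Foundation release:
oc image mirror registry.redhat.io/odf4/odf-must-gather-rhel9:v4.17 <local-registry>/odf4/odf-must-gather-rhel9:v4.17 [--registry-config=<path-to-the-registry-config>] [--insecure=true]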
For more information, see the Red Hat Knowledgebase solutions: How to mirror images between Redhat Openshift registries Failed to mirror OpenShift image repository when private registry is insecure Procedure Run the must-gather command from the client connected to the OpenShift Data Foundation cluster: <directory-name> Is the name of the directory where you want to write the data to. Important For a disconnected environment deployment, replace the image in --image parameter with the mirrored must-gather image. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. This collects the following information in the specified directory: All Red Hat OpenShift Data Foundation cluster related Custom Resources (CRs) with their namespaces. Pod logs of all the Red Hat OpenShift Data Foundation related pods. Output of some standard Ceph commands like Status, Cluster health, and others. 2.1. Variations of must-gather-commands If one or more master nodes are not in the Ready state, use --node-name to provide a master node that is Ready so that the must-gather pod can be safely scheduled. If you want to gather information from a specific time: To specify a relative time period for logs gathered, such as within 5 seconds or 2 days, add /usr/bin/gather since=<duration> : To specify a specific time to gather logs after, add /usr/bin/gather since-time=<rfc3339-timestamp> : Replace the example values in these commands as follows: <node-name> If one or more master nodes are not in the Ready state, use this parameter to provide the name of a master node that is still in the Ready state. This avoids scheduling errors by ensuring that the must-gather pod is not scheduled on a master node that is not ready. <directory-name> The directory to store information collected by must-gather . <duration> Specify the period of time to collect information from as a relative duration, for example, 5h (starting from 5 hours ago). <rfc3339-timestamp> Specify the period of time to collect information from as an RFC 3339 timestamp, for example, 2020-11-10T04:00:00+00:00 (starting from 4 am UTC on 11 Nov 2020). 2.2. Running must-gather in modular mode Red Hat OpenShift Data Foundation must-gather can take a long time to run in some environments. To avoid this, run must-gather in modular mode and collect only the resources you require using the following command: Replace < -arg> with one or more of the following arguments to specify the resources for which the must-gather logs is required. -o , --odf ODF logs (includes Ceph resources, namespaced resources, clusterscoped resources and Ceph logs) -d , --dr DR logs -n , --noobaa Noobaa logs -c , --ceph Ceph commands and pod logs -cl , --ceph-logs Ceph daemon, kernel, journal logs, and crash reports -ns , --namespaced namespaced resources -cs , --clusterscoped clusterscoped resources -pc , --provider openshift-storage-client logs from a provider/consumer cluster (includes all the logs under operator namespace, pods, deployments, secrets, configmap, and other resources) -h , --help Print help message Note If no < -arg> is included, must-gather will collect all logs. Chapter 3. Using odf-cli command odf-cli command and its subcommands help to reduce repetitive tasks and provide better experience. You can download the odf-cli tool from the customer portal . Subcommands of odf get command odf get recovery-profile Displays the recovery-profile value set for the OSD. 
By default, an empty value is displayed if the value is not set using the odf set recovery-profile command. After the value is set, the appropriate value is displayed. Example: odf get health Checks the health of the Ceph cluster and common configuration issues. This command checks for the following: At least three mon pods are running on different nodes Mon quorum and Ceph health details At least three OSD pods are running on different nodes The 'Running' status of all pods Placement group status At least one MGR pod is running Example: odf get dr-health In mirroring-enabled clusters, fetches the connection status of a cluster from another cluster. The cephblockpool is queried with mirroring enabled and, if it is not found, the command exits with relevant logs. Example: odf get dr-prereq Checks and fetches the status of all the prerequisites to enable Disaster Recovery on a pair of clusters. The command takes the peer cluster name as an argument and uses it to compare current cluster configuration with the peer cluster configuration. Based on the comparison results, the status of the prerequisites is shown. Example Subcommands of odf operator command odf operator rook set Sets the provided property value in the rook-ceph-operator config configmap Example: where ROOK_LOG_LEVEL can be DEBUG , INFO , or WARNING odf operator rook restart Restarts the Rook-Ceph operator Example: odf restore mon-quorum Restores the mon quorum when the majority of mons are not in quorum and the cluster is down. When the majority of mons are lost permanently, the quorum needs to be restored to a remaining good mon in order to bring the Ceph cluster up again. Example: odf restore deleted <crd> Restores the deleted Rook CR when there is still data left for the components, CephClusters, CephFilesystems, and CephBlockPools. Generally, when Rook CR is deleted and there is leftover data, the Rook operator does not delete the CR to ensure data is not lost and the operator does not remove the finalizer on the CR. As a result, the CR is stuck in the Deleting state and cluster health is not ensured. Upgrades are blocked too. This command helps to repair the CR without cluster downtime. Note A warning message seeking confirmation to restore appears. After confirming, you need to enter continue to start the operator and expand to the full mon-quorum again. Example: 3.1. Configuring debug verbosity of Ceph components You can configure verbosity of Ceph components by enabling or increasing the log debugging for a specific Ceph subsystem from OpenShift Data Foundation. For information about the Ceph subsystems and the log levels that can be updated, see Ceph subsystems default logging level values . Procedure Set log level for Ceph daemons: where ceph-subsystem can be osd , mds , or mon . For example, Chapter 4. Commonly required logs for troubleshooting Some of the commonly used logs for troubleshooting OpenShift Data Foundation are listed, along with the commands to generate them. Generating logs for a specific pod: Generating logs for Ceph or OpenShift Data Foundation cluster (hedged sketches of both commands follow the note below): Important Currently, the rook-ceph-operator logs do not provide any information about the failure and this acts as a limitation in troubleshooting issues, see Enabling and disabling debug logs for rook-ceph-operator .
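As a hedged illustration of the two log-gathering items above, assuming openshift-storage is the OpenShift Data Foundation namespace and <pod-name> and <ID> are placeholders:
oc logs <pod-name> --namespace=openshift-storage                # logs for a specific pod
oc logs rook-ceph-operator-<ID> --namespace=openshift-storage   # Rook-Ceph operator logs for the cluster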
Generating logs for plugin pods like cephfs or rbd to detect any problem in the PVC mount of the app-pod: To generate logs for all the containers in the CSI pod: Generating logs for cephfs or rbd provisioner pods to detect problems if PVC is not in BOUND state: To generate logs for all the containers in the CSI pod: Generating OpenShift Data Foundation logs using cluster-info command: When using Local Storage Operator, generating logs can be done using cluster-info command: Check the OpenShift Data Foundation operator logs and events. To check the operator logs : <ocs-operator> To check the operator events : Get the OpenShift Data Foundation operator version and channel. Example output : Example output : Confirm that the installplan is created. Verify the image of the components post updating OpenShift Data Foundation. Check the node on which the pod of the component you want to verify the image is running. For Example : Example output: dell-r440-12.gsslab.pnq2.redhat.com is the node-name . Check the image ID. <node-name> Is the name of the node on which the pod of the component you want to verify the image is running. For Example : Take a note of the IMAGEID and map it to the Digest ID on the Rook Ceph Operator page. Additional resources Using must-gather 4.1. Adjusting verbosity level of logs The amount of space consumed by debugging logs can become a significant issue. Red Hat OpenShift Data Foundation offers a method to adjust, and therefore control, the amount of storage to be consumed by debugging logs. In order to adjust the verbosity levels of debugging logs, you can tune the log levels of the containers responsible for container storage interface (CSI) operations. In the container's yaml file, adjust the following parameters to set the logging levels: CSI_LOG_LEVEL - defaults to 5 CSI_SIDECAR_LOG_LEVEL - defaults to 1 The supported values are 0 through 5 . Use 0 for general useful logs, and 5 for trace level verbosity. Chapter 5. Overriding the cluster-wide default node selector for OpenShift Data Foundation post deployment When a cluster-wide default node selector is used for OpenShift Data Foundation, the pods generated by container storage interface (CSI) daemonsets are able to start only on the nodes that match the selector. To be able to use OpenShift Data Foundation from nodes which do not match the selector, override the cluster-wide default node selector by performing the following steps in the command line interface : Procedure Specify a blank node selector for the openshift-storage namespace. Delete the original pods generated by the DaemonSets. Chapter 6. Encryption token is deleted or expired Use this procedure to update the token if the encryption token for your key management system gets deleted or expires. Prerequisites Ensure that you have a new token with the same policy as the deleted or expired token Procedure Log in to OpenShift Container Platform Web Console. Click Workloads -> Secrets To update the ocs-kms-token used for cluster wide encryption: Set the Project to openshift-storage . Click ocs-kms-token -> Actions -> Edit Secret . Drag and drop or upload your encryption token file in the Value field. The token can either be a file or text that can be copied and pasted. Click Save . To update the ceph-csi-kms-token for a given project or namespace with encrypted persistent volumes: Select the required Project . Click ceph-csi-kms-token -> Actions -> Edit Secret . Drag and drop or upload your encryption token file in the Value field. 
The token can either be a file or text that can be copied and pasted. Click Save . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. Chapter 7. Troubleshooting alerts and errors in OpenShift Data Foundation 7.1. Resolving alerts and errors Red Hat OpenShift Data Foundation can detect and automatically resolve a number of common failure scenarios. However, some problems require administrator intervention. To know the errors currently firing, check one of the following locations: Observe -> Alerting -> Firing option Home -> Overview -> Cluster tab Storage -> Data Foundation -> Storage System -> storage system link in the pop up -> Overview -> Block and File tab Storage -> Data Foundation -> Storage System -> storage system link in the pop up -> Overview -> Object tab Copy the error displayed and search it in the following section to know its severity and resolution: Name : CephMonVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ USDvalue }} different versions of Ceph Mon components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephOSDVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ USDvalue }} different versions of Ceph OSD components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephClusterCriticallyFull Message : Storage cluster is critically full and needs immediate expansion Description : Storage cluster utilization has crossed 85%. Severity : Critical Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : CephClusterNearFull Message : Storage cluster is nearing full. Expansion is required. Description : Storage cluster utilization has crossed 75%. Severity : Warning Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : NooBaaBucketErrorState Message : A NooBaa Bucket Is In Error State Description : A NooBaa bucket {{ USDlabels.bucket_name }} is in error state for more than 6m Severity : Warning Resolution : Workaround Procedure : Finding the error code of an unhealthy bucket Name : NooBaaNamespaceResourceErrorState Message : A NooBaa Namespace Resource Is In Error State Description : A NooBaa namespace resource {{ USDlabels.namespace_resource_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy namespace store resource Name : NooBaaNamespaceBucketErrorState Message : A NooBaa Namespace Bucket Is In Error State Description : A NooBaa namespace bucket {{ USDlabels.bucket_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy bucket Name : CephMdsMissingReplicas Message : Insufficient replicas for storage metadata service. Description : `Minimum required replicas for storage metadata service not available. Might affect the working of storage cluster.` Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status.
If the issue cannot be identified, contact Red Hat support . Name : CephMgrIsAbsent Message : Storage metrics collector service not available anymore. Description : Ceph Manager has disappeared from Prometheus target discovery. Severity : Critical Resolution : Contact Red Hat support Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Once the upgrade is complete, check for alerts and operator status. If the issue persists or cannot be identified, contact Red Hat support . Name : CephNodeDown Message : Storage node {{ USDlabels.node }} went down Description : Storage node {{ USDlabels.node }} went down. Check the node immediately. Severity : Critical Resolution : Contact Red Hat support Procedure : Check which node stopped functioning and its cause. Take appropriate actions to recover the node. If node cannot be recovered: See Replacing storage nodes for Red Hat OpenShift Data Foundation Contact Red Hat support . Name : CephClusterErrorState Message : Storage cluster is in error state Description : Storage cluster is in error state for more than 10m. Severity : Critical Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephClusterWarningState Message : Storage cluster is in degraded state Description : Storage cluster is in warning state for more than 10m. Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephDataRecoveryTakingTooLong Message : Data recovery is slow Description : Data recovery has been active for too long. Severity : Warning Resolution : Contact Red Hat support Name : CephOSDDiskNotResponding Message : Disk not responding Description : Disk device {{ USDlabels.device }} not responding, on host {{ USDlabels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephOSDDiskUnavailable Message : Disk not accessible Description : Disk device {{ USDlabels.device }} not accessible on host {{ USDlabels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephPGRepairTakingTooLong Message : Self heal problems detected Description : Self heal operations taking too long. Severity : Warning Resolution : Contact Red Hat support Name : CephMonHighNumberOfLeaderChanges Message : Storage Cluster has seen many leader changes recently. Description : 'Ceph Monitor "{{ USDlabels.job }}": instance {{ USDlabels.instance }} has seen {{ USDvalue printf "%.2f" }} leader changes per minute recently.' Severity : Warning Resolution : Contact Red Hat support Name : CephMonQuorumAtRisk Message : Storage quorum at risk Description : Storage cluster quorum is low. Severity : Critical Resolution : Contact Red Hat support Name : ClusterObjectStoreState Message : Cluster Object Store is in an unhealthy state. Check Ceph cluster health . Description : Cluster Object Store is in an unhealthy state for more than 15s. Check Ceph cluster health . 
Severity : Critical Resolution : Contact Red Hat support Procedure : Check the CephObjectStore CR instance. Contact Red Hat support . Name : CephOSDFlapping Message : Storage daemon osd.x has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause . Description : Storage OSD restarts more than 5 times in 5 minutes . Severity : Critical Resolution : Contact Red Hat support Name : OdfPoolMirroringImageHealth Message : Mirroring image(s) (PV) in the pool <pool-name> are in Warning state for more than a 1m. Mirroring might not work as expected. Description : Disaster recovery is failing for one or a few applications. Severity : Warning Resolution : Contact Red Hat support Name : OdfMirrorDaemonStatus Message : Mirror daemon is unhealthy . Description : Disaster recovery is failing for the entire cluster. Mirror daemon is in an unhealthy status for more than 1m. Mirroring on this cluster is not working as expected. Severity : Critical Resolution : Contact Red Hat support 7.2. Resolving cluster health issues There is a finite set of possible health messages that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health checks which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health code below for more information and troubleshooting. Health code Description MON_DISK_LOW One or more Ceph Monitors are low on disk space. 7.2.1. MON_DISK_LOW This alert triggers if the available space on the file system storing the monitor database as a percentage, drops below mon_data_avail_warn (default: 15%). This may indicate that some other process or user on the system is filling up the same file system used by the monitor. It may also indicate that the monitor's database is large. Note The paths to the file system differ depending on the deployment of your mons. You can find the path to where the mon is deployed in storagecluster.yaml . Example paths: Mon deployed over PVC path: /var/lib/ceph/mon Mon deployed over hostpath: /var/lib/rook/mon In order to clear up space, view the high usage files in the file system and choose which to delete. To view the files, run: Replace <path-in-the-mon-node> with the path to the file system where mons are deployed. 7.3. Resolving cluster alerts There is a finite set of possible health alerts that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health alerts which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health alert for more information and troubleshooting. Table 7.1. Types of cluster health alerts Health alert Overview CephClusterCriticallyFull Storage cluster utilization has crossed 80%. CephClusterErrorState Storage cluster is in an error state for more than 10 minutes. CephClusterNearFull Storage cluster is nearing full capacity. Data deletion or cluster expansion is required. CephClusterReadOnly Storage cluster is read-only now and needs immediate data deletion or cluster expansion. CephClusterWarningState Storage cluster is in a warning state for more than 10 mins. CephDataRecoveryTakingTooLong Data recovery has been active for too long. 
CephMdsCacheUsageHigh Ceph metadata service (MDS) cache usage for the MDS daemon has exceeded 95% of the mds_cache_memory_limit . CephMdsCpuUsageHigh Ceph MDS CPU usage for the MDS daemon has exceeded the threshold for adequate performance. CephMdsMissingReplicas Minimum required replicas for storage metadata service not available. Might affect the working of the storage cluster. CephMgrIsAbsent Ceph Manager has disappeared from Prometheus target discovery. CephMgrIsMissingReplicas Ceph manager is missing replicas. This impacts health status reporting and will cause some of the information reported by the ceph status command to be missing or stale. In addition, the Ceph manager is responsible for a manager framework aimed at expanding the existing capabilities of Ceph. CephMonHighNumberOfLeaderChanges The Ceph monitor leader is being changed an unusual number of times. CephMonQuorumAtRisk Storage cluster quorum is low. CephMonQuorumLost The number of monitor pods in the storage cluster is not enough. CephMonVersionMismatch There are different versions of Ceph Mon components running. CephNodeDown A storage node went down. Check the node immediately. The alert should contain the node name. CephOSDCriticallyFull Utilization of back-end Object Storage Device (OSD) has crossed 80%. Free up some space immediately or expand the storage cluster or contact support. CephOSDDiskNotResponding A disk device is not responding on one of the hosts. CephOSDDiskUnavailable A disk device is not accessible on one of the hosts. CephOSDFlapping Ceph storage OSD flapping. CephOSDNearFull One of the OSD storage devices is nearing full. CephOSDSlowOps OSD requests are taking too long to process. CephOSDVersionMismatch There are different versions of Ceph OSD components running. CephPGRepairTakingTooLong Self-healing operations are taking too long. CephPoolQuotaBytesCriticallyExhausted Storage pool quota usage has crossed 90%. CephPoolQuotaBytesNearExhaustion Storage pool quota usage has crossed 70%. OSDCPULoadHigh CPU usage in the OSD container on a specific pod has exceeded 80%, potentially affecting the performance of the OSD. PersistentVolumeUsageCritical Persistent Volume Claim usage has exceeded more than 85% of its capacity. PersistentVolumeUsageNearFull Persistent Volume Claim usage has exceeded more than 75% of its capacity. 7.3.1. CephClusterCriticallyFull Meaning Storage cluster utilization has crossed 80% and will become read-only at 85%. Your Ceph cluster will become read-only once utilization crosses 85%. Free up some space or expand the storage cluster immediately. It is common to see alerts related to Object Storage Device (OSD) full or near full prior to this alert. Impact High Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information to free up some space. 7.3.2. CephClusterErrorState Meaning This alert reflects that the storage cluster is in ERROR state for an unacceptable amount of time and this impacts the storage availability. Check for other alerts that would have triggered prior to this one and troubleshoot those alerts first.
Impact Critical Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.3. CephClusterNearFull Meaning Storage cluster utilization has crossed 75% and will become read-only at 85%. Free up some space or expand the storage cluster. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 7.3.4. CephClusterReadOnly Meaning Storage cluster utilization has crossed 85% and will become read-only now. Free up some space or expand the storage cluster immediately. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 7.3.5. CephClusterWarningState Meaning This alert reflects that the storage cluster has been in a warning state for an unacceptable amount of time. While the storage operations will continue to function in this state, it is recommended to fix the errors so that the cluster does not get into an error state. Check for other alerts that might have triggered prior to this one and troubleshoot those alerts first. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.6. CephDataRecoveryTakingTooLong Meaning Data recovery is slow. Check whether all the Object Storage Devices (OSDs) are up and running. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. 
Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.7. CephMdsCacheUsageHigh Meaning When the storage metadata service (MDS) cannot keep its cache usage under the target threshold specified by mds_health_cache_threshold , or 150% of the cache limit set by mds_cache_memory_limit , the MDS sends a health alert to the monitors indicating the cache is too large. As a result, the MDS related operations become slow. Impact High Diagnosis The MDS tries to stay under a reservation of the mds_cache_memory_limit by trimming unused metadata in its cache and recalling cached items in the client caches. It is possible for the MDS to exceed this limit due to slow recall from clients as a result of multiple clients accessing the files. Mitigation Make sure you have enough memory provisioned for MDS cache. Memory resources for the MDS pods need to be updated in the ocs-storageCluster in order to increase the mds_cache_memory_limit . Run the following command to set the memory of MDS pods, for example, 16GB: OpenShift Data Foundation automatically sets mds_cache_memory_limit to half of the MDS pod memory limit. If the memory is set to 8GB using the command, then the operator sets the MDS cache memory limit to 4GB. 7.3.8. CephMdsCpuUsageHigh Meaning The storage metadata service (MDS) serves filesystem metadata. The MDS is crucial for any file creation, rename, deletion, and update operations. MDS by default is allocated two or three CPUs. This does not cause issues as long as there are not too many metadata operations. When the metadata operation load increases enough to trigger this alert, it means the default CPU allocation is unable to cope with load. You need to increase the CPU allocation or run multiple active MDS servers. Impact High Diagnosis Click Workloads -> Pods . Select the corresponding MDS pod and click on the Metrics tab. There you will see the allocated and used CPU. By default, the alert is fired if the used CPU is 67% of allocated CPU for 6 hours. If this is the case, follow the steps in the mitigation section. Mitigation You need to either increase the allocated CPU or run multiple active MDS. Use the following command to set the number of allocated CPU for MDS, for example, 8: In order to run multiple active MDS servers, use the following command: Make sure you have enough CPU provisioned for MDS depending on the load. Important Always increase the activeMetadataServers by 1 . The scaling of activeMetadataServers works only if you have more than one PV. If there is only one PV that is causing CPU load, look at increasing the CPU resource as described above. 7.3.9. CephMdsMissingReplicas Meaning Minimum required replicas for the storage metadata service (MDS) are not available. MDS is responsible for filesystem metadata. Degradation of the MDS service can affect how the storage cluster works (related to the CephFS storage class) and should be fixed as soon as possible.
Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.10. CephMgrIsAbsent Meaning Not having a Ceph manager running the monitoring of the cluster. Persistent Volume Claim (PVC) creation and deletion requests should be resolved as soon as possible. Impact High Diagnosis Verify that the rook-ceph-mgr pod is failing, and restart if necessary. If the Ceph mgr pod restart fails, follow the general pod troubleshooting to resolve the issue. Verify that the Ceph mgr pod is failing: Describe the Ceph mgr pod for more details: <pod_name> Specify the rook-ceph-mgr pod name from the step. Analyze the errors related to resource issues. Delete the pod, and wait for the pod to restart: Follow these steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.11. CephMgrIsMissingReplicas Meaning To resolve this alert, you need to determine the cause of the disappearance of the Ceph manager and restart if necessary. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.12. CephMonHighNumberOfLeaderChanges Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader. 
A problem in network connection or another kind of problem in one or more monitor pods produces an unusual change of the leader. This situation can negatively affect the storage cluster performance. Impact Medium Important Check for any network issues. If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Print the logs of the affected monitor pod to gather more information about the issue: <rook-ceph-mon-X-yyyy> Specify the name of the affected monitor pod. Alternatively, use the Openshift Web console to open the logs of the affected monitor pod. More information about possible causes is reflected in the log. Perform the general pod troubleshooting steps: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.13. CephMonQuorumAtRisk Meaning Multiple MONs work together to provide redundancy. Each of the MONs keeps a copy of the metadata. The cluster is deployed with 3 MONs, and requires 2 or more MONs to be up and running for quorum and for the storage operations to run. If quorum is lost, access to data is at risk. Impact High Diagnosis Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Perform the following for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.14. CephMonQuorumLost Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader. A problem in network connection or another kind of problem in one or more monitor pods produces an unusual change of the leader. This situation can negatively affect the storage cluster performance. 
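For any of these monitor alerts, a quick first look at the quorum state from the rook-ceph toolbox can narrow down the problem before deeper troubleshooting. This is a sketch that assumes the toolbox is enabled; the pod name is illustrative:
# open a shell in the rook-ceph toolbox pod (pod name is illustrative)
oc rsh -n openshift-storage rook-ceph-tools-<suffix>
# overall cluster health, including mon-related warnings
ceph status
# which mons are currently in quorum and which one is the leader
ceph quorum_status --format json-pretty
ceph mon stat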
Impact High Important Check for any network issues. If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Alternatively, perform general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.15. CephMonVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to check if an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for type CatalogSourcesUnHealthly , which means that the catalog sources are healthy. Check the OCS operator pod status to see if there is an OCS operator upgrading in progress. If you determine that the `ocs-operator`is in progress, wait for 5 mins and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.16. CephNodeDown Meaning A node running Ceph pods is down. While storage operations will continue to function as Ceph is designed to deal with a node failure, it is recommended to resolve the issue to minimize the risk of another node going down and affecting storage functions. Impact Medium Diagnosis List all the pods that are running and failing: Important Ensure that you meet the OpenShift Data Foundation resource requirements so that the Object Storage Device (OSD) pods are scheduled on the new node. This may take a few minutes as the Ceph cluster recovers data for the failing but now recovering OSD. To watch this recovery in action, ensure that the OSD pods are correctly placed on the new worker node. Check if the OSD pods that were previously failing are now running: If the previously failing OSD pods have not been scheduled, use the describe command and check the events for reasons the pods were not rescheduled. Describe the events for the failing OSD pod: Find the one or more failing OSD pods: In the events section look for the failure reasons, such as the resources are not being met. In addition, you may use the rook-ceph-toolbox to watch the recovery. This step is optional, but is helpful for large Ceph clusters. 
To access the toolbox, run the following command: From the rsh command prompt, run the following, and watch for "recovery" under the io section: Determine if there are failed nodes. Get the list of worker nodes, and check for the node status: Describe the node which is of the NotReady status to get more information about the failure: Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.17. CephOSDCriticallyFull Meaning One of the Object Storage Devices (OSDs) is critically full. Expand the cluster immediately. Impact High Diagnosis Deleting data to free up storage space You can delete data, and the cluster will resolve the alert through self healing processes. Important This is only applicable to OpenShift Data Foundation clusters that are near or full but not in read-only mode. Read-only mode prevents any changes that include deleting data, that is, deletion of Persistent Volume Claim (PVC), Persistent Volume (PV) or both. Expanding the storage capacity Current storage size is less than 1 TB You must first assess the ability to expand. For every 1 TB of storage added, the cluster needs to have 3 nodes each with a minimum available 2 vCPUs and 8 GiB memory. You can increase the storage capacity to 4 TB via the add-on and the cluster will resolve the alert through self healing processes. If the minimum vCPU and memory resource requirements are not met, you need to add 3 additional worker nodes to the cluster. Mitigation If your current storage size is equal to 4 TB, contact Red Hat support. Optional: Run the following command to gather the debugging information for the Ceph cluster: 7.3.18. CephOSDDiskNotResponding Meaning A disk device is not responding. Check whether all the Object Storage Devices (OSDs) are up and running. Impact Medium Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.19. CephOSDDiskUnavailable Meaning A disk device is not accessible on one of the hosts and its corresponding Object Storage Device (OSD) is marked out by the Ceph cluster. This alert is raised when a Ceph node fails to recover within 10 minutes. Impact High Diagnosis Determine the failed node Get the list of worker nodes, and check for the node status: Describe the node which is of NotReady status to get more information on the failure: 7.3.20. CephOSDFlapping Meaning A storage daemon has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause. Impact High Diagnosis Follow the steps in the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide. 
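Before following either path, it can help to identify which OSDs are down or repeatedly restarting. A minimal sketch from the rook-ceph toolbox, assuming it is enabled (pod name is illustrative):
# open a shell in the rook-ceph toolbox pod (pod name is illustrative)
oc rsh -n openshift-storage rook-ceph-tools-<suffix>
# show every OSD with its up/down and in/out state and its host placement
ceph osd tree
# summary of how many OSDs are up and in
ceph osd stat
# detailed health messages, including reports about down or flapping OSDs
ceph health detail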
Alternatively, follow the steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.21. CephOSDNearFull Meaning Utilization of back-end storage device Object Storage Device (OSD) has crossed 75% on a host. Impact High Mitigation Free up some space in the cluster, expand the storage cluster, or contact Red Hat support. For more information on scaling storage, see the Scaling storage guide . 7.3.22. CephOSDSlowOps Meaning An Object Storage Device (OSD) with slow requests is every OSD that is not able to service the I/O operations per second (IOPS) in the queue within the time defined by the osd_op_complaint_time parameter. By default, this parameter is set to 30 seconds. Impact Medium Diagnosis More information about the slow requests can be obtained using the Openshift console. Access the OSD pod terminal, and run the following commands: Note The number of the OSD is seen in the pod name. For example, in rook-ceph-osd-0-5d86d4d8d4-zlqkx , <0> is the OSD. Mitigation The main causes of the OSDs having slow requests are: Problems with the underlying hardware or infrastructure, such as, disk drives, hosts, racks, or network switches. Use the Openshift monitoring console to find the alerts or errors about cluster resources. This can give you an idea about the root cause of the slow operations in the OSD. Problems with the network. These problems are usually connected with flapping OSDs. See the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide If it is a network issue, escalate to the OpenShift Data Foundation team System load. Use the Openshift console to review the metrics of the OSD pod and the node which is running the OSD. Adding or assigning more resources can be a possible solution. 7.3.23. CephOSDVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to check if an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for type CatalogSourcesUnHealthly , which means that the catalog sources are healthy. Check the OCS operator pod status to see if there is an OCS operator upgrading in progress. If you determine that the `ocs-operator`is in progress, wait for 5 mins and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. 7.3.24. 
CephPGRepairTakingTooLong Meaning Self-healing operations are taking too long. Impact High Diagnosis Check for inconsistent Placement Groups (PGs), and repair them. For more information, see the Red Hat Knowledgebase solution Handle Inconsistent Placement Groups in Ceph . 7.3.25. CephPoolQuotaBytesCriticallyExhausted Meaning One or more pools has reached, or is very close to reaching, its quota. The threshold to trigger this error condition is controlled by the mon_pool_quota_crit_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 7.3.26. CephPoolQuotaBytesNearExhaustion Meaning One or more pools is approaching a configured fullness threshold. One threshold that can trigger this warning condition is the mon_pool_quota_warn_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 7.3.27. OSDCPULoadHigh Meaning OSD is a critical component in Ceph storage, responsible for managing data placement and recovery. High CPU usage in the OSD container suggests increased processing demands, potentially leading to degraded storage performance. Impact High Diagnosis Navigate to the Kubernetes dashboard or equivalent. Access the Workloads section and select the relevant pod associated with the OSD alert. Click the Metrics tab to view CPU metrics for the OSD container. Verify that the CPU usage exceeds 80% over a significant period (as specified in the alert configuration). Mitigation If the OSD CPU usage is consistently high, consider taking the following steps: Evaluate the overall storage cluster performance and identify the OSDs contributing to high CPU usage. Increase the number of OSDs in the cluster by adding more new storage devices in the existing nodes or adding new nodes with new storage devices. Review the Scaling storage4 for instructions to help distribute the load and improve overall system performance. 7.3.28. PersistentVolumeUsageCritical Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to timely. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage -> PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) -> Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 7.3.29. PersistentVolumeUsageNearFull Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to timely. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage -> PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) -> Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 7.4. Finding the error code of an unhealthy bucket Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Object Bucket Claims tab. Look for the object bucket claims (OBCs) that are not in Bound state and click on it. 
Click the Events tab and do one of the following: Look for events that might hint you about the current state of the bucket. Click the YAML tab and look for related errors around the status and mode sections of the YAML. If the OBC is in Pending state. the error might appear in the product logs. However, in this case, it is recommended to verify that all the variables provided are accurate. 7.5. Finding the error code of an unhealthy namespace store resource Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Namespace Store tab. Look for the namespace store resources that are not in Bound state and click on it. Click the Events tab and do one of the following: Look for events that might hint you about the current state of the resource. Click the YAML tab and look for related errors around the status and mode sections of the YAML. 7.6. Recovering pods When a first node (say NODE1 ) goes to NotReady state because of some issue, the hosted pods that are using PVC with ReadWriteOnce (RWO) access mode try to move to the second node (say NODE2 ) but get stuck due to multi-attach error. In such a case, you can recover MON, OSD, and application pods by using the following steps. Procedure Power off NODE1 (from AWS or vSphere side) and ensure that NODE1 is completely down. Force delete the pods on NODE1 by using the following command: 7.7. Recovering from EBS volume detach When an OSD or MON elastic block storage (EBS) volume where the OSD disk resides is detached from the worker Amazon EC2 instance, the volume gets reattached automatically within one or two minutes. However, the OSD pod gets into a CrashLoopBackOff state. To recover and bring back the pod to Running state, you must restart the EC2 instance. 7.8. Enabling and disabling debug logs for rook-ceph-operator Enable the debug logs for the rook-ceph-operator to obtain information about failures that help in troubleshooting issues. Procedure Enabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: DEBUG parameter in the rook-ceph-operator-config yaml file to enable the debug logs for rook-ceph-operator. Now, the rook-ceph-operator logs consist of the debug information. Disabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: INFO parameter in the rook-ceph-operator-config yaml file to disable the debug logs for rook-ceph-operator. 7.9. Resolving low Ceph monitor count alert The CephMonLowNumber alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the low number of Ceph monitor count when your internal mode deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains in the deployment. You can increase the Ceph monitor count to improve the availability of cluster. Procedure In the CephMonLowNumber alert of the notification panel or Alert Center of OpenShift Web Console, click Configure . In the Configure Ceph Monitor pop up, click Update count. In the pop up, the recommended monitor count depending on the number of failure zones is shown. In the Configure CephMon pop up, update the monitor count value based on the recommended value and click Save changes . 7.10. Troubleshooting unhealthy blocklisted nodes 7.10.1. ODFRBDClientBlocked Meaning This alert indicates that an RADOS Block Device (RBD) client might be blocked by Ceph on a specific node within your Kubernetes cluster. 
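One way to confirm whether Ceph currently has any client addresses blocklisted is to query it directly from the rook-ceph toolbox. This is a sketch that assumes the toolbox is enabled; the pod name is illustrative:
# open a shell in the rook-ceph toolbox pod (pod name is illustrative)
oc rsh -n openshift-storage rook-ceph-tools-<suffix>
# list the client addresses that Ceph has blocklisted, if any
ceph osd blocklist ls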
The blocklisting occurs when the ocs_rbd_client_blocklisted metric reports a value of 1 for the node. Additionally, there are pods in a CreateContainerError state on the same node. The blocklisting can potentially result in the filesystem for the Persistent Volume Claims (PVCs) using RBD becoming read-only. It is crucial to investigate this alert to prevent any disruption to your storage cluster. Impact High Diagnosis The blocklisting of an RBD client can occur due to several factors, such as network or cluster slowness. In certain cases, the exclusive lock contention among three contending clients (workload, mirror daemon, and manager/scheduler) can lead to the blocklist. Mitigation Taint the blocklisted node: In Kubernetes, consider tainting the node that is blocklisted to trigger the eviction of pods to another node. This approach relies on the assumption that the unmounting/unmapping process progresses gracefully. Once the pods have been successfully evicted, the blocklisted node can be untainted, allowing the blocklist to be cleared. The pods can then be moved back to the untainted node. Reboot the blocklisted node: If tainting the node and evicting the pods do not resolve the blocklisting issue, a reboot of the blocklisted node can be attempted. This step may help alleviate any underlying issues causing the blocklist and restore normal functionality. Important Investigating and resolving the blocklist issue promptly is essential to avoid any further impact on the storage cluster. Chapter 8. Checking for Local Storage Operator deployments Red Hat OpenShift Data Foundation clusters with Local Storage Operator are deployed using local storage devices. To find out if your existing cluster with OpenShift Data Foundation was deployed using local storage devices, use the following procedure: Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure By checking the storage class associated with your OpenShift Data Foundation cluster's persistent volume claims (PVCs), you can tell if your cluster was deployed using local storage devices. Check the storage class associated with OpenShift Data Foundation cluster's PVCs with the following command: Check the output. For clusters with Local Storage Operators, the PVCs associated with ocs-deviceset use the storage class localblock . The output looks similar to the following: Additional Resources Deploying OpenShift Data Foundation using local storage devices on VMware Deploying OpenShift Data Foundation using local storage devices on Red Hat Virtualization Deploying OpenShift Data Foundation using local storage devices on bare metal Deploying OpenShift Data Foundation using local storage devices on IBM Power Chapter 9. Removing failed or unwanted Ceph Object Storage devices The failed or unwanted Ceph OSDs (Object Storage Devices) affects the performance of the storage infrastructure. Hence, to improve the reliability and resilience of the storage cluster, you must remove the failed or unwanted Ceph OSDs. If you have any failed or unwanted Ceph OSDs to remove: Verify the Ceph health status. For more information see: Verifying Ceph cluster is healthy . Based on the provisioning of the OSDs, remove failed or unwanted Ceph OSDs. See: Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation . Removing failed or unwanted Ceph OSDs provisioned using local storage devices . If you are using local disks, you can reuse these disks after removing the old OSDs. 9.1. 
Verifying Ceph cluster is healthy Storage health is visible on the Block and File and Object dashboards. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. 9.2. Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation Follow the steps in the procedure to remove the failed or unwanted Ceph Object Storage Devices (OSDs) in dynamically provisioned Red Hat OpenShift Data Foundation. Important Scaling down of clusters is supported only with the help of the Red Hat support team. Warning Removing an OSD when the Ceph component is not in a healthy state can result in data loss. Removing two or more OSDs at the same time results in data loss. Prerequisites Check if Ceph is healthy. For more information see Verifying Ceph cluster is healthy . Ensure no alerts are firing or any rebuilding process is in progress. Procedure Scale down the OSD deployment. Get the osd-prepare pod for the Ceph OSD to be removed. Delete the osd-prepare pod. Remove the failed OSD from the cluster. where, FAILED_OSD_ID is the integer in the pod name immediately after the rook-ceph-osd prefix. Verify that the OSD is removed successfully by checking the logs. Optional: If you get an error as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs . Delete the OSD deployment. Verification step To check if the OSD is deleted successfully, run: This command must return the status as Completed . 9.3. Removing failed or unwanted Ceph OSDs provisioned using local storage devices You can remove failed or unwanted Ceph provisioned Object Storage Devices (OSDs) using local storage devices by following the steps in the procedure. Important Scaling down of clusters is supported only with the help of the Red Hat support team. Warning Removing an OSD when the Ceph component is not in a healthy state can result in data loss. Removing two or more OSDs at the same time results in data loss. Prerequisites Check if Ceph is healthy. For more information see Verifying Ceph cluster is healthy . Ensure no alerts are firing or any rebuilding process is in progress. Procedure Forcibly, mark the OSD down by scaling the replicas on the OSD deployment to 0. You can skip this step if the OSD is already down due to failure. Remove the failed OSD from the cluster. where, FAILED_OSD_ID is the integer in the pod name immediately after the rook-ceph-osd prefix. Verify that the OSD is removed successfully by checking the logs. Optional: If you get an error as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs . Delete persistent volume claim (PVC) resources associated with the failed OSD. Get the PVC associated with the failed OSD. Get the persistent volume (PV) associated with the PVC. Get the failed device name. Get the prepare-pod associated with the failed OSD. Delete the osd-prepare pod before removing the associated PVC. Delete the PVC associated with the failed OSD. 
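Taken together, the PVC and PV lookup steps above might look like the following sketch; the OSD ID and resource names are illustrative and will differ in your cluster:
# find the PVC that backed the failed OSD (OSD ID is illustrative)
oc get deployment rook-ceph-osd-0 -n openshift-storage -o yaml | grep ceph.rook.io/pvc
# find the PV bound to that PVC (PVC name is illustrative)
oc get pvc ocs-deviceset-0-0-abcde -n openshift-storage -o jsonpath='{.spec.volumeName}'
# delete the PVC once the corresponding osd-prepare pod has been removed
oc delete pvc ocs-deviceset-0-0-abcde -n openshift-storage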
Remove failed device entry from the LocalVolume custom resource (CR). Log in to node with the failed device. Record the /dev/disk/by-id/<id> for the failed device name. Optional: In case, Local Storage Operator is used for provisioning OSD, login to the machine with {osd-id} and remove the device symlink. Get the OSD symlink for the failed device name. Remove the symlink. Delete the PV associated with the OSD. Verification step To check if the OSD is deleted successfully, run: This command must return the status as Completed . 9.4. Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs If you get an error as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, run the Object Storage Device (OSD) removal job with FORCE_OSD_REMOVAL option to move the OSD to a destroyed state. Note You must use the FORCE_OSD_REMOVAL option only if all the PGs are in active state. If not, PGs must either complete the back filling or further investigate to ensure they are active. Chapter 10. Troubleshooting and deleting remaining resources during Uninstall Occasionally some of the custom resources managed by an operator may remain in "Terminating" status waiting on the finalizer to complete, although you have performed all the required cleanup tasks. In such an event you need to force the removal of such resources. If you do not do so, the resources remain in the Terminating state even after you have performed all the uninstall steps. Check if the openshift-storage namespace is stuck in the Terminating state upon deletion. Output: Check for the NamespaceFinalizersRemaining and NamespaceContentRemaining messages in the STATUS section of the command output and perform the step for each of the listed resources. Example output : Delete all the remaining resources listed in the step. For each of the resources to be deleted, do the following: Get the object kind of the resource which needs to be removed. See the message in the above output. Example : message: Some content in the namespace has finalizers remaining: cephobjectstoreuser.ceph.rook.io Here cephobjectstoreuser.ceph.rook.io is the object kind. Get the Object name corresponding to the object kind. Example : Example output: Patch the resources. Example: Output: Verify that the openshift-storage project is deleted. Output: If the issue persists, reach out to Red Hat Support . Chapter 11. Troubleshooting CephFS PVC creation in external mode If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release and is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS Persistent Volume Claim (PVC) creation in external mode. Check for CephFS pvc stuck in Pending status. Example output : Check the output of the oc describe command to see the events for respective pvc. Expected error message is cephfs_metadata/csi.volumes.default/csi.volume.pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx: (1) Operation not permitted) Example output: Check the settings for the <cephfs metadata pool name> (here cephfs_metadata ) and <cephfs data pool name> (here cephfs_data ). For running the command, you will need jq preinstalled in the Red Hat Ceph Storage client node. Set the application type for the CephFS pool. Run the following commands on the Red Hat Ceph Storage client node : Verify if the settings are applied. Check the CephFS PVC status again. 
The PVC should now be in Bound state. Example output : Chapter 12. Restoring the monitor pods in OpenShift Data Foundation Restore the monitor pods if all three of them go down, and when OpenShift Data Foundation is not able to recover the monitor pods automatically. Note This is a disaster recovery procedure and must be performed under the guidance of the Red Hat support team. Contact Red Hat support team on, Red Hat support . Procedure Scale down the rook-ceph-operator and ocs operator deployments. Create a backup of all deployments in openshift-storage namespace. Patch the Object Storage Device (OSD) deployments to remove the livenessProbe parameter, and run it with the command parameter as sleep . Retrieve the monstore cluster map from all the OSDs. Create the recover_mon.sh script. Run the recover_mon.sh script. Patch the MON deployments, and run it with the command parameter as sleep . Edit the MON deployments. Patch the MON deployments to increase the initialDelaySeconds . Copy the previously retrieved monstore to the mon-a pod. Navigate into the MON pod and change the ownership of the retrieved monstore . Copy the keyring template file before rebuilding the mon db . Identify the keyring of all other Ceph daemons (MGR, MDS, RGW, Crash, CSI and CSI provisioners) from its respective secrets. Example keyring file, /etc/ceph/ceph.client.admin.keyring : Important For client.csi related keyring, refer to the keyring file output and add the default caps after fetching the key from its respective OpenShift Data Foundation secret. OSD keyring is added automatically post recovery. Navigate into the mon-a pod, and verify that the monstore has a monmap . Navigate into the mon-a pod. Verify that the monstore has a monmap . Optional: If the monmap is missing then create a new monmap . <mon-a-id> Is the ID of the mon-a pod. <mon-a-ip> Is the IP address of the mon-a pod. <mon-b-id> Is the ID of the mon-b pod. <mon-b-ip> Is the IP address of the mon-b pod. <mon-c-id> Is the ID of the mon-c pod. <mon-c-ip> Is the IP address of the mon-c pod. <fsid> Is the file system ID. Verify the monmap . Import the monmap . Important Use the previously created keyring file. Create a backup of the old store.db file. Copy the rebuild store.db file to the monstore directory. After rebuilding the monstore directory, copy the store.db file from local to the rest of the MON pods. <id> Is the ID of the MON pod Navigate into the rest of the MON pods and change the ownership of the copied monstore . <id> Is the ID of the MON pod Revert the patched changes. For MON deployments: <mon-deployment.yaml> Is the MON deployment yaml file For OSD deployments: <osd-deployment.yaml> Is the OSD deployment yaml file For MGR deployments: <mgr-deployment.yaml> Is the MGR deployment yaml file Important Ensure that the MON, MGR and OSD pods are up and running. Scale up the rook-ceph-operator and ocs-operator deployments. Verification steps Check the Ceph status to confirm that CephFS is running. Example output: Check the Multicloud Object Gateway (MCG) status. It should be active, and the backingstore and bucketclass should be in Ready state. Important If the MCG is not in the active state, and the backingstore and bucketclass not in the Ready state, you need to restart all the MCG related pods. For more information, see Section 12.1, "Restoring the Multicloud Object Gateway" . 12.1. 
Restoring the Multicloud Object Gateway If the Multicloud Object Gateway (MCG) is not in the active state, and the backingstore and bucketclass is not in the Ready state, you need to restart all the MCG related pods, and check the MCG status to confirm that the MCG is back up and running. Procedure Restart all the pods related to the MCG. <noobaa-operator> Is the name of the MCG operator <noobaa-core> Is the name of the MCG core pod <noobaa-endpoint> Is the name of the MCG endpoint <noobaa-db> Is the name of the MCG db pod If the RADOS Object Gateway (RGW) is configured, restart the pod. <rgw-pod> Is the name of the RGW pod Note In OpenShift Container Platform 4.11, after the recovery, RBD PVC fails to get mounted on the application pods. Hence, you need to restart the node that is hosting the application pods. To get the node name that is hosting the application pod, run the following command: Chapter 13. Restoring ceph-monitor quorum in OpenShift Data Foundation In some circumstances, the ceph-mons might lose quorum. If the mons cannot form quorum again, there is a manual procedure to get the quorum going again. The only requirement is that at least one mon must be healthy. The following steps removes the unhealthy mons from quorum and enables you to form a quorum again with a single mon , then bring the quorum back to the original size. For example, if you have three mons and lose quorum, you need to remove the two bad mons from quorum, notify the good mon that it is the only mon in quorum, and then restart the good mon . Procedure Stop the rook-ceph-operator so that the mons are not failed over when you are modifying the monmap . Inject a new monmap . Warning You must inject the monmap very carefully. If run incorrectly, your cluster could be permanently destroyed. The Ceph monmap keeps track of the mon quorum. The monmap is updated to only contain the healthy mon. In this example, the healthy mon is rook-ceph-mon-b , while the unhealthy mons are rook-ceph-mon-a and rook-ceph-mon-c . Take a backup of the current rook-ceph-mon-b Deployment: Open the YAML file and copy the command and arguments from the mon container (see containers list in the following example). This is needed for the monmap changes. Cleanup the copied command and args fields to form a pastable command as follows: Note Make sure to remove the single quotes around the --log-stderr-prefix flag and the parenthesis around the variables being passed ROOK_CEPH_MON_HOST , ROOK_CEPH_MON_INITIAL_MEMBERS and ROOK_POD_IP ). Patch the rook-ceph-mon-b Deployment to stop the working of this mon without deleting the mon pod. Perform the following steps on the mon-b pod: Connect to the pod of a healthy mon and run the following commands: Set the variable. Extract the monmap to a file, by pasting the ceph mon command from the good mon deployment and adding the --extract-monmap=USD{monmap_path} flag. Review the contents of the monmap . Remove the bad mons from the monmap . In this example we remove mon0 and mon2 : Inject the modified monmap into the good mon , by pasting the ceph mon command and adding the --inject-monmap=USD{monmap_path} flag as follows: Exit the shell to continue. Edit the Rook configmaps . Edit the configmap that the operator uses to track the mons . Verify that in the data element you see three mons such as the following (or more depending on your moncount ): Delete the bad mons from the list to end up with a single good mon . For example: Save the file and exit. 
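A minimal sketch of that configmap edit, assuming mon b is the healthy one and using placeholder endpoint values:
# open the configmap that the operator uses to track the mons
oc -n openshift-storage edit configmap rook-ceph-mon-endpoints
# in the data element, delete the entries for the bad mons and keep only the healthy one,
# for example keeping b=<mon-b-endpoint> and removing the a= and c= entries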
Now, you need to adapt a Secret which is used for the mons and other components. Set a value for the variable good_mon_id . For example: You can use the oc patch command to patch the rook-ceph-config secret and update the two key/value pairs mon_host and mon_initial_members . Note If you are using hostNetwork: true , you need to replace the mon_host var with the node IP the mon is pinned to ( nodeSelector ). This is because there is no rook-ceph-mon-* service created in that "mode". Restart the mon . You need to restart the good mon pod with the original ceph-mon command to pick up the changes. Use the oc replace command on the backup of the mon deployment YAML file: Note Option --force deletes the deployment and creates a new one. Verify the status of the cluster. The status should show one mon in quorum. If the status looks good, your cluster should be healthy again. Delete the two mon deployments that are no longer expected to be in quorum. For example: In this example the deployments to be deleted are rook-ceph-mon-a and rook-ceph-mon-c . Restart the operator. Start the rook operator again to resume monitoring the health of the cluster. Note It is safe to ignore the errors that a number of resources already exist. The operator automatically adds more mons to increase the quorum size again depending on the mon count. Chapter 14. Enabling the Red Hat OpenShift Data Foundation console plugin The Data Foundation console plugin is enabled by default. In case, this option was unchecked during OpenShift Data Foundation Operator installation, use the following instructions to enable the console plugin post-deployment either from the graphical user interface (GUI) or command-line interface. Prerequisites You have administrative access to the OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Procedure From user interface In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator. Enable the console plugin option. In the Details tab, click the pencil icon under the Console plugin . Select Enable , and click Save . From command-line interface Execute the following command to enable the console plugin option: Verification steps After the console plugin option is enabled, a pop-up with a message, Web console update is available appears on the GUI. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. Chapter 15. Changing resources for the OpenShift Data Foundation components When you install OpenShift Data Foundation, it comes with pre-defined resources that the OpenShift Data Foundation pods can consume. In some situations with higher I/O load, it might be required to increase these limits. To change the CPU and memory resources on the rook-ceph pods, see Section 15.1, "Changing the CPU and memory resources on the rook-ceph pods" . To tune the resources for the Multicloud Object Gateway (MCG), see Section 15.2, "Tuning the resources for the MCG" . 15.1. Changing the CPU and memory resources on the rook-ceph pods When you install OpenShift Data Foundation, it comes with pre-defined CPU and memory resources for the rook-ceph pods. You can manually increase these values according to the requirements. 
You can change the CPU and memory resources on the following pods: mgr mds rgw The following example illustrates how to change the CPU and memory resources on the rook-ceph pods. In this example, the existing MDS pod values of cpu and memory are increased from 1 and 4Gi to 2 and 8Gi respectively. Edit the storage cluster: <storagecluster_name> Specify the name of the storage cluster. For example: Add the following lines to the storage cluster Custom Resource (CR): Save the changes and exit the editor. Alternatively, run the oc patch command to change the CPU and memory value of the mds pod: <storagecluster_name> Specify the name of the storage cluster. For example: 15.2. Tuning the resources for the MCG The default configuration for the Multicloud Object Gateway (MCG) is optimized for low resource consumption and not performance. For more information on how to tune the resources for the MCG, see the Red Hat Knowledgebase solution Performance tuning guide for Multicloud Object Gateway (NooBaa) . Chapter 16. Disabling Multicloud Object Gateway external service after deploying OpenShift Data Foundation When you deploy OpenShift Data Foundation, public IPs are created even when OpenShift is installed as a private cluster. However, you can disable the Multicloud Object Gateway (MCG) load balancer usage by using the disableLoadBalancerService variable in the storagecluster CRD. This restricts MCG from creating any public resources for private clusters and helps to disable the NooBaa service EXTERNAL-IP . Procedure Run the following command and add the disableLoadBalancerService variable in the storagecluster YAML to set the service to ClusterIP: Note To undo the changes and set the service to LoadBalancer, set the disableLoadBalancerService variable to false or remove that line completely. Chapter 17. Accessing odf-console with the ovs-multitenant plugin by manually enabling global pod networking In OpenShift Container Platform, when ovs-multitenant plugin is used for software-defined networking (SDN), pods from different projects cannot send packets to or receive packets from pods and services of a different project. By default, pods can not communicate between namespaces or projects because a project's pod networking is not global. To access odf-console, the OpenShift console pod in the openshift-console namespace needs to connect with the OpenShift Data Foundation odf-console in the openshift-storage namespace. This is possible only when you manually enable global pod networking. Issue When`ovs-multitenant` plugin is used in the OpenShift Container Platform, the odf-console plugin fails with the following message: Resolution Make the pod networking for the OpenShift Data Foundation project global: Chapter 18. Annotating encrypted RBD storage classes Starting with OpenShift Data Foundation 4.14, when the OpenShift console creates a RADOS block device (RBD) storage class with encryption enabled, the annotation is set automatically. However, you need to add the annotation, cdi.kubevirt.io/clone-strategy=copy for any of the encrypted RBD storage classes that were previously created before updating to the OpenShift Data Foundation version 4.14. This enables customer data integration (CDI) to use host-assisted cloning instead of the default smart cloning. The keys used to access an encrypted volume are tied to the namespace where the volume was created. 
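For a storage class that was created before the update, the annotation can be added from the command line along the following lines; the storage class name is a placeholder and must be replaced with the name of your encrypted RBD storage class:
# add the clone-strategy annotation to a previously created encrypted RBD storage class
oc annotate storageclass <encrypted-rbd-storageclass-name> cdi.kubevirt.io/clone-strategy=copy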
When cloning an encrypted volume to a new namespace, such as, provisioning a new OpenShift Virtualization virtual machine, a new volume must be created and the content of the source volume must then be copied into the new volume. This behavior is triggered automatically if the storage class is properly annotated. Chapter 19. Troubleshooting issues in provider mode 19.1. Force deletion of storage in provider clusters When a client cluster is deleted without performing the offboarding process to remove all the resources from the corresponding provider cluster, you need to perform force deletion of the corresponding storage consumer from the provider cluster. This helps to release the storage space that was claimed by the client. Caution It is recommended to use this method only in unavoidable situations such as accidental deletion of storage client clusters. Prerequisites Access to the OpenShift Data Foundation storage cluster in provider mode. Procedure Click Storage -> Storage Clients from the OpenShift console. Click the delete icon at the far right of the listed storage client cluster. The delete icon is enabled only after 5 minutes of the last heartbeat of the cluster. Click Confirm . | [
"oc image mirror registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 <local-registry> /odf4/odf-must-gather-rhel9:v4.15 [--registry-config= <path-to-the-registry-config> ] [--insecure=true]",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir= <directory-name>",
"oc adm must-gather --image=<local-registry>/odf4/odf-must-gather-rhel9:v4.15 --dest-dir= <directory-name>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ --node-name=_<node-name>_",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ /usr/bin/gather since=<duration>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ /usr/bin/gather since-time=<rfc3339-timestamp>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 -- /usr/bin/gather <-arg>",
"odf get recovery-profile high_recovery_ops",
"odf get health Info: Checking if at least three mon pods are running on different nodes rook-ceph-mon-a-7fb76597dc-98pxz Running openshift-storage ip-10-0-69-145.us-west-1.compute.internal rook-ceph-mon-b-885bdc59c-4vvcm Running openshift-storage ip-10-0-64-239.us-west-1.compute.internal rook-ceph-mon-c-5f59bb5dbc-8vvlg Running openshift-storage ip-10-0-30-197.us-west-1.compute.internal Info: Checking mon quorum and ceph health details Info: HEALTH_OK [...]",
"odf get dr-health Info: fetching the cephblockpools with mirroring enabled Info: found \"ocs-storagecluster-cephblockpool\" cephblockpool with mirroring enabled Info: running ceph status from peer cluster Info: cluster: id: 9a2e7e55-40e1-4a79-9bfa-c3e4750c6b0f health: HEALTH_OK [...]",
"odf get dr-prereq peer-cluster-1 Info: Submariner is installed. Info: Globalnet is required. Info: Globalnet is enabled. odf get mon-endpoints Displays the mon endpoints odf get dr-prereq peer-cluster-1 Info: Submariner is installed. Info: Globalnet is required. Info: Globalnet is enabled.",
"odf operator rook set ROOK_LOG_LEVEL DEBUG configmap/rook-ceph-operator-config patched",
"odf operator rook restart deployment.apps/rook-ceph-operator restarted",
"odf restore mon-quorum c",
"odf restore deleted cephclusters Info: Detecting which resources to restore for crd \"cephclusters\" Info: Restoring CR my-cluster Warning: The resource my-cluster was found deleted. Do you want to restore it? yes | no [...]",
"odf set ceph log-level <ceph-subsystem1> <ceph-subsystem2> <log-level>",
"odf set ceph log-level osd crush 20",
"odf set ceph log-level mds crush 20",
"odf set ceph log-level mon crush 20",
"oc logs <pod-name> -n <namespace>",
"oc logs rook-ceph-operator-<ID> -n openshift-storage",
"oc logs csi-cephfsplugin-<ID> -n openshift-storage -c csi-cephfsplugin",
"oc logs csi-rbdplugin-<ID> -n openshift-storage -c csi-rbdplugin",
"oc logs csi-cephfsplugin-<ID> -n openshift-storage --all-containers",
"oc logs csi-rbdplugin-<ID> -n openshift-storage --all-containers",
"oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage -c csi-cephfsplugin",
"oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage -c csi-rbdplugin",
"oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage --all-containers",
"oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage --all-containers",
"oc cluster-info dump -n openshift-storage --output-directory=<directory-name>",
"oc cluster-info dump -n openshift-local-storage --output-directory=<directory-name>",
"oc logs <ocs-operator> -n openshift-storage",
"oc get pods -n openshift-storage | grep -i \"ocs-operator\" | awk '{print USD1}'",
"oc get events --sort-by=metadata.creationTimestamp -n openshift-storage",
"oc get csv -n openshift-storage",
"NAME DISPLAY VERSION REPLACES PHASE mcg-operator.v4.15.0 NooBaa Operator 4.15.0 Succeeded ocs-operator.v4.15.0 OpenShift Container Storage 4.15.0 Succeeded odf-csi-addons-operator.v4.15.0 CSI Addons 4.15.0 Succeeded odf-operator.v4.15.0 OpenShift Data Foundation 4.15.0 Succeeded",
"oc get subs -n openshift-storage",
"NAME PACKAGE SOURCE CHANNEL mcg-operator-stable-4.15-redhat-operators-openshift-marketplace mcg-operator redhat-operators stable-4.15 ocs-operator-stable-4.15-redhat-operators-openshift-marketplace ocs-operator redhat-operators stable-4.15 odf-csi-addons-operator odf-csi-addons-operator redhat-operators stable-4.15 odf-operator odf-operator redhat-operators stable-4.15",
"oc get installplan -n openshift-storage",
"oc get pods -o wide | grep <component-name>",
"oc get pods -o wide | grep rook-ceph-operator",
"rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 dell-r440-12.gsslab.pnq2.redhat.com <none> <none> <none> <none>",
"oc debug node/<node name>",
"chroot /host",
"crictl images | grep <component>",
"crictl images | grep rook-ceph",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"delete pod -l app=csi-cephfsplugin -n openshift-storage delete pod -l app=csi-rbdplugin -n openshift-storage",
"du -a <path-in-the-mon-node> |sort -n -r |head -n10",
"oc project openshift-storage",
"oc get pod | grep rook-ceph",
"Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep {ceph-component}",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep rook-ceph-osd",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"memory\": \"16Gi\"},\"requests\": {\"memory\": \"16Gi\"}}}}}'",
"patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"8\"}, \"requests\": {\"cpu\": \"8\"}}}}}'",
"patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"managedResources\": {\"cephFilesystems\":{\"activeMetadataServers\": 2}}}}'",
"oc project openshift-storage",
"get pod | grep rook-ceph-mds",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc get pods | grep mgr",
"oc describe pods/ <pod_name>",
"oc get pods | grep mgr",
"oc project openshift-storage",
"get pod | grep rook-ceph-mgr",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep rook-ceph-mgr",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc logs <rook-ceph-mon-X-yyyy> -n openshift-storage",
"oc project openshift-storage",
"get pod | grep {ceph-component}",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep rook-ceph-mon",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep {ceph-component}",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions",
"[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]",
"oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"-n openshift-storage get pods",
"-n openshift-storage get pods",
"-n openshift-storage get pods | grep osd",
"-n openshift-storage describe pods/<osd_podname_ from_the_ previous step>",
"TOOLS_POD=USD(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name) rsh -n openshift-storage USDTOOLS_POD",
"ceph status",
"get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'",
"describe node <node_name>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"oc get pod | grep rook-ceph",
"Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'",
"describe node <node_name>",
"oc project openshift-storage",
"oc get pod | grep rook-ceph",
"Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"ceph daemon osd.<id> ops",
"ceph daemon osd.<id> dump_historic_ops",
"oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions",
"[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]",
"oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage",
"ceph osd pool set-quota <pool> max_bytes <bytes>",
"ceph osd pool set-quota <pool> max_objects <objects>",
"ceph osd pool set-quota <pool> max_bytes <bytes>",
"ceph osd pool set-quota <pool> max_objects <objects>",
"oc delete pod <pod-name> --grace-period=0 --force",
"oc edit configmap rook-ceph-operator-config",
"... data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: DEBUG",
"oc edit configmap rook-ceph-operator-config",
"... data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: INFO",
"oc get pvc -n openshift-storage",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-d96c747b-2ab5-47e2-b07e-1079623748d8 50Gi RWO ocs-storagecluster-ceph-rbd 114s ocs-deviceset-0-0-lzfrd Bound local-pv-7e70c77c 1769Gi RWO localblock 2m10s ocs-deviceset-1-0-7rggl Bound local-pv-b19b3d48 1769Gi RWO localblock 2m10s ocs-deviceset-2-0-znhk8 Bound local-pv-e9f22cdc 1769Gi RWO localblock 2m10s",
"oc scale deployment rook-ceph-osd-<osd-id> --replicas=0",
"oc get deployment rook-ceph-osd-<osd-id> -oyaml | grep ceph.rook.io/pvc",
"oc delete -n openshift-storage pod rook-ceph-osd-prepare-<pvc-from-above-command>-<pod-suffix>",
"failed_osd_id=<osd-id> oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -",
"oc logs -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc delete deployment rook-ceph-osd-<osd-id>",
"oc get pod -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc scale deployment rook-ceph-osd-<osd-id> --replicas=0",
"failed_osd_id=<osd_id> oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -",
"oc logs -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc get -n openshift-storage -o yaml deployment rook-ceph-osd-<osd-id> | grep ceph.rook.io/pvc",
"oc get -n openshift-storage pvc <pvc-name>",
"oc get pv <pv-name-from-above-command> -oyaml | grep path",
"oc describe -n openshift-storage pvc ocs-deviceset-0-0-nvs68 | grep Mounted",
"oc delete -n openshift-storage pod <osd-prepare-pod-from-above-command>",
"oc delete -n openshift-storage pvc <pvc-name-from-step-a>",
"oc debug node/<node_with_failed_osd>",
"ls -alh /mnt/local-storage/localblock/",
"oc debug node/<node_with_failed_osd>",
"ls -alh /mnt/local-storage/localblock",
"rm /mnt/local-storage/localblock/<failed-device-name>",
"oc delete pv <pv-name>",
"#oc get pod -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc process -n openshift-storage ocs-osd-removal -p FORCE_OSD_REMOVAL=true -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -",
"oc get project -n <namespace>",
"NAME DISPLAY NAME STATUS openshift-storage Terminating",
"oc get project openshift-storage -o yaml",
"status: conditions: - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All resources successfully discovered reason: ResourcesDiscovered status: \"False\" type: NamespaceDeletionDiscoveryFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All legacy kube types successfully parsed reason: ParsedGroupVersions status: \"False\" type: NamespaceDeletionGroupVersionParsingFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All content successfully deleted, may be waiting on finalization reason: ContentDeleted status: \"False\" type: NamespaceDeletionContentFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: 'Some resources are remaining: cephobjectstoreusers.ceph.rook.io has 1 resource instances' reason: SomeResourcesRemain status: \"True\" type: NamespaceContentRemaining - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: 'Some content in the namespace has finalizers remaining: cephobjectstoreuser.ceph.rook.io in 1 resource instances' reason: SomeFinalizersRemain status: \"True\" type: NamespaceFinalizersRemaining",
"oc get <Object-kind> -n <project-name>",
"oc get cephobjectstoreusers.ceph.rook.io -n openshift-storage",
"NAME AGE noobaa-ceph-objectstore-user 26h",
"oc patch -n <project-name> <object-kind>/<object-name> --type=merge -p '{\"metadata\": {\"finalizers\":null}}'",
"oc patch -n openshift-storage cephobjectstoreusers.ceph.rook.io/noobaa-ceph-objectstore-user --type=merge -p '{\"metadata\": {\"finalizers\":null}}'",
"cephobjectstoreuser.ceph.rook.io/noobaa-ceph-objectstore-user patched",
"oc get project openshift-storage",
"Error from server (NotFound): namespaces \"openshift-storage\" not found",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Pending ocs-external-storagecluster-cephfs 28h [...]",
"oc describe pvc ngx-fs-pxknkcix20-pod -n nginx-file",
"Name: ngx-fs-pxknkcix20-pod Namespace: nginx-file StorageClass: ocs-external-storagecluster-cephfs Status: Pending Volume: Labels: <none> Annotations: volume.beta.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted By: ngx-fs-oyoe047v2bn2ka42jfgg-pod-hqhzf Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 107m (x245 over 22h) openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-5f8b66cc96-hvcqp_6b7044af-c904-4795-9ce5-bf0cf63cc4a4 (combined from similar events): failed to provision volume with StorageClass \"ocs-external-storagecluster-cephfs\": rpc error: code = Internal desc = error (an error (exit status 1) occurred while running rados args: [-m 192.168.13.212:6789,192.168.13.211:6789,192.168.13.213:6789 --id csi-cephfs-provisioner --keyfile= stripped -c /etc/ceph/ceph.conf -p cephfs_metadata getomapval csi.volumes.default csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 /tmp/omap-get-186436239 --namespace=csi]) occurred, command output streams is ( error getting omap value cephfs_metadata/csi.volumes.default/csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47: (1) Operation not permitted)",
"ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": {} } \"cephfs_metadata\" { \"cephfs\": {} }",
"ceph osd pool application set <cephfs metadata pool name> cephfs metadata cephfs",
"ceph osd pool application set <cephfs data pool name> cephfs data cephfs",
"ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": { \"data\": \"cephfs\" } } \"cephfs_metadata\" { \"cephfs\": { \"metadata\": \"cephfs\" } }",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Bound pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 1Mi RWO ocs-external-storagecluster-cephfs 29h [...]",
"oc scale deployment rook-ceph-operator --replicas=0 -n openshift-storage",
"oc scale deployment ocs-operator --replicas=0 -n openshift-storage",
"mkdir backup",
"cd backup",
"oc project openshift-storage",
"for d in USD(oc get deployment|awk -F' ' '{print USD1}'|grep -v NAME); do echo USDd;oc get deployment USDd -o yaml > oc_get_deployment.USD{d}.yaml; done",
"for i in USD(oc get deployment -l app=rook-ceph-osd -oname);do oc patch USD{i} -n openshift-storage --type='json' -p '[{\"op\":\"remove\", \"path\":\"/spec/template/spec/containers/0/livenessProbe\"}]' ; oc patch USD{i} -n openshift-storage -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"osd\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}' ; done",
"#!/bin/bash ms=/tmp/monstore rm -rf USDms mkdir USDms for osd_pod in USD(oc get po -l app=rook-ceph-osd -oname -n openshift-storage); do echo \"Starting with pod: USDosd_pod\" podname=USD(echo USDosd_pod|sed 's/pod\\///g') oc exec USDosd_pod -- rm -rf USDms oc cp USDms USDpodname:USDms rm -rf USDms mkdir USDms echo \"pod in loop: USDosd_pod ; done deleting local dirs\" oc exec USDosd_pod -- ceph-objectstore-tool --type bluestore --data-path /var/lib/ceph/osd/ceph-USD(oc get USDosd_pod -ojsonpath='{ .metadata.labels.ceph_daemon_id }') --op update-mon-db --no-mon-config --mon-store-path USDms echo \"Done with COT on pod: USDosd_pod\" oc cp USDpodname:USDms USDms echo \"Finished pulling COT data from pod: USDosd_pod\" done",
"chmod +x recover_mon.sh",
"./recover_mon.sh",
"for i in USD(oc get deployment -l app=rook-ceph-mon -oname);do oc patch USD{i} -n openshift-storage -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"mon\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}'; done",
"oc get deployment rook-ceph-mon-a -o yaml | sed \"s/initialDelaySeconds: 10/initialDelaySeconds: 2000/g\" | oc replace -f -",
"oc get deployment rook-ceph-mon-b -o yaml | sed \"s/initialDelaySeconds: 10/initialDelaySeconds: 2000/g\" | oc replace -f -",
"oc get deployment rook-ceph-mon-c -o yaml | sed \"s/initialDelaySeconds: 10/initialDelaySeconds: 2000/g\" | oc replace -f -",
"oc cp /tmp/monstore/ USD(oc get po -l app=rook-ceph-mon,mon=a -oname |sed 's/pod\\///g'):/tmp/",
"oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)",
"chown -R ceph:ceph /tmp/monstore",
"oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)",
"cp /etc/ceph/keyring-store/keyring /tmp/keyring",
"cat /tmp/keyring [mon.] key = AQCleqldWqm5IhAAgZQbEzoShkZV42RiQVffnA== caps mon = \"allow *\" [client.admin] key = AQCmAKld8J05KxAArOWeRAw63gAwwZO5o75ZNQ== auid = 0 caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\"",
"oc get secret rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-keyring -ojson | jq .data.keyring | xargs echo | base64 -d [mds.ocs-storagecluster-cephfilesystem-a] key = AQB3r8VgAtr6OhAAVhhXpNKqRTuEVdRoxG4uRA== caps mon = \"allow profile mds\" caps osd = \"allow *\" caps mds = \"allow\"",
"[mon.] key = AQDxTF1hNgLTNxAAi51cCojs01b4I5E6v2H8Uw== caps mon = \"allow \" [client.admin] key = AQDxTF1hpzguOxAA0sS8nN4udoO35OEbt3bqMQ== caps mds = \"allow \" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\" [mds.ocs-storagecluster-cephfilesystem-a] key = AQCKTV1horgjARAA8aF/BDh/4+eG4RCNBCl+aw== caps mds = \"allow\" caps mon = \"allow profile mds\" caps osd = \"allow *\" [mds.ocs-storagecluster-cephfilesystem-b] key = AQCKTV1hN4gKLBAA5emIVq3ncV7AMEM1c1RmGA== caps mds = \"allow\" caps mon = \"allow profile mds\" caps osd = \"allow *\" [client.rgw.ocs.storagecluster.cephobjectstore.a] key = AQCOkdBixmpiAxAA4X7zjn6SGTI9c1MBflszYA== caps mon = \"allow rw\" caps osd = \"allow rwx\" [mgr.a] key = AQBOTV1hGYOEORAA87471+eIZLZtptfkcHvTRg== caps mds = \"allow *\" caps mon = \"allow profile mgr\" caps osd = \"allow *\" [client.crash] key = AQBOTV1htO1aGRAAe2MPYcGdiAT+Oo4CNPSF1g== caps mgr = \"allow rw\" caps mon = \"allow profile crash\" [client.csi-cephfs-node] key = AQBOTV1hiAtuBBAAaPPBVgh1AqZJlDeHWdoFLw== caps mds = \"allow rw\" caps mgr = \"allow rw\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs *=\" [client.csi-cephfs-provisioner] key = AQBNTV1hHu6wMBAAzNXZv36aZJuE1iz7S7GfeQ== caps mgr = \"allow rw\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs metadata= \" [client.csi-rbd-node] key = AQBNTV1h+LnkIRAAWnpIN9bUAmSHOvJ0EJXHRw== caps mgr = \"allow rw\" caps mon = \"profile rbd\" caps osd = \"profile rbd\" [client.csi-rbd-provisioner] key = AQBNTV1hMNcsExAAvA3gHB2qaY33LOdWCvHG/A== caps mgr = \"allow rw\" caps mon = \"profile rbd\" caps osd = \"profile rbd\"",
"oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)",
"ceph-monstore-tool /tmp/monstore get monmap -- --out /tmp/monmap",
"monmaptool /tmp/monmap --print",
"monmaptool --create --add <mon-a-id> <mon-a-ip> --add <mon-b-id> <mon-b-ip> --add <mon-c-id> <mon-c-ip> --enable-all-features --clobber /root/monmap --fsid <fsid>",
"monmaptool /root/monmap --print",
"ceph-monstore-tool /tmp/monstore rebuild -- --keyring /tmp/keyring --monmap /root/monmap",
"chown -R ceph:ceph /tmp/monstore",
"mv /var/lib/ceph/mon/ceph-a/store.db /var/lib/ceph/mon/ceph-a/store.db.corrupted",
"mv /var/lib/ceph/mon/ceph-b/store.db /var/lib/ceph/mon/ceph-b/store.db.corrupted",
"mv /var/lib/ceph/mon/ceph-c/store.db /var/lib/ceph/mon/ceph-c/store.db.corrupted",
"mv /tmp/monstore/store.db /var/lib/ceph/mon/ceph-a/store.db",
"chown -R ceph:ceph /var/lib/ceph/mon/ceph-a/store.db",
"oc cp USD(oc get po -l app=rook-ceph-mon,mon=a -oname | sed 's/pod\\///g'):/var/lib/ceph/mon/ceph-a/store.db /tmp/store.db",
"oc cp /tmp/store.db USD(oc get po -l app=rook-ceph-mon,mon=<id> -oname | sed 's/pod\\///g'):/var/lib/ceph/mon/ceph- <id>",
"oc rsh USD(oc get po -l app=rook-ceph-mon,mon= <id> -oname)",
"chown -R ceph:ceph /var/lib/ceph/mon/ceph- <id> /store.db",
"oc replace --force -f <mon-deployment.yaml>",
"oc replace --force -f <osd-deployment.yaml>",
"oc replace --force -f <mgr-deployment.yaml>",
"oc -n openshift-storage scale deployment ocs-operator --replicas=1",
"ceph -s",
"cluster: id: f111402f-84d1-4e06-9fdb-c27607676e55 health: HEALTH_ERR 1 filesystem is offline 1 filesystem is online with fewer MDS than max_mds 3 daemons have recently crashed services: mon: 3 daemons, quorum b,c,a (age 15m) mgr: a(active, since 14m) mds: ocs-storagecluster-cephfilesystem:0 osd: 3 osds: 3 up (since 15m), 3 in (since 2h) data: pools: 3 pools, 96 pgs objects: 500 objects, 1.1 GiB usage: 5.5 GiB used, 295 GiB / 300 GiB avail pgs: 96 active+clean",
"noobaa status -n openshift-storage",
"oc delete pods <noobaa-operator> -n openshift-storage",
"oc delete pods <noobaa-core> -n openshift-storage",
"oc delete pods <noobaa-endpoint> -n openshift-storage",
"oc delete pods <noobaa-db> -n openshift-storage",
"oc delete pods <rgw-pod> -n openshift-storage",
"oc get pods <application-pod> -n <namespace> -o yaml | grep nodeName nodeName: node_name",
"oc -n openshift-storage scale deployment rook-ceph-operator --replicas=0",
"oc -n openshift-storage get deployment rook-ceph-mon-b -o yaml > rook-ceph-mon-b-deployment.yaml",
"[...] containers: - args: - --fsid=41a537f2-f282-428e-989f-a9e07be32e47 - --keyring=/etc/ceph/keyring-store/keyring - --log-to-stderr=true - --err-to-stderr=true - --mon-cluster-log-to-stderr=true - '--log-stderr-prefix=debug ' - --default-log-to-file=false - --default-mon-cluster-log-to-file=false - --mon-host=USD(ROOK_CEPH_MON_HOST) - --mon-initial-members=USD(ROOK_CEPH_MON_INITIAL_MEMBERS) - --id=b - --setuser=ceph - --setgroup=ceph - --foreground - --public-addr=10.100.13.242 - --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db - --public-bind-addr=USD(ROOK_POD_IP) command: - ceph-mon [...]",
"ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP",
"oc -n openshift-storage patch deployment rook-ceph-mon-b --type='json' -p '[{\"op\":\"remove\", \"path\":\"/spec/template/spec/containers/0/livenessProbe\"}]' oc -n openshift-storage patch deployment rook-ceph-mon-b -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"mon\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}'",
"oc -n openshift-storage exec -it <mon-pod> bash",
"monmap_path=/tmp/monmap",
"ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP --extract-monmap=USD{monmap_path}",
"monmaptool --print /tmp/monmap",
"monmaptool USD{monmap_path} --rm <bad_mon>",
"monmaptool USD{monmap_path} --rm a monmaptool USD{monmap_path} --rm c",
"ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP --inject-monmap=USD{monmap_path}",
"oc -n openshift-storage edit configmap rook-ceph-mon-endpoints",
"data: a=10.100.35.200:6789;b=10.100.13.242:6789;c=10.100.35.12:6789",
"data: b=10.100.13.242:6789",
"good_mon_id=b",
"mon_host=USD(oc -n openshift-storage get svc rook-ceph-mon-b -o jsonpath='{.spec.clusterIP}') oc -n openshift-storage patch secret rook-ceph-config -p '{\"stringData\": {\"mon_host\": \"[v2:'\"USD{mon_host}\"':3300,v1:'\"USD{mon_host}\"':6789]\", \"mon_initial_members\": \"'\"USD{good_mon_id}\"'\"}}'",
"oc replace --force -f rook-ceph-mon-b-deployment.yaml",
"oc delete deploy <rook-ceph-mon-1> oc delete deploy <rook-ceph-mon-2>",
"oc -n openshift-storage scale deployment rook-ceph-operator --replicas=1",
"oc patch console.operator cluster -n openshift-storage --type json -p '[{\"op\": \"add\", \"path\": \"/spec/plugins\", \"value\": [\"odf-console\"]}]'",
"oc edit storagecluster -n openshift-storage <storagecluster_name>",
"oc edit storagecluster -n openshift-storage ocs-storagecluster",
"spec: resources: mds: limits: cpu: 2 memory: 8Gi requests: cpu: 2 memory: 8Gi",
"oc patch -n openshift-storage storagecluster <storagecluster_name> --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}}'",
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch ' {\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}} '",
"oc edit storagecluster -n openshift-storage <storagecluster_name> [...] spec: arbiter: {} encryption: kms: {} externalStorage: {} managedResources: cephBlockPools: {} cephCluster: {} cephConfig: {} cephDashboard: {} cephFilesystems: {} cephNonResilientPools: {} cephObjectStoreUsers: {} cephObjectStores: {} cephRBDMirror: {} cephToolbox: {} mirroring: {} multiCloudGateway: disableLoadBalancerService: true <--------------- Add this endpoints: [...]",
"GET request for \"odf-console\" plugin failed: Get \"https://odf-console-service.openshift-storage.svc.cluster.local:9001/locales/en/plugin__odf-console.json\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)",
"oc adm pod-network make-projects-global openshift-storage"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/troubleshooting_openshift_data_foundation/commonly-required-logs_rhodf |
Chapter 42. PingService | Chapter 42. PingService 42.1. Ping GET /v1/ping 42.1.1. Description 42.1.2. Parameters 42.1.3. Return Type V1PongMessage 42.1.4. Content Type application/json 42.1.5. Responses Table 42.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1PongMessage 0 An unexpected error response. GooglerpcStatus 42.1.6. Samples 42.1.7. Common object reference 42.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 42.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 42.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 42.1.7.3. V1PongMessage Field Name Required Nullable Type Description Format status String | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/pingservice |
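A quick way to exercise the endpoint documented above is a plain HTTPS call against Central. The host name, the token variable, and the exact response body shown here are illustrative assumptions rather than values from this reference; only the /v1/ping path and the V1PongMessage return type come from the tables above.
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" https://central.example.com/v1/ping
# A healthy instance answers with a V1PongMessage, whose single field is status, for example: {"status":"ok"}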
Chapter 2. Understanding custom back ends | Chapter 2. Understanding custom back ends A custom back end is a storage server, appliance, or configuration that is not yet fully integrated into the Red Hat OpenStack Platform director. Supported Block Storage back ends are already integrated and pre-configured with built-in director files. For example, Red Hat Ceph and single-back end configurations of Dell EMC PS Series, Dell Storage Center, and NetApp appliances. Some integrated storage appliances support only a single-instance back end. For example, with the pre-configured director files for Dell Storage Center, you can only deploy a single back end. If you want to deploy multiple back end instances of this appliance, you need a custom configuration. Although you can manually configure the Block Storage service by directly editing the /etc/cinder/cinder.conf file on the node where the Block Storage service is located, the director overwrites your configuration when you run the openstack overcloud deploy command. For more information, see Deploying the configured back ends . Deploy the Block Storage back end with the director to ensure that your settings persist through overcloud deployments and updates. If your back end configuration is fully integrated you can edit and invoke the packaged environment files. However, for custom back ends, you must write your own environment file. This document includes the annotated /home/stack/templates/custom-env.yaml file that you can edit for your deployment, see Configuration from sample environment file . This sample file is suitable for configuring the Block Storage service to use two NetApp back ends. For more information about environment files, see Including environment files in an overcloud deployment in the Installing and managing Red Hat OpenStack Platform with director guide. 2.1. Requirements The following additional prerequisite conditions must apply to your environment to configure custom Block Storage back ends: If you are using third-party back end appliances, you have configured them as storage repositories. You have deployed the overcloud with director with the instructions in Installing and managing Red Hat OpenStack Platform with director . You have the username and password of an account with elevated privileges. You can use the same stack user account that you created to deploy the overcloud. You have already planned the resulting configuration that you want for the Block Storage back end in /etc/cinder/cinder.conf . 2.2. Understanding the configuration process Configuring the Block Storage service to use custom back ends involves the following steps: Creating the environment file. For more information, see Creating the custom back end environment file . Deploying the configured back ends. For more information, Deploying the configured back ends . Testing the configured back end. For more information, Testing the configured back ends . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_custom_block_storage_back_end/con_block-storage-custom-back-ends_custom-cinder-back-end |
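Because the planning step above ends in a concrete /etc/cinder/cinder.conf layout, a minimal sketch of what that result might look like for the two-NetApp-back-end case is shown here. The section names, addresses, and credentials are placeholders invented for illustration; only the general enabled_backends pattern and the NetApp driver options are standard Block Storage settings, and your real values belong in the custom environment file so that they survive overcloud deployments rather than in a hand-edited cinder.conf.
[DEFAULT]
enabled_backends = netapp1,netapp2

[netapp1]
volume_backend_name = netapp1
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname = 203.0.113.10
netapp_login = <username>
netapp_password = <password>

[netapp2]
volume_backend_name = netapp2
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname = 203.0.113.11
netapp_login = <username>
netapp_password = <password>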
5.2.2. Protect portmap With IPTables | 5.2.2. Protect portmap With IPTables To further restrict access to the portmap service, it is a good idea to add IPTables rules to the server and restrict access to specific networks. Below are two example IPTables commands that allow TCP connections to the portmap service (listening on port 111) from the 192.168.0.0/24 network and from the localhost (which is necessary for the sgi_fam service used by Nautilus ). All other packets are dropped. To similarly limit UDP traffic, use the following command. Note Refer to Chapter 7, Firewalls for more information about implementing firewalls with IPTables commands. | [
"iptables -A INPUT -p tcp -s! 192.168.0.0/24 --dport 111 -j DROP iptables -A INPUT -p tcp -s 127.0.0.1 --dport 111 -j ACCEPT",
"iptables -A INPUT -p udp -s! 192.168.0.0/24 --dport 111 -j DROP"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-server-port-iptble |
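As a quick sanity check after adding the rules above, you can list the INPUT chain and confirm that port 111 is covered by the expected DROP and ACCEPT entries; this verification command is a generic iptables invocation, not one taken from this section.
iptables -L INPUT -n --line-numbers | grep 111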
Chapter 1. Configuring Jenkins images | Chapter 1. Configuring Jenkins images OpenShift Container Platform provides a container image for running Jenkins. This image provides a Jenkins server instance, which can be used to set up a basic flow for continuous testing, integration, and delivery. The image is based on the Red Hat Universal Base Images (UBI). OpenShift Container Platform follows the LTS release of Jenkins. OpenShift Container Platform provides an image that contains Jenkins 2.x. The OpenShift Container Platform Jenkins images are available on Quay.io or registry.redhat.io . For example: USD podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag> To use these images, you can either access them directly from these registries or push them into your OpenShift Container Platform container image registry. Additionally, you can create an image stream that points to the image, either in your container image registry or at the external location. Your OpenShift Container Platform resources can then reference the image stream. But for convenience, OpenShift Container Platform provides image streams in the openshift namespace for the core Jenkins image as well as the example Agent images provided for OpenShift Container Platform integration with Jenkins. 1.1. Configuration and customization You can manage Jenkins authentication in two ways: OpenShift Container Platform OAuth authentication provided by the OpenShift Container Platform Login plugin. Standard authentication provided by Jenkins. 1.1.1. OpenShift Container Platform OAuth authentication OAuth authentication is activated by configuring options on the Configure Global Security panel in the Jenkins UI, or by setting the OPENSHIFT_ENABLE_OAUTH environment variable on the Jenkins Deployment configuration to anything other than false . This activates the OpenShift Container Platform Login plugin, which retrieves the configuration information from pod data or by interacting with the OpenShift Container Platform API server. Valid credentials are controlled by the OpenShift Container Platform identity provider. Jenkins supports both browser and non-browser access. Valid users are automatically added to the Jenkins authorization matrix at log in, where OpenShift Container Platform roles dictate the specific Jenkins permissions that users have. The roles used by default are the predefined admin , edit , and view . The login plugin executes self-SAR requests against those roles in the project or namespace that Jenkins is running in. Users with the admin role have the traditional Jenkins administrative user permissions. Users with the edit or view role have progressively fewer permissions. The default OpenShift Container Platform admin , edit , and view roles and the Jenkins permissions those roles are assigned in the Jenkins instance are configurable. When running Jenkins in an OpenShift Container Platform pod, the login plugin looks for a config map named openshift-jenkins-login-plugin-config in the namespace that Jenkins is running in. If this plugin finds and can read in that config map, you can define the role to Jenkins Permission mappings. Specifically: The login plugin treats the key and value pairs in the config map as Jenkins permission to OpenShift Container Platform role mappings. The key is the Jenkins permission group short ID and the Jenkins permission short ID, with those two separated by a hyphen character. 
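A minimal sketch of that config map is shown here; the particular mapping is only an illustration (it anticipates the Overall-Administer key discussed next), and the config map must be created in the namespace where Jenkins runs.
kind: ConfigMap
apiVersion: v1
metadata:
  name: openshift-jenkins-login-plugin-config
data:
  # key = <permission group>-<permission>, value = comma-separated OpenShift Container Platform roles
  Overall-Administer: admin,edit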
If you want to add the Overall Jenkins Administer permission to an OpenShift Container Platform role, the key should be Overall-Administer . To get a sense of which permission groups and permissions IDs are available, go to the matrix authorization page in the Jenkins console and IDs for the groups and individual permissions in the table they provide. The value of the key and value pair is the list of OpenShift Container Platform roles the permission should apply to, with each role separated by a comma. If you want to add the Overall Jenkins Administer permission to both the default admin and edit roles, as well as a new Jenkins role you have created, the value for the key Overall-Administer would be admin,edit,jenkins . Note The admin user that is pre-populated in the OpenShift Container Platform Jenkins image with administrative privileges is not given those privileges when OpenShift Container Platform OAuth is used. To grant these permissions the OpenShift Container Platform cluster administrator must explicitly define that user in the OpenShift Container Platform identity provider and assigns the admin role to the user. Jenkins users' permissions that are stored can be changed after the users are initially established. The OpenShift Container Platform Login plugin polls the OpenShift Container Platform API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from OpenShift Container Platform. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the time the plugin polls OpenShift Container Platform. You can control how often the polling occurs with the OPENSHIFT_PERMISSIONS_POLL_INTERVAL environment variable. The default polling interval is five minutes. The easiest way to create a new Jenkins service using OAuth authentication is to use a template. 1.1.2. Jenkins authentication Jenkins authentication is used by default if the image is run directly, without using a template. The first time Jenkins starts, the configuration is created along with the administrator user and password. The default user credentials are admin and password . Configure the default password by setting the JENKINS_PASSWORD environment variable when using, and only when using, standard Jenkins authentication. Procedure Create a Jenkins application that uses standard Jenkins authentication: USD oc new-app -e \ JENKINS_PASSWORD=<password> \ ocp-tools-4/jenkins-rhel8 1.2. Jenkins environment variables The Jenkins server can be configured with the following environment variables: Variable Definition Example values and settings OPENSHIFT_ENABLE_OAUTH Determines whether the OpenShift Container Platform Login plugin manages authentication when logging in to Jenkins. To enable, set to true . Default: false JENKINS_PASSWORD The password for the admin user when using standard Jenkins authentication. Not applicable when OPENSHIFT_ENABLE_OAUTH is set to true . Default: password JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. 
JAVA_MAX_HEAP_PARAM example setting: -Xmx512m CONTAINER_HEAP_PERCENT default: 0.5 , or 50% JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10% CONTAINER_CORE_LIMIT If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2 JAVA_TOOL_OPTIONS Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true JAVA_GC_OPTS Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 JENKINS_JAVA_OVERRIDES Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and may be used to override any of them if necessary. Separate each additional option with a space; if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value . JENKINS_OPTS Specifies arguments to Jenkins. INSTALL_PLUGINS Specifies additional Jenkins plugins to install when the container is first run or when OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS is set to true . Plugins are specified as a comma-delimited list of name:version pairs. Example setting: git:3.7.0,subversion:2.10.2 . OPENSHIFT_PERMISSIONS_POLL_INTERVAL Specifies the interval in milliseconds that the OpenShift Container Platform Login plugin polls OpenShift Container Platform for the permissions that are associated with each user that is defined in Jenkins. Default: 300000 - 5 minutes OVERRIDE_PV_CONFIG_WITH_IMAGE_CONFIG When running this image with an OpenShift Container Platform persistent volume (PV) for the Jenkins configuration directory, the transfer of configuration from the image to the PV is performed only the first time the image starts because the PV is assigned when the persistent volume claim (PVC) is created. If you create a custom image that extends this image and updates the configuration in the custom image after the initial startup, the configuration is not copied over unless you set this environment variable to true . Default: false OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS When running this image with an OpenShift Container Platform PV for the Jenkins configuration directory, the transfer of plugins from the image to the PV is performed only the first time the image starts because the PV is assigned when the PVC is created. If you create a custom image that extends this image and updates plugins in the custom image after the initial startup, the plugins are not copied over unless you set this environment variable to true . Default: false ENABLE_FATAL_ERROR_LOG_FILE When running this image with an OpenShift Container Platform PVC for the Jenkins configuration directory, this environment variable allows the fatal error log file to persist when a fatal error occurs. 
The fatal error file is saved at /var/lib/jenkins/logs . Default: false AGENT_BASE_IMAGE Setting this value overrides the image used for the jnlp container in the sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the jenkins-agent-base-rhel8:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest JAVA_BUILDER_IMAGE Setting this value overrides the image used for the java-builder container in the java-builder sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the java:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/java:latest JAVA_FIPS_OPTIONS Setting this value controls how the JVM operates when running on a FIPS node. For more information, see Configure OpenJDK 11 in FIPS mode . Default: -Dcom.redhat.fips=false 1.3. Providing Jenkins cross project access If you are going to run Jenkins somewhere other than your same project, you must provide an access token to Jenkins to access your project. Procedure Identify the secret for the service account that has appropriate permissions to access the project Jenkins must access: USD oc describe serviceaccount jenkins Example output Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp In this case the secret is named jenkins-token-uyswp . Retrieve the token from the secret: USD oc describe secret <secret name from above> Example output Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA The token parameter contains the token value Jenkins requires to access the project. 1.4. Jenkins cross volume mount points The Jenkins image can be run with mounted volumes to enable persistent storage for the configuration: /var/lib/jenkins is the data directory where Jenkins stores configuration files, including job definitions. 1.5. Customizing the Jenkins image through source-to-image To customize the official OpenShift Container Platform Jenkins image, you can use the image as a source-to-image (S2I) builder. You can use S2I to copy your custom Jenkins jobs definitions, add additional plugins, or replace the provided config.xml file with your own, custom, configuration. To include your modifications in the Jenkins image, you must have a Git repository with the following directory structure: plugins This directory contains those binary Jenkins plugins you want to copy into Jenkins. plugins.txt This file lists the plugins you want to install using the following syntax: configuration/jobs This directory contains the Jenkins job definitions. configuration/config.xml This file contains your custom Jenkins configuration. The contents of the configuration/ directory is copied to the /var/lib/jenkins/ directory, so you can also include additional files, such as credentials.xml , there. 
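The plugins.txt syntax referred to above is one name:version pair per line; the entries below simply reuse the example versions given for the INSTALL_PLUGINS variable earlier and are not a recommendation.
git:3.7.0
subversion:2.10.2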
Sample build configuration customizes the Jenkins image in OpenShift Container Platform apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest 1 The source parameter defines the source Git repository with the layout described above. 2 The strategy parameter defines the original Jenkins image to use as a source image for the build. 3 The output parameter defines the resulting, customized Jenkins image that you can use in deployment configurations instead of the official Jenkins image. 1.6. Configuring the Jenkins Kubernetes plugin The OpenShift Jenkins image includes the preinstalled Kubernetes plugin for Jenkins so that Jenkins agents can be dynamically provisioned on multiple container hosts using Kubernetes and OpenShift Container Platform. To use the Kubernetes plugin, OpenShift Container Platform provides an OpenShift Agent Base image that is suitable for use as a Jenkins agent. Important OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io so that Red Hat can produce and update the images outside the OpenShift Container Platform lifecycle. Previously, these images were in the OpenShift Container Platform install payload and the openshift4 repository at registry.redhat.io . The OpenShift Jenkins Maven and NodeJS Agent images were removed from the OpenShift Container Platform 4.11 payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . For more information, see the "Important changes to OpenShift Jenkins images" link in the following "Additional resources" section. The Maven and Node.js agent images are automatically configured as Kubernetes pod template images within the OpenShift Container Platform Jenkins image configuration for the Kubernetes plugin. That configuration includes labels for each image that you can apply to any of your Jenkins jobs under their Restrict where this project can be run setting. If the label is applied, jobs run under an OpenShift Container Platform pod running the respective agent image. Important In OpenShift Container Platform 4.10 and later, the recommended pattern for running Jenkins agents using the Kubernetes plugin is to use pod templates with both jnlp and sidecar containers. The jnlp container uses the OpenShift Container Platform Jenkins Base agent image to facilitate launching a separate pod for your build. The sidecar container image has the tools needed to build in a particular language within the separate pod that was launched. Many container images from the Red Hat Container Catalog are referenced in the sample image streams in the openshift namespace. The OpenShift Container Platform Jenkins image has a pod template named java-build with sidecar containers that demonstrate this approach. This pod template uses the latest Java version provided by the java image stream in the openshift namespace. The Jenkins image also provides auto-discovery and auto-configuration of additional agent images for the Kubernetes plugin. 
With the OpenShift Container Platform sync plugin, on Jenkins startup, the Jenkins image searches within the project it is running, or the projects listed in the plugin's configuration, for the following items: Image streams with the role label set to jenkins-agent . Image stream tags with the role annotation set to jenkins-agent . Config maps with the role label set to jenkins-agent . When the Jenkins image finds an image stream with the appropriate label, or an image stream tag with the appropriate annotation, it generates the corresponding Kubernetes plugin configuration. This way, you can assign your Jenkins jobs to run in a pod running the container image provided by the image stream. The name and image references of the image stream, or image stream tag, are mapped to the name and image fields in the Kubernetes plugin pod template. You can control the label field of the Kubernetes plugin pod template by setting an annotation on the image stream, or image stream tag object, with the key agent-label . Otherwise, the name is used as the label. Note Do not log in to the Jenkins console and change the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. When it finds a config map with the appropriate label, the Jenkins image assumes that any values in the key-value data payload of the config map contain Extensible Markup Language (XML) consistent with the configuration format for Jenkins and the Kubernetes plugin pod templates. One key advantage of config maps over image streams and image stream tags is that you can control all the Kubernetes plugin pod template parameters. Sample config map for jenkins-agent kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> The following example shows two containers that reference image streams in the openshift namespace. One container handles the JNLP contract for launching Pods as Jenkins Agents. 
The other container uses an image with tools for building code in a particular coding language: kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\USD(JENKINS_SECRET) \USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> Note Do not log in to the Jenkins console and change the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. After it is installed, the OpenShift Container Platform Sync plugin monitors the API server of OpenShift Container Platform for updates to image streams, image stream tags, and config maps and adjusts the configuration of the Kubernetes plugin. The following rules apply: Removing the label or annotation from the config map, image stream, or image stream tag deletes any existing PodTemplate from the configuration of the Kubernetes plugin. If those objects are removed, the corresponding configuration is removed from the Kubernetes plugin. If you create appropriately labeled or annotated ConfigMap , ImageStream , or ImageStreamTag objects, or add labels after their initial creation, this results in the creation of a PodTemplate in the Kubernetes-plugin configuration. In the case of the PodTemplate by config map form, changes to the config map data for the PodTemplate are applied to the PodTemplate settings in the Kubernetes plugin configuration. The changes also override any changes that were made to the PodTemplate through the Jenkins UI between changes to the config map. To use a container image as a Jenkins agent, the image must run the agent as an entry point. 
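The config map approach above has an image stream counterpart: the sync plugin also generates a pod template from any image stream that carries the role label. The following sketch is illustrative only; the image stream name, the referenced image, and the agent-label value are assumptions, and whichever image you reference must, as noted above, run the Jenkins agent as its entry point.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: custom-jenkins-agent
  labels:
    role: jenkins-agent        # label the sync plugin watches for
  annotations:
    agent-label: custom-agent  # optional: overrides the generated pod template label
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: quay.io/example/custom-jenkins-agent:latest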
For more details, see the official Jenkins documentation . Additional resources Important changes to OpenShift Jenkins images 1.7. Jenkins permissions If in the config map the <serviceAccount> element of the pod template XML is the OpenShift Container Platform service account used for the resulting pod, the service account credentials are mounted into the pod. The permissions are associated with the service account and control which operations against the OpenShift Container Platform master are allowed from the pod. Consider the following scenario with service accounts used for the pod, which is launched by the Kubernetes Plugin that runs in the OpenShift Container Platform Jenkins image. If you use the example template for Jenkins that is provided by OpenShift Container Platform, the jenkins service account is defined with the edit role for the project Jenkins runs in, and the master Jenkins pod has that service account mounted. The two default Maven and NodeJS pod templates that are injected into the Jenkins configuration are also set to use the same service account as the Jenkins master. Any pod templates that are automatically discovered by the OpenShift Container Platform sync plugin because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account. For the other ways you can provide a pod template definition into Jenkins and the Kubernetes plugin, you have to explicitly specify the service account to use. Those other ways include the Jenkins console, the podTemplate pipeline DSL that is provided by the Kubernetes plugin, or labeling a config map whose data is the XML configuration for a pod template. If you do not specify a value for the service account, the default service account is used. Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within OpenShift Container Platform to manipulate whatever projects you choose to manipulate from the within the pod. 1.8. Creating a Jenkins service from a template Templates provide parameter fields to define all the environment variables with predefined default values. OpenShift Container Platform provides templates to make creating a new Jenkins service easy. The Jenkins templates should be registered in the default openshift project by your cluster administrator during the initial cluster setup. The two available templates both define deployment configuration and a service. The templates differ in their storage strategy, which affects whether the Jenkins content persists across a pod restart. Note A pod might be restarted when it is moved to another node or when an update of the deployment configuration triggers a redeployment. jenkins-ephemeral uses ephemeral storage. On pod restart, all data is lost. This template is only useful for development or testing. jenkins-persistent uses a Persistent Volume (PV) store. Data survives a pod restart. To use a PV store, the cluster administrator must define a PV pool in the OpenShift Container Platform deployment. After you select which template you want, you must instantiate the template to be able to use Jenkins. Procedure Create a new Jenkins application using one of the following methods: A PV: USD oc new-app jenkins-persistent Or an emptyDir type volume where configuration does not persist across pod restarts: USD oc new-app jenkins-ephemeral With both templates, you can run oc describe on them to see all the parameters available for overriding. 
For example: USD oc describe jenkins-ephemeral 1.9. Using the Jenkins Kubernetes plugin In the following example, the openshift-jee-sample BuildConfig object causes a Jenkins Maven agent pod to be dynamically provisioned. The pod clones some Java source code, builds a WAR file, and causes a second BuildConfig , openshift-jee-sample-docker to run. The second BuildConfig layers the new WAR file into a container image. Important OpenShift Container Platform 4.11 removed the OpenShift Jenkins Maven and NodeJS Agent images from its payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . For more information, see the "Important changes to OpenShift Jenkins images" link in the following "Additional resources" section. Sample BuildConfig that uses the Jenkins Kubernetes plugin kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node("maven") { sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } triggers: - type: ConfigChange It is also possible to override the specification of the dynamically created Jenkins agent pod. The following is a modification to the preceding example, which overrides the container memory and specifies an environment variable. Sample BuildConfig that uses the Jenkins Kubernetes plugin, specifying memory limit and environment variable kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: "mypod", 1 cloud: "openshift", 2 inheritFrom: "maven", 3 containers: [ containerTemplate(name: "jnlp", 4 image: "openshift/jenkins-agent-maven-35-centos7:v3.10", 5 resourceRequestMemory: "512Mi", 6 resourceLimitMemory: "512Mi", 7 envVars: [ envVar(key: "CONTAINER_HEAP_PERCENT", value: "0.25") 8 ]) ]) { node("mypod") { 9 sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } } triggers: - type: ConfigChange 1 A new pod template called mypod is defined dynamically. The new pod template name is referenced in the node stanza. 2 The cloud value must be set to openshift . 3 The new pod template can inherit its configuration from an existing pod template. In this case, inherited from the Maven pod template that is pre-defined by OpenShift Container Platform. 4 This example overrides values in the pre-existing container, and must be specified by name. All Jenkins agent images shipped with OpenShift Container Platform use the Container name jnlp . 
5 Specify the container image name again. This is a known issue. 6 A memory request of 512 Mi is specified. 7 A memory limit of 512 Mi is specified. 8 An environment variable CONTAINER_HEAP_PERCENT , with value 0.25 , is specified. 9 The node stanza references the name of the defined pod template. By default, the pod is deleted when the build completes. This behavior can be modified with the plugin or within a pipeline Jenkinsfile. Upstream Jenkins has more recently introduced a YAML declarative format for defining a podTemplate pipeline DSL in-line with your pipelines. An example of this format, using the sample java-builder pod template that is defined in the OpenShift Container Platform Jenkins image: def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml """ apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\USD(JENKINS_SECRET)', '\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true """ } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container("java") { sh "mvn --version" } } } } } Additional resources Important changes to OpenShift Jenkins images 1.10. Jenkins memory requirements When deployed by the provided Jenkins Ephemeral or Jenkins Persistent templates, the default memory limit is 1 Gi . By default, all other processes that run in the Jenkins container cannot use more than a total of 512 MiB of memory. If these processes require more memory, the container halts. It is therefore highly recommended that pipelines run external commands in an agent container wherever possible. If Project quotas allow for it, see the recommendations from the Jenkins documentation on what a Jenkins master should have from a memory perspective. Those recommendations call for allocating even more memory for the Jenkins master. It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plugin. Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis. You can increase the amount of memory available to Jenkins by overriding the MEMORY_LIMIT parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template. 1.11. Additional resources See Base image options for more information about the Red Hat Universal Base Images (UBI). Important changes to OpenShift Jenkins images | [
"podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"oc new-app -e JENKINS_PASSWORD=<password> ocp-tools-4/jenkins-rhel8",
"oc describe serviceaccount jenkins",
"Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp",
"oc describe secret <secret name from above>",
"Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA",
"pluginId:pluginVersion",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"oc new-app jenkins-persistent",
"oc new-app jenkins-ephemeral",
"oc describe jenkins-ephemeral",
"kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange",
"def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/jenkins/images-other-jenkins |
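The memory guidance in the Jenkins sections above can be applied at template instantiation time. The following sketch is illustrative only: it assumes the jenkins-persistent template registered in the openshift project, and that MEMORY_LIMIT and VOLUME_CAPACITY are among the parameters reported by oc describe; confirm the parameter list on your cluster before relying on specific names or values.

# List the parameters the template exposes, then instantiate it with a larger memory limit.
oc describe template jenkins-persistent -n openshift
oc new-app jenkins-persistent -p MEMORY_LIMIT=2Gi -p VOLUME_CAPACITY=10Gi

Setting MEMORY_LIMIT when the template is instantiated avoids editing the resulting deployment configuration afterwards.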
Chapter 5. Optional: Enabling disk encryption | Chapter 5. Optional: Enabling disk encryption You can enable encryption of installation disks using either the TPM v2 or Tang encryption modes. Note In some situations, when you enable TPM disk encryption in the firmware for a bare-metal host and then boot it from an ISO that you generate with the Assisted Installer, the cluster deployment can get stuck. This can happen if there are left-over TPM encryption keys from a previous installation on the host. For more information, see BZ#2011634 . If you experience this problem, contact Red Hat support. 5.1. Enabling TPM v2 encryption Prerequisites Check to see if TPM v2 encryption is enabled in the BIOS on each host. Most Dell systems require this. Check the manual for your computer. The Assisted Installer will also validate that TPM is enabled in the firmware. See the disk-encryption model in the Assisted Installer API for additional details. Important Verify that a TPM v2 encryption chip is installed on each node and enabled in the firmware. Procedure Optional: Using the UI, in the Cluster details step of the user interface wizard, choose to enable TPM v2 encryption on either the control plane nodes, workers, or both. Optional: Using the API, follow the "Modifying hosts" procedure. Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tpmv2 . Refresh the API token: USD source refresh-token Enable TPM v2 encryption: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "disk_encryption": { "enable_on": "none", "mode": "tpmv2" } } ' | jq Valid settings for enable_on are all , master , worker , or none . 5.2. Enabling Tang encryption Prerequisites You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. Procedure Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. You can set multiple Tang servers, but the Assisted Installer must be able to connect to all of them during installation. On the Tang server, retrieve the thumbprint for the Tang server using tang-show-keys : USD tang-show-keys <port> Optional: Replace <port> with the port number. The default port number is 80 . Example thumbprint 1gYTN_LpU9ZMB35yn5IbADY5OQ0 Optional: Retrieve the thumbprint for the Tang server using jose . Ensure jose is installed on the Tang server: USD sudo dnf install jose On the Tang server, retrieve the thumbprint using jose : USD sudo jose jwk thp -i /var/db/tang/<public_key>.jwk Replace <public_key> with the public exchange key for the Tang server. Example thumbprint 1gYTN_LpU9ZMB35yn5IbADY5OQ0 Optional: In the Cluster details step of the user interface wizard, choose to enable Tang encryption on either the control plane nodes, workers, or both. You will be required to enter URLs and thumbprints for the Tang servers. Optional: Using the API, follow the "Modifying hosts" procedure. Refresh the API token: USD source refresh-token Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tang .
Set disk_encryption.tang_servers to provide the URL and thumbprint details about one or more Tang servers: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "disk_encryption": { "enable_on": "all", "mode": "tang", "tang_servers": "[{\"url\":\"http://tang.example.com:7500\",\"thumbprint\":\"PLjNyRdGw03zlRoGjQYMahSZGu9\"},{\"url\":\"http://tang2.example.com:7500\",\"thumbprint\":\"XYjNyRdGw03zlRoGjQYMahSZGu3\"}]" } } ' | jq Valid settings for enable_on are all , master , worker , or none . Within the tang_servers value, escape the quotes within the object(s). 5.3. Additional resources Modifying hosts | [
"source refresh-token",
"curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"none\", \"mode\": \"tpmv2\" } } ' | jq",
"tang-show-keys <port>",
"1gYTN_LpU9ZMB35yn5IbADY5OQ0",
"sudo dnf install jose",
"sudo jose jwk thp -i /var/db/tang/<public_key>.jwk",
"1gYTN_LpU9ZMB35yn5IbADY5OQ0",
"source refresh-token",
"curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"all\", \"mode\": \"tang\", \"tang_servers\": \"[{\\\"url\\\":\\\"http://tang.example.com:7500\\\",\\\"thumbprint\\\":\\\"PLjNyRdGw03zlRoGjQYMahSZGu9\\\"},{\\\"url\\\":\\\"http://tang2.example.com:7500\\\",\\\"thumbprint\\\":\\\"XYjNyRdGw03zlRoGjQYMahSZGu3\\\"}]\" } } ' | jq"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/assisted_installer_for_openshift_container_platform/assembly_enabling-disk-encryption |
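As a follow-up check to the PATCH calls shown above, the cluster's current disk_encryption settings can be read back from the Assisted Installer API. This is a sketch that reuses the CLUSTER_ID and API_TOKEN variables from the section; the only addition is the jq filter, and the GET request assumes the same clusters endpoint used for the PATCH examples.

source refresh-token
curl -s https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \
  -H "Authorization: Bearer ${API_TOKEN}" | jq '.disk_encryption'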
3.2. Upgrading a Remote Database Environment from Red Hat Virtualization 4.2 to 4.3 | 3.2. Upgrading a Remote Database Environment from Red Hat Virtualization 4.2 to 4.3 Upgrading your environment from 4.2 to 4.3 involves the following steps: Make sure you meet the prerequisites, including enabling the correct repositories Use the Log Collection Analysis tool and Image Discrepancies tool to check for issues that might prevent a successful upgrade Update the 4.2 Manager to the latest version of 4.2 Upgrade the database from PostgreSQL 9.5 to 10.0 Upgrade the Manager from 4.2 to 4.3 Update the hosts Update the compatibility version of the clusters Reboot any running or suspended virtual machines to update their configuration Update the compatibility version of the data centers If you previously upgraded to 4.2 without replacing SHA-1 certificates with SHA-256 certificates, you must replace the certificates now . 3.2.1. Prerequisites Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes. Ensure your environment meets the requirements for Red Hat Virtualization 4.4. For a complete list of prerequisites, see the Planning and Prerequisites Guide . When upgrading Red Hat Virtualization Manager, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure. 3.2.2. Analyzing the Environment It is recommended to run the Log Collection Analysis tool and the Image Discrepancies tool prior to performing updates and for troubleshooting. These tools analyze your environment for known issues that might prevent you from performing an update, and provide recommendations to resolve them. 3.2.3. Log Collection Analysis tool Run the Log Collection Analysis tool prior to performing updates and for troubleshooting. The tool analyzes your environment for known issues that might prevent you from performing an update, and provides recommendations to resolve them. The tool gathers detailed information about your system and presents it as an HTML file. Prerequisites Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.2. Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Procedure Install the Log Collection Analysis tool on the Manager machine: Run the tool: A detailed report is displayed. By default, the report is saved to a file called analyzer_report.html . To save the file to a specific location, use the --html flag and specify the location: # rhv-log-collector-analyzer --live --html=/ directory / filename .html You can use the ELinks text mode web browser to read the analyzer reports within the terminal. To install the ELinks browser: Launch ELinks and open analyzer_report.html . To navigate the report, use the following commands in ELinks: Insert to scroll up Delete to scroll down PageUp to page up PageDown to page down Left Bracket to scroll left Right Bracket to scroll right 3.2.3.1. 
Monitoring snapshot health with the image discrepancies tool The RHV Image Discrepancies tool analyzes image data in the Storage Domain and RHV Database. It alerts you if it finds discrepancies in volumes and volume attributes, but does not fix those discrepancies. Use this tool in a variety of scenarios, such as: Before upgrading versions, to avoid carrying over broken volumes or chains to the new version. Following a failed storage operation, to detect volumes or attributes in a bad state. After restoring the RHV database or storage from backup. Periodically, to detect potential problems before they worsen. To analyze a snapshot- or live storage migration-related issues, and to verify system health after fixing these types of problems. Prerequisites Required Versions: this tool was introduced in RHV version 4.3.8 with rhv-log-collector-analyzer-0.2.15-0.el7ev . Because data collection runs simultaneously at different places and is not atomic, stop all activity in the environment that can modify the storage domains. That is, do not create or remove snapshots, edit, move, create, or remove disks. Otherwise, false detection of inconsistencies may occur. Virtual Machines can remain running normally during the process. Procedure To run the tool, enter the following command on the RHV Manager: If the tool finds discrepancies, rerun it to confirm the results, especially if there is a chance some operations were performed while the tool was running. Note This tool includes any Export and ISO storage domains and may report discrepancies for them. If so, these can be ignored, as these storage domains do not have entries for images in the RHV database. Understanding the results The tool reports the following: If there are volumes that appear on the storage but are not in the database, or appear in the database but are not on the storage. If some volume attributes differ between the storage and the database. Sample output: You can now update the Manager to the latest version of 4.2. 3.2.4. Updating the Red Hat Virtualization Manager Prerequisites Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.2. Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Procedure On the Manager machine, check if updated packages are available: Update the setup packages: # yum update ovirt\*setup\* rh\*vm-setup-plugins Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service. When the script completes successfully, the following message appears: Note The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup . 
Important The update process might take some time. Do not stop the process before it completes. Update the base operating system and any optional packages installed on the Manager: Important If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict) . Important If any kernel packages were updated, reboot the machine to complete the update. 3.2.5. Upgrading remote databases from PostgreSQL 9.5 to 10 Red Hat Virtualization 4.3 uses PostgreSQL 10 instead of PostgreSQL 9.5. If your databases are installed locally, the upgrade script automatically upgrades them from version 9.5 to 10. However, if either of your databases (Manager or Data Warehouse) is installed on a separate machine, you must perform the following procedure on each remote database before upgrading the Manager. Stop the service running on the machine: When upgrading the Manager database, stop the ovirt-engine service on the Manager machine: # systemctl stop ovirt-engine When upgrading the Data Warehouse database, stop the ovirt-engine-dwhd service on the Data Warehouse machine: # systemctl stop ovirt-engine-dwhd Enable the required repository to receive the PostgreSQL 10 package: Enable either the Red Hat Virtualization Manager repository: # subscription-manager repos --enable=rhel-7-server-rhv-4.3-manager-rpms or the SCL repository: # subscription-manager repos --enable rhel-server-rhscl-7-rpms Install the PostgreSQL 10 packages: Stop and disable the PostgreSQL 9.5 service: Upgrade the PostgreSQL 9.5 database to PostgreSQL 10: Start and enable the rh-postgresql10-postgresql.service and check that it is running: Ensure that you see output similar to the following: Copy the pg_hba.conf client configuration file from the PostgreSQL 9.5 environment to the PostgreSQL 10 environment: # cp -p /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf Update the following parameters in /var/opt/rh/rh-postgresql10/lib/pgsql/data/postgresql.conf : listen_addresses='*' autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem = 8192 Restart the PostgreSQL 10 service to apply the configuration changes: You can now upgrade the Manager to 4.3. 3.2.6. Upgrading the Red Hat Virtualization Manager from 4.2 to 4.3 Follow these same steps when upgrading any of the following: the Red Hat Virtualization Manager a remote machine with the Data Warehouse service You need to be logged into the machine that you are upgrading. Important If the upgrade fails, the engine-setup command attempts to restore your Red Hat Virtualization Manager installation to its previous state. For this reason, do not remove the previous version's repositories until after the upgrade is complete. If the upgrade fails, the engine-setup script explains how to restore your installation. Procedure Enable the Red Hat Virtualization 4.3 repositories: # subscription-manager repos \ --enable=rhel-7-server-rhv-4.3-manager-rpms \ --enable=jb-eap-7.2-for-rhel-7-server-rpms All other repositories remain the same across Red Hat Virtualization releases.
Update the setup packages: # yum update ovirt\*setup\* rh\*vm-setup-plugins Run engine-setup and follow the prompts to upgrade the Red Hat Virtualization Manager, the remote database or remote service: # engine-setup Note During the upgrade process for the Manager, the engine-setup script might prompt you to disconnect the remote Data Warehouse database. You must disconnect it to continue the setup. When the script completes successfully, the following message appears: Execution of setup completed successfully Disable the Red Hat Virtualization 4.2 repositories to ensure the system does not use any 4.2 packages: # subscription-manager repos \ --disable=rhel-7-server-rhv-4.2-manager-rpms \ --disable=jb-eap-7-for-rhel-7-server-rpms Update the base operating system: # yum update Important If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict) . Important If any kernel packages were updated, reboot the machine to complete the upgrade. The Manager is now upgraded to version 4.3. 3.2.6.1. Completing the remote Data Warehouse database upgrade Complete these additional steps when upgrading a remote Data Warehouse database from PostgreSQL 9.5 to 10. Procedure The ovirt-engine-dwhd service is now running on the Manager machine. If the ovirt-engine-dwhd service is on a remote machine, stop and disable the ovirt-engine-dwhd service on the Manager machine, and remove the configuration files that engine-setup created: # systemctl stop ovirt-engine-dwhd # systemctl disable ovirt-engine-dwhd # rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/* Repeat the steps in Upgrading the Manager to 4.3 on the machine hosting the ovirt-engine-dwhd service. You can now update the hosts. 3.2.7. Updating All Hosts in a Cluster You can update all hosts in a cluster instead of updating hosts individually. This is particularly useful during upgrades to new versions of Red Hat Virtualization. See oVirt Cluster Upgrade for more information about the Ansible role used to automate the updates. Update one cluster at a time. Limitations On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update. If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster. In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts. The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts. You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines are shut down during the update, unless you choose to skip that host instead. Procedure In the Administration Portal, click Compute Clusters and select the cluster. The Upgrade status column shows if an upgrade is available for any hosts in the cluster. Click Upgrade . Select the hosts to update, then click . Configure the options: Stop Pinned VMs shuts down any virtual machines that are pinned to hosts in the cluster, and is selected by default. 
You can clear this check box to skip updating those hosts so that the pinned virtual machines stay running, such as when a pinned virtual machine is running important services or processes and you do not want it to shut down at an unknown time during the update. Upgrade Timeout (Minutes) sets the time to wait for an individual host to be updated before the cluster upgrade fails with a timeout. The default is 60 . You can increase it for large clusters where 60 minutes might not be enough, or reduce it for small clusters where the hosts update quickly. Check Upgrade checks each host for available updates before running the upgrade process. It is not selected by default, but you can select it if you need to ensure that recent updates are included, such as when you have configured the Manager to check for host updates less frequently than the default. Reboot After Upgrade reboots each host after it is updated, and is selected by default. You can clear this check box to speed up the process if you are sure that there are no pending updates that require a host reboot. Use Maintenance Policy sets the cluster's scheduling policy to cluster_maintenance during the update. It is selected by default, so activity is limited and virtual machines cannot start unless they are highly available. You can clear this check box if you have a custom scheduling policy that you want to keep using during the update, but this could have unknown consequences. Ensure your custom policy is compatible with cluster upgrade activity before disabling this option. Click . Review the summary of the hosts and virtual machines that will be affected. Click Upgrade . You can track the progress of host updates: in the Compute Clusters view, the Upgrade Status column shows Upgrade in progress . in the Compute Hosts view in the Events section of the Notification Drawer ( ). You can track the progress of individual virtual machine migrations in the Status column of the Compute Virtual Machines view. In large environments, you may need to filter the results to show a particular group of virtual machines. 3.2.8. Changing the Cluster Compatibility Version Red Hat Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster. Prerequisites To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon to the host indicating an update is available. Limitations Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. Red Hat recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection. If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster. Procedure In the Administration Portal, click Compute Clusters . Select the cluster to change and click Edit . On the General tab, change the Compatibility Version to the desired value. Click OK . 
The Change Cluster Compatibility Version confirmation dialog opens. Click OK to confirm. Important An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine's configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version. 3.2.9. Changing Virtual Machine Cluster Compatibility After updating a cluster's compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon ( ). Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes. Procedure In the Administration Portal, click Compute Virtual Machines . Check which virtual machines require a reboot. In the Vms: search bar, enter the following query: next_run_config_exists=True The search results show all virtual machines with pending changes. Select each virtual machine and click Restart . Alternatively, if necessary you can reboot a virtual machine from within the virtual machine itself. When the virtual machine starts, the new compatibility version is automatically applied. Note You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview. 3.2.10. Changing the Data Center Compatibility Version Red Hat Virtualization data centers have a compatibility version. The compatibility version indicates the version of Red Hat Virtualization with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level. Prerequisites To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center. Procedure In the Administration Portal, click Compute Data Centers . Select the data center to change and click Edit . Change the Compatibility Version to the desired value. Click OK . The Change Data Center Compatibility Version confirmation dialog opens. Click OK to confirm. If you previously upgraded to 4.2 without replacing SHA-1 certificates with SHA-256 certificates, you must do so now. 3.2.11. Replacing SHA-1 Certificates with SHA-256 Certificates Red Hat Virtualization 4.4 uses SHA-256 signatures, which provide a more secure way to sign SSL certificates than SHA-1. Newly installed systems do not require any special steps to enable Red Hat Virtualization's public key infrastructure (PKI) to use SHA-256 signatures. Warning Do NOT let certificates expire. If they expire, the environment becomes non-responsive and recovery is an error prone and time consuming process. For information on renewing certificates, see Renewing certificates before they expire in the Administration Guide . 
Preventing Warning Messages from Appearing in the Browser Log in to the Manager machine as the root user. Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256 : # cat /etc/pki/ovirt-engine/openssl.conf If it still includes default_md = sha1 , back up the existing configuration and change the default to sha256 : # cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."USD(date +"%Y%m%d%H%M%S")" # sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf Define the certificate that should be re-signed: # names="apache" On the Manager, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates: # . /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf # for name in USDnames; do subject="USD( openssl \ x509 \ -in /etc/pki/ovirt-engine/certs/"USD{name}".cer \ -noout \ -subject \ -nameopt compat \ | sed \ 's;subject=\(.*\);\1;' \ )" /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \ --name="USD{name}" \ --password=mypass \ <1> --subject="USD{subject}" \ --san=DNS:"USD{ENGINE_FQDN}" \ --keep-key done Do not change this the password value. Restart the httpd service: # systemctl restart httpd Connect to the Administration Portal to confirm that the warning no longer appears. If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority's certificate, navigate to http:// your-manager-fqdn /ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA , replacing your-manager-fqdn with the fully qualified domain name (FQDN). Replacing All Signed Certificates with SHA-256 Log in to the Manager machine as the root user. Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256 : # cat /etc/pki/ovirt-engine/openssl.conf If it still includes default_md = sha1 , back up the existing configuration and change the default to sha256 : # cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."USD(date +"%Y%m%d%H%M%S")" # sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf Re-sign the CA certificate by backing it up and creating a new certificate in ca.pem.new : # cp -p /etc/pki/ovirt-engine/private/ca.pem /etc/pki/ovirt-engine/private/ca.pem."USD(date +"%Y%m%d%H%M%S")" # openssl x509 -signkey /etc/pki/ovirt-engine/private/ca.pem -in /etc/pki/ovirt-engine/ca.pem -out /etc/pki/ovirt-engine/ca.pem.new -days 3650 -sha256 Replace the existing certificate with the new certificate: # mv /etc/pki/ovirt-engine/ca.pem.new /etc/pki/ovirt-engine/ca.pem Define the certificates that should be re-signed: # names="engine apache websocket-proxy jboss imageio-proxy" If you replaced the Red Hat Virtualization Manager SSL Certificate after the upgrade, run the following instead: # names="engine websocket-proxy jboss imageio-proxy" For more details see Replacing the Red Hat Virtualization Manager CA Certificate in the Administration Guide . On the Manager, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates: # . 
/etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf # for name in USDnames; do subject="USD( openssl \ x509 \ -in /etc/pki/ovirt-engine/certs/"USD{name}".cer \ -noout \ -subject \ -nameopt compat \ | sed \ 's;subject=\(.*\);\1;' \ )" /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \ --name="USD{name}" \ --password=mypass \ <1> --subject="USD{subject}" \ --san=DNS:"USD{ENGINE_FQDN}" \ --keep-key done Do not change this the password value. Restart the following services: # systemctl restart httpd # systemctl restart ovirt-engine # systemctl restart ovirt-websocket-proxy # systemctl restart ovirt-imageio Connect to the Administration Portal to confirm that the warning no longer appears. If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority's certificate, navigate to http:// your-manager-fqdn /ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA , replacing your-manager-fqdn with the fully qualified domain name (FQDN). Enroll the certificates on the hosts. Repeat the following procedure for each host. In the Administration Portal, click Compute Hosts . Select the host and click Management Maintenance and OK . Once the host is in maintenance mode, click Installation Enroll Certificate . Click Management Activate . | [
"yum install rhv-log-collector-analyzer",
"rhv-log-collector-analyzer --live",
"rhv-log-collector-analyzer --live --html=/ directory / filename .html",
"yum install -y elinks",
"elinks /home/user1/analyzer_report.html",
"rhv-image-discrepancies",
"Checking storage domain c277ad93-0973-43d9-a0ca-22199bc8e801 Looking for missing images No missing images found Checking discrepancies between SD/DB attributes image ef325650-4b39-43cf-9e00-62b9f7659020 has a different attribute capacity on storage(2696984576) and on DB(2696986624) image 852613ce-79ee-4adc-a56a-ea650dcb4cfa has a different attribute capacity on storage(5424252928) and on DB(5424254976) Checking storage domain c64637b4-f0e8-408c-b8af-6a52946113e2 Looking for missing images No missing images found Checking discrepancies between SD/DB attributes No discrepancies found",
"engine-upgrade-check",
"yum update ovirt\\*setup\\* rh\\*vm-setup-plugins",
"engine-setup",
"Execution of setup completed successfully",
"yum update --nobest",
"systemctl stop ovirt-engine",
"systemctl stop ovirt-engine-dwhd",
"subscription-manager repos --enable=rhel-7-server-rhv-4.3-manager-rpms",
"subscription-manager repos --enable rhel-server-rhscl-7-rpms",
"yum install rh-postgresql10 rh-postgresql10-postgresql-contrib",
"systemctl stop rh-postgresql95-postgresql systemctl disable rh-postgresql95-postgresql",
"scl enable rh-postgresql10 -- postgresql-setup --upgrade-from=rh-postgresql95-postgresql --upgrade",
"systemctl start rh-postgresql10-postgresql.service systemctl enable rh-postgresql10-postgresql.service systemctl status rh-postgresql10-postgresql.service",
"rh-postgresql10-postgresql.service - PostgreSQL database server Loaded: loaded (/usr/lib/systemd/system/rh-postgresql10-postgresql.service; enabled; vendor preset: disabled) Active: active (running) since",
"cp -p /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf",
"listen_addresses='*' autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem = 8192",
"systemctl restart rh-postgresql10-postgresql.service",
"subscription-manager repos --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms",
"yum update ovirt\\*setup\\* rh\\*vm-setup-plugins",
"engine-setup",
"Execution of setup completed successfully",
"subscription-manager repos --disable=rhel-7-server-rhv-4.2-manager-rpms --disable=jb-eap-7-for-rhel-7-server-rpms",
"yum update",
"systemctl stop ovirt-engine-dwhd systemctl disable ovirt-engine-dwhd rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*",
"next_run_config_exists=True",
"cat /etc/pki/ovirt-engine/openssl.conf",
"cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf.\"USD(date +\"%Y%m%d%H%M%S\")\" sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf",
"names=\"apache\"",
". /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf for name in USDnames; do subject=\"USD( openssl x509 -in /etc/pki/ovirt-engine/certs/\"USD{name}\".cer -noout -subject -nameopt compat | sed 's;subject=\\(.*\\);\\1;' )\" /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh --name=\"USD{name}\" --password=mypass \\ <1> --subject=\"USD{subject}\" --san=DNS:\"USD{ENGINE_FQDN}\" --keep-key done",
"systemctl restart httpd",
"cat /etc/pki/ovirt-engine/openssl.conf",
"cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf.\"USD(date +\"%Y%m%d%H%M%S\")\" sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf",
"cp -p /etc/pki/ovirt-engine/private/ca.pem /etc/pki/ovirt-engine/private/ca.pem.\"USD(date +\"%Y%m%d%H%M%S\")\" openssl x509 -signkey /etc/pki/ovirt-engine/private/ca.pem -in /etc/pki/ovirt-engine/ca.pem -out /etc/pki/ovirt-engine/ca.pem.new -days 3650 -sha256",
"mv /etc/pki/ovirt-engine/ca.pem.new /etc/pki/ovirt-engine/ca.pem",
"names=\"engine apache websocket-proxy jboss imageio-proxy\"",
"names=\"engine websocket-proxy jboss imageio-proxy\"",
". /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf for name in USDnames; do subject=\"USD( openssl x509 -in /etc/pki/ovirt-engine/certs/\"USD{name}\".cer -noout -subject -nameopt compat | sed 's;subject=\\(.*\\);\\1;' )\" /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh --name=\"USD{name}\" --password=mypass \\ <1> --subject=\"USD{subject}\" --san=DNS:\"USD{ENGINE_FQDN}\" --keep-key done",
"systemctl restart httpd systemctl restart ovirt-engine systemctl restart ovirt-websocket-proxy systemctl restart ovirt-imageio"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/upgrade_guide/remote_upgrading_from_4-2 |
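After completing the upgrade steps above, a short sanity check can confirm that only the 4.3 repositories remain enabled and that the services are running again. This is a sketch rather than part of the official procedure; run the first group on the Manager machine and the second on the remote database machine, and adjust it to your own layout.

# On the Manager machine: enabled repositories and engine service.
subscription-manager repos --list-enabled | grep -E 'rhv-4.3|jb-eap-7.2'
systemctl is-active ovirt-engine

# On the remote database machine: PostgreSQL 10 should be the active service.
systemctl is-active rh-postgresql10-postgresql
su - postgres -c 'scl enable rh-postgresql10 -- psql -c "SHOW server_version;"'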
Networking with Open Virtual Network | Networking with Open Virtual Network Red Hat OpenStack Platform 16.0 OpenStack Networking with OVN OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/networking_with_open_virtual_network/index |
Chapter 10. Viewing logs and audit records | Chapter 10. Viewing logs and audit records As a cluster administrator, you can use the OpenShift AI Operator logger to monitor and troubleshoot issues. You can also use OpenShift audit records to review a history of changes made to the OpenShift AI Operator configuration. 10.1. Configuring the OpenShift AI Operator logger You can change the log level for OpenShift AI Operator components by setting the .spec.devFlags.logmode flag for the DSC Initialization / DSCI custom resource during runtime. If you do not set a logmode value, the logger uses the INFO log level by default. The log level that you set with .spec.devFlags.logmode applies to all components, not just those in a Managed state. The following table shows the available log levels: Log level Stacktrace level Verbosity Output Timestamp type devel or development WARN INFO Console Epoch timestamps "" (or no logmode value set) ERROR INFO JSON Human-readable timestamps prod or production ERROR INFO JSON Human-readable timestamps Logs that are set to devel or development generate in a plain text console format. Logs that are set to prod , production , or which do not have a level set generate in a JSON format. Prerequisites You have admin access to the DSCInitialization resources in the OpenShift cluster. You installed the OpenShift command line interface ( oc ) as described in Installing the OpenShift CLI . Procedure Log in to the OpenShift as a cluster administrator. Click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Click the DSC Initialization tab. Click the default-dsci object. Click the YAML tab. In the spec section, update the .spec.devFlags.logmode flag with the log level that you want to set. Click Save . You can also configure the log level from the OpenShift CLI by using the following command with the logmode value set to the log level that you want. Verification If you set the component log level to devel or development , logs generate more frequently and include logs at WARN level and above. If you set the component log level to prod or production , or do not set a log level, logs generate less frequently and include logs at ERROR level or above. 10.1.1. Viewing the OpenShift AI Operator log Log in to the OpenShift CLI. Run the following command: The operator pod log opens. You can also view the operator pod log in the OpenShift Console, under Workloads > Deployments > Pods > redhat-ods-operator > Logs . 10.2. Viewing audit records Cluster administrators can use OpenShift auditing to see changes made to the OpenShift AI Operator configuration by reviewing modifications to the DataScienceCluster (DSC) and DSCInitialization (DSCI) custom resources. Audit logging is enabled by default in standard OpenShift cluster configurations. For more information, see Viewing audit logs in the OpenShift documentation. Note In Red Hat OpenShift Service on Amazon Web Services with hosted control planes (ROSA HCP), audit logging is disabled by default because the Elasticsearch log store does not provide secure storage for audit logs. To send the audit logs to Amazon CloudWatch, see Forwarding logs to Amazon CloudWatch . The following example shows how to use the OpenShift audit logs to see the history of changes made (by users) to the DSC and DSCI custom resources. Prerequisites You have cluster administrator privileges for your OpenShift cluster. You installed the OpenShift command line interface ( oc ) as described in Installing the OpenShift CLI . 
Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example: To access the full content of the changed custom resources, set the OpenShift audit log policy to WriteRequestBodies or a more comprehensive profile. For more information, see About audit log policy profiles . Fetch the audit log files that are available for the relevant control plane nodes. For example: Search the files for the DSC and DSCI custom resources. For example: Verification The commands return relevant log entries. Tip To configure the log retention time, see the Logging section in the OpenShift documentation. Additional resources Viewing audit logs About audit log policy profiles | [
"apiVersion: dscinitialization.opendatahub.io/v1 kind: DSCInitialization metadata: name: default-dsci spec: devFlags: logmode: development",
"patch dsci default-dsci -p '{\"spec\":{\"devFlags\":{\"logmode\":\"development\"}}}' --type=merge",
"get pods -l name=rhods-operator -o name -n redhat-ods-operator | xargs -I {} oc logs -f {} -n redhat-ods-operator",
"oc login <openshift_cluster_url> -u <admin_username> -p <password>",
"adm node-logs --role=master --path=kube-apiserver/ | awk '{ print USD1 }' | sort -u | while read node ; do oc adm node-logs USDnode --path=kube-apiserver/audit.log < /dev/null done | grep opendatahub > /tmp/kube-apiserver-audit-opendatahub.log",
"jq 'select((.objectRef.apiGroup == \"dscinitialization.opendatahub.io\" or .objectRef.apiGroup == \"datasciencecluster.opendatahub.io\") and .user.username != \"system:serviceaccount:redhat-ods-operator:redhat-ods-operator-controller-manager\" and .verb != \"get\" and .verb != \"watch\" and .verb != \"list\")' < /tmp/kube-apiserver-audit-opendatahub.log"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed/viewing-logs-and-audit-records_install |
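After debugging, the logger can be returned to the quieter default by reapplying the documented patch with a production value. This sketch assumes the default-dsci object name used throughout the section; the second command simply reads the value back to confirm the change.

oc patch dsci default-dsci -p '{"spec":{"devFlags":{"logmode":"prod"}}}' --type=merge
oc get dsci default-dsci -o jsonpath='{.spec.devFlags.logmode}{"\n"}'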
Chapter 11. Keeping kernel panic parameters disabled in virtualized environments | Chapter 11. Keeping kernel panic parameters disabled in virtualized environments When configuring a Virtual Machine in RHEL 8, do not enable the softlockup_panic and nmi_watchdog kernel parameters, because the Virtual Machine might suffer from a spurious soft lockup. And that should not require a kernel panic. Find the reasons behind this advice in the following sections. 11.1. What is a soft lockup A soft lockup is a situation usually caused by a bug, when a task is executing in kernel space on a CPU without rescheduling. The task also does not allow any other task to execute on that particular CPU. As a result, a warning is displayed to a user through the system console. This problem is also referred to as the soft lockup firing. Additional resources What is a CPU soft lockup? 11.2. Parameters controlling kernel panic The following kernel parameters can be set to control a system's behavior when a soft lockup is detected. softlockup_panic Controls whether or not the kernel will panic when a soft lockup is detected. Type Value Effect Integer 0 kernel does not panic on soft lockup Integer 1 kernel panics on soft lockup By default, on RHEL 8, this value is 0. The system needs to detect a hard lockup first to be able to panic. The detection is controlled by the nmi_watchdog parameter. nmi_watchdog Controls whether lockup detection mechanisms ( watchdogs ) are active or not. This parameter is of integer type. Value Effect 0 disables lockup detector 1 enables lockup detector The hard lockup detector monitors each CPU for its ability to respond to interrupts. watchdog_thresh Controls frequency of watchdog hrtimer , NMI events, and soft or hard lockup thresholds. Default threshold Soft lockup threshold 10 seconds 2 * watchdog_thresh Setting this parameter to zero disables lockup detection altogether. Additional resources Softlockup detector and hardlockup detector Kernel sysctl 11.3. Spurious soft lockups in virtualized environments The soft lockup firing on physical hosts usually represents a kernel or a hardware bug. The same phenomenon happening on guest operating systems in virtualized environments might represent a false warning. Heavy workload on a host or high contention over some specific resource, such as memory, can cause a spurious soft lockup firing because the host might schedule out the guest CPU for a period longer than 20 seconds. When the guest CPU is again scheduled to run on the host, it experiences a time jump that triggers the due timers. The timers also include the hrtimer watchdog that can report a soft lockup on the guest CPU. Soft lockup in a virtualized environment can be false. You must not enable the kernel parameters that trigger a system panic when a soft lockup reports to a guest CPU. Important To understand soft lockups in guests, it is essential to know that the host schedules the guest as a task, and the guest then schedules its own tasks. Additional resources Virtual machine components and their interaction | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/keeping-kernel-panic-parameters-disabled-in-virtualized-environments_managing-monitoring-and-updating-the-kernel |
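To confirm on a RHEL 8 guest that the parameters described above remain at their safe values, you can query the sysctl interface directly. This is a minimal sketch; the keys are the standard kernel sysctl names for the parameters listed in the tables.

sysctl kernel.softlockup_panic kernel.nmi_watchdog kernel.watchdog_thresh
# softlockup_panic defaults to 0 on RHEL 8; do not set it to 1 inside a virtual machine.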
Chapter 2. Installing the Red Hat Ansible Automation Platform operator on Red Hat OpenShift Container Platform | Chapter 2. Installing the Red Hat Ansible Automation Platform operator on Red Hat OpenShift Container Platform Prerequisites You have installed the Red Hat Ansible Automation Platform catalog in Operator Hub. You have created a StorageClass object for your platform and a persistent volume claim (PVC) with ReadWriteMany access mode. See Dynamic Provisioning for details. To run Red Hat OpenShift Container Platform clusters on Amazon Web Services with ReadWriteMany access mode, you must add NFS or other storage. For information on the AWS Elastic Block Store (EBS) or to use the aws-ebs storage class, see Persistent storage using AWS Elastic Block Store . To use multi-attach ReadWriteMany access mode for AWS EBS, see Attaching a volume to multiple instances with Amazon EBS Multi-Attach . Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators OperatorHub . Search for the Red Hat Ansible Automation Platform operator and click Install . Select an Update Channel : stable-2.x : installs a namespace-scoped operator, which limits deployments of automation hub and automation controller instances to the namespace the operator is installed in. This is suitable for most cases. The stable-2.x channel does not require administrator privileges and utilizes fewer resources because it only monitors a single namespace. stable-2.x-cluster-scoped : deploys automation hub and automation controller across multiple namespaces in the cluster and requires administrator privileges for all namespaces in the cluster. Select Installation Mode , Installed Namespace , and Approval Strategy . Click Install . The installation process will begin. When installation is complete, a modal will appear notifying you that the Red Hat Ansible Automation Platform operator is installed in the specified namespace. Click View Operator to view your newly installed Red Hat Ansible Automation Platform operator. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/assembly-install-aap-operator
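If you prefer to confirm the result from a terminal after completing the console procedure above, a minimal sketch with the oc CLI follows; the namespace shown is an assumption and should be replaced with whatever you selected as the Installed Namespace :

oc get subscriptions.operators.coreos.com -n ansible-automation-platform
oc get csv -n ansible-automation-platform
oc get pods -n ansible-automation-platform

A ClusterServiceVersion in the Succeeded phase and running operator pods indicate that the installation completed.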
Installing on IBM Power Virtual Server | Installing on IBM Power Virtual Server OpenShift Container Platform 4.18 Installing OpenShift Container Platform on IBM Power Virtual Server Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_power_virtual_server/index |
Chapter 5. Upgrading and Downgrading | Chapter 5. Upgrading and Downgrading 5.1. Setting up an Atomic Compose Server An Atomic Compose server can be used to create atomic update trees. The procedure here explains how to set up an Atomic Compose server that creates a local mirror of the upstream OSTree repository. Log into a shell on the host, and run the Atomic Tools container. From inside the tools container, create an unprivileged user. Acquire the entitlement certificates and use chown to make them owned by the unprivileged container user. Log out of the root account. Note We use /host/var/tmp/repo so the data is outside of the container. This could be a remote mount point to Ceph/etc. Put the entitlement certificates inside the repo directory. Copy the remote configuration from the host into the repository: Change variables Edit repo/config and change the tls-client-* variables to look like the ones below. This tells the command where to find the client certificates that are necessary to access the CDN. Final steps Everything is now set up. The following command will incrementally mirror all of the content. It is possible to run the command from a cron job or systemd timer; a sample timer sketch follows this entry's command list. For client machines, change /etc/ostree/remotes.d/redhat.conf to point to a static web server that is exporting the repo directory. 5.2. Upgrading to a New Version Unlike Red Hat Enterprise Linux 7, which uses Yum and has a traditional package management model, RHEL Atomic Host uses OSTree and is upgraded by preparing a new operating system root and making it the default for the next boot. To perform an upgrade, execute the following commands: Note The OSTrees are downloaded securely. However, if you want, you can manually verify the provenance of the OSTree to which you are upgrading. See Manually Verifying OS Trees . If you are using a system that requires an HTTP proxy, the proxy is configured with an environment variable. To configure the environment variable, use a command similar to the following one: 5.3. Rolling Back to a Previous Version To revert to a previous installation of Red Hat Enterprise Linux Atomic Host, execute the following commands: Two versions of Red Hat Enterprise Linux Atomic Host are available on the system after the initial upgrade. One is the currently running version. The other is either a new version recently installed from an upgrade or the version that was in place prior to the last upgrade. Important Configuration is preserved across updates, but is only forward-preserved. This means that if you make a configuration change and then later roll back to a previous version, the configuration change you made is reverted. Note Running the atomic host upgrade command will replace the non-running version of Red Hat Enterprise Linux Atomic Host. This version will also be configured to be used during the next boot. To determine which version of the operating system is running, execute the following command: The output, which includes the hash name of the directory in the /ostree/deploy/rhel-atomic-host/ directory, looks like this: This fictional sample output shows that version 7.3 will be booted into upon the next restart. The version to be booted on the next restart is printed first. This fictional sample also shows that version 7.2.7 is the currently running version. The currently running version is marked with an asterisk (*).
This output was created just after the atomic host upgrade command was executed, which means that a new version has been staged to be applied at the next restart. 5.4. Generating the initramfs Image on the Client By default, Atomic Host uses a generic initramfs image built on the server side. This is distinct from the yum-based Red Hat Enterprise Linux, where initramfs is generated per installation. However, in some situations, additional configuration or content may need to be added, which requires generating initramfs on the client side. To make an Atomic Host client machine generate initramfs on every upgrade, run: After this, on every upgrade, the client runs the dracut program, which builds the new initramfs . To disable generating initramfs on the client, run: | [
"atomic run rhel7/rhel-tools",
"adduser container",
"cd ~container cp /host/etc/pki/entitlement/*.pem . chown container: *.pem runuser -u container bash",
"exit",
"cd /host/var/tmp mkdir repo && ostree --repo=repo init --mode=archive-z2 mv ~/*.pem repo/",
"cat /host/etc/ostree/remotes.d/redhat.conf >> repo/config",
"tls-client-cert-path = ./repo/123451234512345.pem tls-client-key-path = ./repo/123451234512345-key.pem",
"ostree --repo=repo pull --mirror rhel-atomic-host-ostree",
"atomic host upgrade systemctl reboot",
"env http_proxy=http://proxy.example.com:port/ atomic host upgrade",
"atomic host rollback systemctl reboot",
"atomic host status",
"atomic host status State: idle Deployments: * rhel-atomic-host-ostree:rhel-atomic-host/7/x86_64/standard Version: 7.3 (2016-09-27 17:53:07) BaseCommit: d3fa3283db8c5ee656f78dcfc0fcffe6cd5aa06596dac6ec5e436352208a59cb Commit: f5e639ce8186386d74e2558e6a34f55a427d8f59412d47a907793e046875d8dd OSName: rhel-atomic-host rhel-atomic-host-ostree:rhel-atomic-host/7.2/x86_64/standard Version: 7.2.7 (2016-09-15 22:28:54) BaseCommit: dbbc8e805f0003d8e55658dc220f1fe1397caf80221cc050eeb1bbf44bef56a1 Commit: 5cd426fa86bd1652ecd8f7d489f89f13ecb7d36e66003b0d7669721cb79545a8 OSName: rhel-atomic-host",
"rpm-ostree initramfs --enable",
"rpm-ostree initramfs --disable"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/installation_and_configuration_guide/upgrading_and_downgrading |
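As noted in the compose-server procedure above, the incremental mirror pull can be scheduled with a cron job or a systemd timer. The following is a minimal sketch of one way to do that with a systemd timer on the compose host; the unit names, schedule, user, and working directory are assumptions, and the paths presume the mirrored repository lives at /var/tmp/repo on the host where ostree is installed:

# /etc/systemd/system/ostree-mirror.service
[Unit]
Description=Incrementally mirror the rhel-atomic-host-ostree repository

[Service]
Type=oneshot
User=container
WorkingDirectory=/var/tmp
ExecStart=/usr/bin/ostree --repo=repo pull --mirror rhel-atomic-host-ostree

# /etc/systemd/system/ostree-mirror.timer
[Unit]
Description=Daily OSTree mirror pull

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable the schedule with systemctl enable --now ostree-mirror.timer; journalctl -u ostree-mirror.service shows the output of each pull.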
Monitoring Satellite performance | Monitoring Satellite performance Red Hat Satellite 6.16 Collect metrics from Satellite and allow their analysis in external tools Red Hat Satellite Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/monitoring_satellite_performance/index
2.5. Considerations for Using Quorum Disk | 2.5. Considerations for Using Quorum Disk Quorum Disk is a disk-based quorum daemon, qdiskd , that provides supplemental heuristics to determine node fitness. With heuristics you can determine factors that are important to the operation of the node in the event of a network partition. For example, in a four-node cluster with a 3:1 split, ordinarily, the three nodes automatically "win" because of the three-to-one majority. Under those circumstances, the one node is fenced. With qdiskd , however, you can set up heuristics that allow the one node to win based on access to a critical resource (for example, a critical network path). If your cluster requires additional methods of determining node health, then you should configure qdiskd to meet those needs. Note Configuring qdiskd is not required unless you have special requirements for node health. An example of a special requirement is an "all-but-one" configuration. In an all-but-one configuration, qdiskd is configured to provide enough quorum votes to maintain quorum even though only one node is working. Important Overall, heuristics and other qdiskd parameters for your Red Hat Cluster depend on the site environment and any special requirements. To understand the use of heuristics and other qdiskd parameters, refer to the qdisk(5) man page. If you require assistance understanding and using qdiskd for your site, contact an authorized Red Hat support representative. If you need to use qdiskd , you should take into account the following considerations: Cluster node votes Each cluster node should have the same number of votes. CMAN membership timeout value The CMAN membership timeout value (the time a node needs to be unresponsive before CMAN considers that node to be dead, and not a member) should be at least two times that of the qdiskd membership timeout value. The reason is that the quorum daemon must detect failed nodes on its own, and can take much longer to do so than CMAN. The default value for CMAN membership timeout is 10 seconds. Other site-specific conditions may affect the relationship between the membership timeout values of CMAN and qdiskd . For assistance with adjusting the CMAN membership timeout value, contact an authorized Red Hat support representative. Fencing To ensure reliable fencing when using qdiskd , use power fencing. While other types of fencing (such as watchdog timers and software-based solutions to reboot a node internally) can be reliable for clusters not configured with qdiskd , they are not reliable for a cluster configured with qdiskd . Maximum nodes A cluster configured with qdiskd supports a maximum of 16 nodes. The reason for the limit is scalability; increasing the node count increases the amount of synchronous I/O contention on the shared quorum disk device. Quorum disk device A quorum disk device should be a shared block device with concurrent read/write access by all nodes in a cluster. The minimum size of the block device is 10 Megabytes. Examples of shared block devices that can be used by qdiskd are a multi-port SCSI RAID array, a Fibre Channel RAID SAN, or a RAID-configured iSCSI target. You can create a quorum disk device with mkqdisk , the Cluster Quorum Disk Utility. For information about using the utility, refer to the mkqdisk(8) man page; a brief usage sketch follows this entry. Note Using JBOD as a quorum disk is not recommended. A JBOD cannot provide dependable performance and therefore may not allow a node to write to it quickly enough.
If a node is unable to write to a quorum disk device quickly enough, the node is falsely evicted from a cluster. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-qdisk-considerations-CA |
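As referenced above, a brief sketch of creating and configuring a quorum disk; the device path, label, votes, scores, and ping target below are illustrative assumptions, not values taken from the original section:

Create and label the quorum disk on shared storage (run once, from one node):

mkqdisk -c /dev/sdb1 -l myqdisk

List existing quorum disks to confirm the label:

mkqdisk -L

A possible quorumd stanza with a single heuristic in /etc/cluster/cluster.conf (tune interval, tko, votes, and the heuristic program to your site):

<quorumd interval="1" tko="10" votes="1" label="myqdisk">
  <heuristic program="ping -c1 -w1 192.0.2.1" score="1" interval="2" tko="3"/>
</quorumd>

The heuristic here treats reachability of a router at 192.0.2.1 as the kind of critical network path mentioned in the text.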
5.13. Cleaning up Multipath Files on Package Removal | 5.13. Cleaning up Multipath Files on Package Removal If you should have occasion to remove the device-mapper-multipath rpm file, note that this does not remove the /etc/multipath.conf , /etc/multipath/bindings , and /etc/multipath/wwids files. You may need to remove those files manually on subsequent installations of the device-mapper-multipath package. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/mpath-file-cleanup
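A minimal sketch of the cleanup described above on a RHEL 7 host; the removal command assumes yum manages the package, and the file list is the one given in the text:

yum remove device-mapper-multipath
rm -f /etc/multipath.conf /etc/multipath/bindings /etc/multipath/wwids

Removing the leftover files ensures that a later reinstall of device-mapper-multipath starts from a clean configuration.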
Support | Support OpenShift Container Platform 4.15 Getting support for OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc api-resources -o name | grep config.openshift.io",
"oc explain <resource_name>.config.openshift.io",
"oc get <resource_name>.config -o yaml",
"oc edit <resource_name>.config -o yaml",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"curl -G -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath=\"{.spec.host}\")/federate --data-urlencode 'match[]={__name__=~\"cluster:usage:.*\"}' --data-urlencode 'match[]={__name__=\"count:up0\"}' --data-urlencode 'match[]={__name__=\"count:up1\"}' --data-urlencode 'match[]={__name__=\"cluster_version\"}' --data-urlencode 'match[]={__name__=\"cluster_version_available_updates\"}' --data-urlencode 'match[]={__name__=\"cluster_version_capability\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_up\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_conditions\"}' --data-urlencode 'match[]={__name__=\"cluster_version_payload\"}' --data-urlencode 'match[]={__name__=\"cluster_installer\"}' --data-urlencode 'match[]={__name__=\"cluster_infrastructure_provider\"}' --data-urlencode 'match[]={__name__=\"cluster_feature_set\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_object_counts:sum\"}' --data-urlencode 'match[]={__name__=\"ALERTS\",alertstate=\"firing\"}' --data-urlencode 'match[]={__name__=\"code:apiserver_request_total:rate:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_memory_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"workload:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"workload:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:virt_platform_nodes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:node_instance_type_count:sum\"}' --data-urlencode 'match[]={__name__=\"cnv:vmi_status_running:count\"}' --data-urlencode 'match[]={__name__=\"cluster:vmi_request_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_sockets:sum\"}' --data-urlencode 'match[]={__name__=\"subscription_sync_total\"}' --data-urlencode 'match[]={__name__=\"olm_resolution_duration_seconds\"}' --data-urlencode 'match[]={__name__=\"csv_succeeded\"}' --data-urlencode 'match[]={__name__=\"csv_abnormal\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_used_raw_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_health_status\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_total_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_used_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_health_status\"}' --data-urlencode 'match[]={__name__=\"job:ceph_osd_metadata:count\"}' --data-urlencode 'match[]={__name__=\"job:kube_pv:count\"}' --data-urlencode 'match[]={__name__=\"job:odf_system_pvs:count\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops_bytes:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_versions_running:count\"}' 
--data-urlencode 'match[]={__name__=\"job:noobaa_total_unhealthy_buckets:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_bucket_count:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_object_count:sum\"}' --data-urlencode 'match[]={__name__=\"odf_system_bucket_count\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"odf_system_objects_total\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"noobaa_accounts_num\"}' --data-urlencode 'match[]={__name__=\"noobaa_total_usage\"}' --data-urlencode 'match[]={__name__=\"console_url\"}' --data-urlencode 'match[]={__name__=\"cluster:ovnkube_master_egress_routing_via_host:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_instances:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_enabled_instance_up:max\"}' --data-urlencode 'match[]={__name__=\"cluster:ingress_controller_aws_nlb_active:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:min\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:max\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:avg\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:median\"}' --data-urlencode 'match[]={__name__=\"cluster:openshift_route_info:tls_termination:sum\"}' --data-urlencode 'match[]={__name__=\"insightsclient_request_send_total\"}' --data-urlencode 'match[]={__name__=\"cam_app_workload_migrations\"}' --data-urlencode 'match[]={__name__=\"cluster:apiserver_current_inflight_requests:sum:max_over_time:2m\"}' --data-urlencode 'match[]={__name__=\"cluster:alertmanager_integrations:max\"}' --data-urlencode 'match[]={__name__=\"cluster:telemetry_selected_series:count\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_series:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_samples_appended_total:sum\"}' --data-urlencode 'match[]={__name__=\"monitoring:container_memory_working_set_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_series_added:topk3_sum1h\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_samples_post_metric_relabeling:topk3\"}' --data-urlencode 'match[]={__name__=\"monitoring:haproxy_server_http_responses_total:sum\"}' --data-urlencode 'match[]={__name__=\"rhmi_status\"}' --data-urlencode 'match[]={__name__=\"status:upgrading:version:rhoam_state:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_critical_alerts:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_warning_alerts:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_percentile:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_remaining_error_budget:max\"}' --data-urlencode 'match[]={__name__=\"cluster_legacy_scheduler_policy\"}' --data-urlencode 'match[]={__name__=\"cluster_master_schedulable\"}' --data-urlencode 'match[]={__name__=\"che_workspace_status\"}' --data-urlencode 'match[]={__name__=\"che_workspace_started_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_failure_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_sum\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_count\"}' --data-urlencode 'match[]={__name__=\"cco_credentials_mode\"}' --data-urlencode 
'match[]={__name__=\"cluster:kube_persistentvolume_plugin_type_counts:sum\"}' --data-urlencode 'match[]={__name__=\"visual_web_terminal_sessions_total\"}' --data-urlencode 'match[]={__name__=\"acm_managed_cluster_info\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_vcenter_info:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_esxi_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_node_hw_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:build_by_strategy:sum\"}' --data-urlencode 'match[]={__name__=\"rhods_aggregate_availability\"}' --data-urlencode 'match[]={__name__=\"rhods_total_users\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_storage_types\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_strategies\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_agent_strategies\"}' --data-urlencode 'match[]={__name__=\"appsvcs:cores_by_product:sum\"}' --data-urlencode 'match[]={__name__=\"nto_custom_profiles:count\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_configmap\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_secret\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_failures_total\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_requests_total\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_backup_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_restore_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_storage_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_redundancy_policy_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_defined_delete_namespaces_total\"}' --data-urlencode 'match[]={__name__=\"eo_es_misconfigured_memory_resources_info\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_data_nodes_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_created_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_deleted_total:sum\"}' --data-urlencode 'match[]={__name__=\"pod:eo_es_shards_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_cluster_management_state_info\"}' --data-urlencode 'match[]={__name__=\"imageregistry:imagestreamtags_count:sum\"}' --data-urlencode 'match[]={__name__=\"imageregistry:operations_count:sum\"}' --data-urlencode 'match[]={__name__=\"log_logging_info\"}' --data-urlencode 'match[]={__name__=\"log_collector_error_count_total\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_pipeline_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_input_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_output_info\"}' --data-urlencode 'match[]={__name__=\"cluster:log_collected_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:log_logged_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kata_monitor_running_shim_count:sum\"}' --data-urlencode 
'match[]={__name__=\"platform:hypershift_hostedclusters:max\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_nodepools:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_bucket_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_buckets_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_accounts:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_usage:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_system_health_status:max\"}' --data-urlencode 'match[]={__name__=\"ocs_advanced_feature_usage\"}' --data-urlencode 'match[]={__name__=\"os_image_url_override:sum\"}'",
"INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)",
"oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data",
"oc extract secret/pull-secret -n openshift-config --to=.",
"\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \" <email_address> \" } } }",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' > pull-secret",
"cp pull-secret pull-secret-backup",
"set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret",
"apiVersion: v1 kind: ConfigMap metadata: name: insights-config namespace: openshift-insights data: config.yaml: | dataReporting: obfuscation: - networking - workload_names sca: disabled: false interval: 2h alerting: disabled: false binaryData: {} immutable: false",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | alerting: disabled: true",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | alerting: disabled: false",
"oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running",
"oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1",
"{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }",
"apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled",
"oc apply -f <your_datagather_definition>.yaml",
"apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled",
"apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: 1 gatherConfig: disabledGatherers: - all 2",
"spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info",
"apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: gatherConfig: 1 disabledGatherers: all",
"spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | dataReporting: obfuscation: - workload_names",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]",
"oc get -n openshift-insights deployment insights-operator -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights spec: template: spec: containers: - args: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job spec: template: spec: initContainers: - name: insights-operator image: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts:",
"oc apply -n openshift-insights -f gather-job.yaml",
"oc describe -n openshift-insights job/insights-operator-job",
"Name: insights-operator-job Namespace: openshift-insights Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job>",
"oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator",
"I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms",
"oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data",
"oc delete -n openshift-insights job insights-operator-job",
"oc extract secret/pull-secret -n openshift-config --to=.",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }",
"curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload",
"* Connection #0 to host console.redhat.com left intact {\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: interval: 2h",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: disabled: true",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: disabled: false",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 2",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"βββ cluster-logging β βββ clo β β βββ cluster-logging-operator-74dd5994f-6ttgt β β βββ clusterlogforwarder_cr β β βββ cr β β βββ csv β β βββ deployment β β βββ logforwarding_cr β βββ collector β β βββ fluentd-2tr64 β βββ eo β β βββ csv β β βββ deployment β β βββ elasticsearch-operator-7dc7d97b9d-jb4r4 β βββ es β β βββ cluster-elasticsearch β β β βββ aliases β β β βββ health β β β βββ indices β β β βββ latest_documents.json β β β βββ nodes β β β βββ nodes_stats.json β β β βββ thread_pool β β βββ cr β β βββ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β β βββ logs β β βββ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β βββ install β β βββ co_logs β β βββ install_plan β β βββ olmo_logs β β βββ subscription β βββ kibana β βββ cr β βββ kibana-9d69668d4-2rkvz βββ cluster-scoped-resources β βββ core β βββ nodes β β βββ ip-10-0-146-180.eu-west-1.compute.internal.yaml β βββ persistentvolumes β βββ pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml βββ event-filter.html βββ gather-debug.log βββ namespaces βββ openshift-logging β βββ apps β β βββ daemonsets.yaml β β βββ deployments.yaml β β βββ replicasets.yaml β β βββ statefulsets.yaml β βββ batch β β βββ cronjobs.yaml β β βββ jobs.yaml β βββ core β β βββ configmaps.yaml β β βββ endpoints.yaml β β βββ events β β β βββ elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml β β β βββ elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml β β β βββ elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml β β β βββ elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml β β β βββ elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml β β β βββ elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml β β βββ events.yaml β β βββ persistentvolumeclaims.yaml β β βββ pods.yaml β β βββ replicationcontrollers.yaml β β βββ secrets.yaml β β βββ services.yaml β βββ openshift-logging.yaml β βββ pods β β βββ cluster-logging-operator-74dd5994f-6ttgt β β β βββ cluster-logging-operator β β β β βββ cluster-logging-operator β β β β βββ logs β β β β βββ current.log β β β β βββ previous.insecure.log β β β β βββ previous.log β β β βββ cluster-logging-operator-74dd5994f-6ttgt.yaml β β βββ cluster-logging-operator-registry-6df49d7d4-mxxff β β β βββ cluster-logging-operator-registry β β β β βββ cluster-logging-operator-registry β β β β βββ logs β β β β βββ current.log β β β β βββ previous.insecure.log β β β β βββ previous.log β β β βββ cluster-logging-operator-registry-6df49d7d4-mxxff.yaml β β β βββ mutate-csv-and-generate-sqlite-db β β β βββ mutate-csv-and-generate-sqlite-db β β β βββ logs β β β βββ current.log β β β βββ previous.insecure.log β β β βββ previous.log β β βββ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β β βββ elasticsearch-im-app-1596030300-bpgcx β β β βββ elasticsearch-im-app-1596030300-bpgcx.yaml β β β βββ indexmanagement β β β βββ indexmanagement β β β βββ logs β β β βββ current.log β β β βββ previous.insecure.log β β β βββ previous.log β β βββ fluentd-2tr64 β β β βββ fluentd β β β β βββ fluentd β β β β βββ logs β β β β βββ current.log β β β β βββ previous.insecure.log β β β β βββ previous.log β β β βββ fluentd-2tr64.yaml β β β βββ fluentd-init β β β βββ fluentd-init β β β βββ logs β β β βββ current.log β β β βββ previous.insecure.log β β β βββ previous.log β β βββ kibana-9d69668d4-2rkvz β β β βββ kibana β β β β βββ kibana β β β β βββ logs β β β β βββ current.log β β β β βββ previous.insecure.log β β β β βββ previous.log β β β βββ kibana-9d69668d4-2rkvz.yaml β β β βββ kibana-proxy β β β βββ kibana-proxy β β β βββ logs β β β 
βββ current.log β β β βββ previous.insecure.log β β β βββ previous.log β βββ route.openshift.io β βββ routes.yaml βββ openshift-operators-redhat βββ",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather -- gather_network_logs",
"tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1",
"Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting",
"oc adm must-gather --volume-percentage <storage_percentage>",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"oc get nodes",
"oc debug node/my-cluster-node",
"oc new-project dummy",
"oc patch namespace dummy --type=merge -p '{\"metadata\": {\"annotations\": { \"scheduler.alpha.kubernetes.io/defaultTolerations\": \"[{\\\"operator\\\": \\\"Exists\\\"}]\"}}}'",
"oc debug node/my-cluster-node",
"chroot /host",
"toolbox",
"sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on 1",
"sos report --all-logs",
"Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"oc adm must-gather --dest-dir /tmp/captures \\ <.> --source-dir '/tmp/tcpdump/' \\ <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \\ <.> --node-selector 'node-role.kubernetes.io/worker' \\ <.> --host-network=true \\ <.> --timeout 30s \\ <.> -- tcpdump -i any \\ <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300",
"tmp/captures βββ event-filter.html βββ ip-10-0-192-217-ec2-internal 1 β βββ registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca β βββ 2022-01-13T19:31:31.pcap βββ ip-10-0-201-178-ec2-internal 2 β βββ registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca β βββ 2022-01-13T19:31:30.pcap βββ ip- βββ timestamp",
"oc get nodes",
"oc debug node/my-cluster-node",
"chroot /host",
"ip ad",
"toolbox",
"tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"chroot /host crictl ps",
"chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'",
"nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1",
"chroot /host",
"toolbox",
"dnf install -y <package_name>",
"chroot /host",
"vi ~/.toolboxrc",
"REGISTRY=quay.io 1 IMAGE=fedora/fedora:33-x86_64 2 TOOLBOX_NAME=toolbox-fedora-33 3",
"toolbox",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.13.8 True False 8h Cluster version is 4.13.8",
"oc describe clusterversion",
"Name: version Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: ClusterVersion Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce URL: https://access.redhat.com/errata/RHSA-2023:4456 Version: 4.13.8 History: Completion Time: 2023-08-17T13:20:21Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce Started Time: 2023-08-17T12:59:45Z State: Completed Verified: false Version: 4.13.8",
"ssh <user_name>@<load_balancer> systemctl status haproxy",
"ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'",
"ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'",
"dig <wildcard_fqdn> @<dns_server>",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1",
"./openshift-install create ignition-configs --dir=./install_dir",
"tail -f ~/<installation_directory>/.openshift_install.log",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u crio",
"ssh [email protected]_name.sub_domain.domain journalctl -b -f -u crio.service",
"curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1",
"grep -is 'bootstrap.ign' /var/log/httpd/access_log",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'",
"curl -I http://<http_server_fqdn>:<port>/master.ign 1",
"grep -is 'master.ign' /var/log/httpd/access_log",
"oc get nodes",
"oc describe node <master_node>",
"oc get daemonsets -n openshift-sdn",
"oc get pods -n openshift-sdn",
"oc logs <sdn_pod> -n openshift-sdn",
"oc get network.config.openshift.io cluster -o yaml",
"./openshift-install create manifests",
"oc get pods -n openshift-network-operator",
"oc logs pod/<network_operator_pod_name> -n openshift-network-operator",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u crio",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"curl https://api-int.<cluster_name>:22623/config/master",
"dig api-int.<cluster_name> @<dns_server>",
"dig -x <load_balancer_mco_ip_address> @<dns_server>",
"ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master",
"ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking",
"openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text",
"oc get pods -n openshift-etcd",
"oc get pods -n openshift-etcd-operator",
"oc describe pod/<pod_name> -n <namespace>",
"oc logs pod/<pod_name> -n <namespace>",
"oc logs pod/<pod_name> -c <container_name> -n <namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired'",
"curl -I http://<http_server_fqdn>:<port>/worker.ign 1",
"grep -is 'worker.ign' /var/log/httpd/access_log",
"oc get nodes",
"oc describe node <worker_node>",
"oc get pods -n openshift-machine-api",
"oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api",
"oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator",
"oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy",
"oc adm node-logs --role=worker -u kubelet",
"ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=worker -u crio",
"ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"oc adm node-logs --role=worker --path=sssd",
"oc adm node-logs --role=worker --path=sssd/sssd.log",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"curl https://api-int.<cluster_name>:22623/config/worker",
"dig api-int.<cluster_name> @<dns_server>",
"dig -x <load_balancer_mco_ip_address> @<dns_server>",
"ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker",
"ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking",
"openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text",
"oc get clusteroperators",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc describe clusteroperator <operator_name>",
"oc get pods -n <operator_namespace>",
"oc describe pod/<operator_pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -n <operator_namespace>",
"oc get pod -o \"jsonpath={range .status.containerStatuses[*]}{.name}{'\\t'}{.state}{'\\t'}{.image}{'\\n'}{end}\" <operator_pod_name> -n <operator_namespace>",
"oc adm release info <image_path>:<tag> --commits",
"./openshift-install gather bootstrap --dir <installation_directory> 1",
"./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address> 5",
"INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"",
"oc get nodes",
"oc adm top nodes",
"oc adm top node my-node",
"oc debug node/my-node",
"chroot /host",
"systemctl is-active kubelet",
"systemctl status kubelet",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"oc debug node/my-node",
"chroot /host",
"systemctl is-active crio",
"systemctl status crio.service",
"oc adm node-logs --role=master -u crio",
"oc adm node-logs <node_name> -u crio",
"ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory",
"can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks.",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data",
"ssh [email protected] sudo -i",
"systemctl stop kubelet",
".. for pod in USD(crictl pods -q); do if [[ \"USD(crictl inspectp USDpod | jq -r .status.linux.namespaces.options.network)\" != \"NODE\" ]]; then crictl rmp -f USDpod; fi; done",
"crictl rmp -fa",
"systemctl stop crio",
"crio wipe -f",
"systemctl start crio systemctl start kubelet",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.28.5",
"oc adm uncordon <node_name>",
"NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.28.5",
"rpm-ostree kargs --append='crashkernel=256M'",
"systemctl enable kdump.service",
"systemctl reboot",
"variant: openshift version: 4.15.0 metadata: name: 99-worker-kdump 1 labels: machineconfiguration.openshift.io/role: worker 2 openshift: kernel_arguments: 3 - crashkernel=256M storage: files: - path: /etc/kdump.conf 4 mode: 0644 overwrite: true contents: inline: | path /var/crash core_collector makedumpfile -l --message-level 7 -d 31 - path: /etc/sysconfig/kdump 5 mode: 0644 overwrite: true contents: inline: | KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\" KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable\" 6 KEXEC_ARGS=\"-s\" KDUMP_IMG=\"vmlinuz\" systemd: units: - name: kdump.service enabled: true",
"nfs server.example.com:/export/cores core_collector makedumpfile -l --message-level 7 -d 31 extra_modules nfs",
"butane 99-worker-kdump.bu -o 99-worker-kdump.yaml",
"oc create -f 99-worker-kdump.yaml",
"systemctl --failed",
"journalctl -u <unit>.service",
"NODEIP_HINT=192.0.2.1",
"echo -n 'NODEIP_HINT=192.0.2.1' | base64 -w0",
"Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx==",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-nodeip-hint-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-nodeip-hint-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration",
"[connection] id=eno1 type=ethernet interface-name=eno1 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20",
"[connection] id=eno2 type=ethernet interface-name=eno2 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20",
"[connection] id=bond1 type=bond interface-name=bond1 autoconnect=true connection.autoconnect-slaves=1 autoconnect-priority=20 [bond] mode=802.3ad miimon=100 xmit_hash_policy=\"layer3+4\" [ipv4] method=auto",
"base64 <directory_path>/en01.config",
"base64 <directory_path>/eno2.config",
"base64 <directory_path>/bond1.config",
"export ROLE=<machine_role>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-USD{ROLE}-sec-bridge-cni spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:;base64,<base-64-encoded-contents-for-bond1.conf> path: /etc/NetworkManager/system-connections/bond1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno1.conf> path: /etc/NetworkManager/system-connections/eno1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno2.conf> path: /etc/NetworkManager/system-connections/eno2.nmconnection filesystem: root mode: 0600",
"oc create -f <machine_config_file_name>",
"bond1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-extra-bridge spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/ovnk/extra_bridge mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond1 filesystem: root",
"oc create -f <machine_config_file_name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-br-ex-override spec: config: ignition: version: 3.2.0 storage: files: - path: /var/lib/ovnk/iface_default_hint mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond0 1 filesystem: root",
"oc create -f <machine_config_file_name>",
"oc get nodes -o json | grep --color exgw-ip-addresses",
"\"k8s.ovn.org/l3-gateway-config\": \\\"exgw-ip-address\\\":\\\"172.xx.xx.yy/24\\\",\\\"next-hops\\\":[\\\"xx.xx.xx.xx\\\"],",
"oc debug node/<node_name> -- chroot /host sh -c \"ip a | grep mtu | grep br-ex\"",
"Starting pod/worker-1-debug To use host binaries, run `chroot /host` 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 6: br-ex1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000",
"oc debug node/<node_name> -- chroot /host sh -c \"ip a | grep -A1 -E 'br-ex|bond0'",
"Starting pod/worker-1-debug To use host binaries, run `chroot /host` sh-5.1# ip a | grep -A1 -E 'br-ex|bond0' 2: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff -- 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff inet 10.xx.xx.xx/21 brd 10.xx.xx.255 scope global dynamic noprefixroute br-ex",
"E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for file /etc/systemd/system/ovs-vswitchd.service: [Unit]",
"oc debug node/<node_name>",
"chroot /host",
"ovs-appctl vlog/list",
"console syslog file ------- ------ ------ backtrace OFF INFO INFO bfd OFF INFO INFO bond OFF INFO INFO bridge OFF INFO INFO bundle OFF INFO INFO bundles OFF INFO INFO cfm OFF INFO INFO collectors OFF INFO INFO command_line OFF INFO INFO connmgr OFF INFO INFO conntrack OFF INFO INFO conntrack_tp OFF INFO INFO coverage OFF INFO INFO ct_dpif OFF INFO INFO daemon OFF INFO INFO daemon_unix OFF INFO INFO dns_resolve OFF INFO INFO dpdk OFF INFO INFO",
"Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /run/openvswitch' ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg",
"systemctl daemon-reload",
"systemctl restart ovs-vswitchd",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service",
"oc apply -f 99-change-ovs-loglevel.yaml",
"oc adm node-logs <node_name> -u ovs-vswitchd",
"journalctl -b -f -u ovs-vswitchd.service",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc get clusteroperators",
"oc get pod -n <operator_namespace>",
"oc describe pod <operator_pod_name> -n <operator_namespace>",
"oc debug node/my-node",
"chroot /host",
"crictl ps",
"crictl ps --name network-operator",
"oc get pods -n <operator_namespace>",
"oc logs pod/<pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"true",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"false",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'",
"oc get namespaces",
"operator-ns-1 Terminating",
"oc get crds",
"oc delete crd <crd_name>",
"oc get EtcdCluster -n <namespace_name>",
"oc get EtcdCluster --all-namespaces",
"oc delete <cr_name> <cr_instance_name> -n <namespace_name>",
"oc get namespace <namespace_name>",
"oc get sub,csv,installplan -n <namespace>",
"oc project <project_name>",
"oc get pods",
"oc status",
"skopeo inspect docker://<image_reference>",
"oc edit deployment/my-deployment",
"oc get pods -w",
"oc get events",
"oc logs <pod_name>",
"oc logs <pod_name> -c <container_name>",
"oc exec <pod_name> -- ls -alh /var/log",
"total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp",
"oc exec <pod_name> cat /var/log/<path_to_log>",
"2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 2023-07-10T10:29:38+0000 INFO",
"oc exec <pod_name> -c <container_name> ls /var/log",
"oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>",
"oc project <namespace>",
"oc rsh <pod_name> 1",
"oc rsh -c <container_name> pod/<pod_name>",
"oc port-forward <pod_name> <host_port>:<pod_port> 1",
"oc get deployment -n <project_name>",
"oc debug deployment/my-deployment --as-root -n <project_name>",
"oc get deploymentconfigs -n <project_name>",
"oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>",
"oc cp <local_path> <pod_name>:/<path> -c <container_name> 1",
"oc cp <pod_name>:/<path> -c <container_name> <local_path> 1",
"oc get pods -w 1",
"oc logs -f pod/<application_name>-<build_number>-build",
"oc logs -f pod/<application_name>-<build_number>-deploy",
"oc logs -f pod/<application_name>-<build_number>-<random_string>",
"oc describe pod/my-app-1-akdlg",
"oc logs -f pod/my-app-1-akdlg",
"oc exec my-app-1-akdlg -- cat /var/log/my-application.log",
"oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log",
"oc exec -it my-app-1-akdlg /bin/bash",
"oc debug node/my-cluster-node",
"chroot /host",
"crictl ps",
"crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print USD2}'",
"nsenter -n -t 31150 -- ip ad",
"Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume \"pvc-8837384d-69d7-40b2-b2e6-5df86943eef9\" Volume is already used by pod(s) sso-mysql-1-ns6b4",
"oc delete pod <old_pod> --force=true --grace-period=0",
"oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator",
"ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -W %h:%p core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")' <username>@<windows_node_internal_ip> 1 2",
"oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}",
"ssh -L 2020:<windows_node_internal_ip>:3389 \\ 1 core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")",
"oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}",
"C:\\> net user <username> * 1",
"oc adm node-logs -l kubernetes.io/os=windows --path= /ip-10-0-138-252.us-east-2.compute.internal containers /ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay /ip-10-0-138-252.us-east-2.compute.internal kube-proxy /ip-10-0-138-252.us-east-2.compute.internal kubelet /ip-10-0-138-252.us-east-2.compute.internal pods",
"oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log",
"oc adm node-logs -l kubernetes.io/os=windows --path=journal",
"oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker",
"C:\\> powershell",
"C:\\> Get-EventLog -LogName Application -Source Docker",
"oc -n ns1 get service prometheus-example-app -o yaml",
"labels: app: prometheus-example-app",
"oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml",
"apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring get pods",
"NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator",
"level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))",
"topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))",
"HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}')",
"TOKEN=USD(oc whoami -t)",
"curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"",
"\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},",
"oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'cd /prometheus/;du -hs USD(ls -dtr */ | grep -Eo \"[0-9|A-Z]{26}\")'",
"308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B",
"oc debug prometheus-k8s-0 -n openshift-monitoring -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'ls -latr /prometheus/ | egrep -o \"[0-9|A-Z]{26}\" | head -3 | while read BLOCK; do rm -r /prometheus/USDBLOCK; done'",
"oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- df -h /prometheus/",
"Starting pod/prometheus-k8s-0-debug-j82w4 Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod",
"oc <command> --loglevel <log_level>",
"oc whoami -t",
"sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/support/index |
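The curl query against the Prometheus /api/v1/status/tsdb endpoint shown in the commands above can be wrapped in a small helper script. The following is a minimal, hedged sketch only: it assumes a logged-in oc session with permission to read the prometheus-k8s route in openshift-monitoring, and that jq is installed locally (jq is an assumption, not something the original commands require).

#!/usr/bin/env bash
# Minimal sketch: summarize Prometheus TSDB head statistics.
# Assumptions: a logged-in `oc` session, the prometheus-k8s route in
# openshift-monitoring (as used in the commands above), and `jq` available locally.
set -euo pipefail

HOST=$(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}')
TOKEN=$(oc whoami -t)

# Query the TSDB status endpoint and print the head series count,
# followed by the per-metric series counts reported by Prometheus.
curl -sk -H "Authorization: Bearer ${TOKEN}" "https://${HOST}/api/v1/status/tsdb" \
  | jq -r '"head series: \(.data.headStats.numSeries)",
           (.data.seriesCountByMetricName[] | "\(.value)\t\(.name)")'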
Chapter 14. Using Persistent Storage in Fuse on OpenShift | Chapter 14. Using Persistent Storage in Fuse on OpenShift Fuse on OpenShift applications are based on OpenShift containers, which do not have a persistent filesystem. Every time you start an application, it is started in a new container with an immutable Docker-formatted image. Hence any persisted data in the file system is lost when the container stops. However, applications need to store some state as data in a persistent store, and sometimes applications share access to a common data store. The OpenShift platform supports provisioning of external stores as Persistent Storage. 14.1. About volumes and volume types OpenShift allows pods and containers to mount Volumes as file systems which are backed by multiple local or network attached storage endpoints. Volume types include: emptydir (empty directory): This is the default volume type. It is a directory which gets allocated when the pod is created on a local host. It is not copied across the servers and when you delete the pod the directory is removed. configmap: It is a directory with contents populated with key-value pairs from a named configmap. hostPath (host directory): It is a directory with a specific path on any host and it requires elevated privileges. secret (mounted secret): Secret volumes mount a named secret to the provided directory. persistentvolumeclaim or pvc (persistent volume claim): This links the volume directory in the container to a persistent volume claim you have allocated by name. A persistent volume claim is a request to allocate storage. Note that if your claim is not bound, your pods will not start. Volumes are configured at the Pod level and can only directly access external storage using hostPath . Hence it is harder to manage access to shared resources for multiple Pods when using hostPath volumes. 14.2. About PersistentVolumes PersistentVolumes allow cluster administrators to provision cluster-wide storage which is backed by various types of network storage like NFS, Ceph RBD, AWS Elastic Block Store (EBS), etc. PersistentVolumes also specify capacity, access modes, and recycling policies. This allows pods from multiple Projects to access persistent storage without worrying about the nature of the underlying resource. See Configuring Persistent Storage for creating various types of PersistentVolumes. 14.3. Configuring persistent volume You can provision a persistent volume by creating a configuration file. This storage can then be accessed by creating a PersistentVolumeClaim. Procedure Create a configuration file named pv.yaml using the sample configuration below. This provisions a path on the host machine as a PersistentVolume named pv0001. Here the host path is /data/pv0001 and storage capacity is limited to 2MB. For example, when using OpenShift CDK it will provision the directory /data/pv0001 from the virtual machine hosting the OpenShift Cluster. Create the PersistentVolume . Verify the creation of the PersistentVolume . This will list all the PersistentVolumes configured in your OpenShift cluster: 14.4. Creating PersistentVolumeClaims A PersistentVolume exposes a storage endpoint as a named entity in an OpenShift cluster. To access this storage from Projects, PersistentVolumeClaims must be created that can access the PersistentVolume . PersistentVolumeClaims are created for each Project with customized claims for a certain amount of storage with certain access modes.
Procedure The sample configuration below creates a claim named pvc0001 for 1MB of storage with read-write-once access against a PersistentVolume named pv0001. 14.5. Using persistent volumes in pods Pods use volume mounts to define the filesystem mount location and volumes to reference PersistentVolumeClaims . Procedure Create a sample container configuration as shown below, which mounts PersistentVolumeClaim pvc0001 at /usr/share/data in its filesystem. Any data written by the application to the directory /usr/share/data is now persisted across container restarts. Add this configuration to the file src/main/jkube/deployment.yml in a Fuse on OpenShift application and create the OpenShift resources using the command: Verify that the created DeploymentConfiguration has the volume mount and the volume. For Fuse on OpenShift quickstarts, replace <application-dc-name> with the Maven project name, for example spring-boot-camel . A hedged sketch for verifying that data survives a pod restart follows the command listing below. | [
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: accessModes: - ReadWriteOnce capacity: storage: 2Mi hostPath: path: /data/pv0001/",
"create -f pv.yaml",
"get pv",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc0001 spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Mi",
"spec: template: spec: containers: - volumeMounts: - name: vol0001 mountPath: /usr/share/data volumes: - name: vol0001 persistentVolumeClaim: claimName: pvc0001",
"mvn oc:resource-apply",
"describe deploymentconfig <application-dc-name>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/fuse_on_openshift_guide/access-persistent-storage-fuse-on-openshift |
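As a follow-up to the deployment.yml snippet above, the following hedged sketch shows one way to check that data written under /usr/share/data survives a pod restart. The DeploymentConfiguration name spring-boot-camel, the label selector app=spring-boot-camel, and the marker file name are illustrative assumptions rather than values guaranteed by the quickstart; adjust them to your project.

# Minimal verification sketch for the persistent mount described above.
# Assumptions: the quickstart is deployed as deploymentconfig/spring-boot-camel
# and its pods carry the label app=spring-boot-camel; adjust both to your project.
POD=$(oc get pods -l app=spring-boot-camel -o jsonpath='{.items[0].metadata.name}')

# Write a marker file into the mounted PersistentVolumeClaim.
oc exec "$POD" -- sh -c 'date > /usr/share/data/persistence-check.txt'

# Delete the pod; the DeploymentConfiguration recreates it against the same claim.
oc delete pod "$POD"
oc rollout status deploymentconfig/spring-boot-camel

# Read the marker back from the replacement pod to confirm the data persisted.
NEW_POD=$(oc get pods -l app=spring-boot-camel -o jsonpath='{.items[0].metadata.name}')
oc exec "$NEW_POD" -- cat /usr/share/data/persistence-check.txt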
Chapter 3. ClusterRole [rbac.authorization.k8s.io/v1] | Chapter 3. ClusterRole [rbac.authorization.k8s.io/v1] Description ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding. Type object 3.1. Specification Property Type Description aggregationRule object AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. rules array Rules holds all the PolicyRules for this ClusterRole rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 3.1.1. .aggregationRule Description AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole Type object Property Type Description clusterRoleSelectors array (LabelSelector) ClusterRoleSelectors holds a list of selectors which will be used to find ClusterRoles and create the rules. If any of the selectors match, then the ClusterRole's permissions will be added 3.1.2. .rules Description Rules holds all the PolicyRules for this ClusterRole Type array 3.1.3. .rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. '*' represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 3.2. API endpoints The following API endpoints are available: /apis/rbac.authorization.k8s.io/v1/clusterroles DELETE : delete collection of ClusterRole GET : list or watch objects of kind ClusterRole POST : create a ClusterRole /apis/rbac.authorization.k8s.io/v1/watch/clusterroles GET : watch individual changes to a list of ClusterRole. 
deprecated: use the 'watch' parameter with a list operation instead. /apis/rbac.authorization.k8s.io/v1/clusterroles/{name} DELETE : delete a ClusterRole GET : read the specified ClusterRole PATCH : partially update the specified ClusterRole PUT : replace the specified ClusterRole /apis/rbac.authorization.k8s.io/v1/watch/clusterroles/{name} GET : watch changes to an object of kind ClusterRole. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/rbac.authorization.k8s.io/v1/clusterroles HTTP method DELETE Description delete collection of ClusterRole Table 3.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ClusterRole Table 3.3. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterRole Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body ClusterRole schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 202 - Accepted ClusterRole schema 401 - Unauthorized Empty 3.2.2. /apis/rbac.authorization.k8s.io/v1/watch/clusterroles HTTP method GET Description watch individual changes to a list of ClusterRole. deprecated: use the 'watch' parameter with a list operation instead. Table 3.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/rbac.authorization.k8s.io/v1/clusterroles/{name} Table 3.8. Global path parameters Parameter Type Description name string name of the ClusterRole HTTP method DELETE Description delete a ClusterRole Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterRole Table 3.11. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterRole Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterRole Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. Body parameters Parameter Type Description body ClusterRole schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty 3.2.4. 
/apis/rbac.authorization.k8s.io/v1/watch/clusterroles/{name} Table 3.17. Global path parameters Parameter Type Description name string name of the ClusterRole HTTP method GET Description watch changes to an object of kind ClusterRole. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/rbac_apis/clusterrole-rbac-authorization-k8s-io-v1 |
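To make the rules fields described above (apiGroups, resources, verbs, nonResourceURLs) concrete, here is a minimal, hedged example of creating a ClusterRole. The role name example-pod-reader and the particular resources granted are illustrative assumptions only, not values taken from the API reference.

# Minimal sketch: a read-only ClusterRole built from the fields described above.
# The name "example-pod-reader" and the chosen resources are illustrative assumptions.
oc apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-pod-reader
rules:
# "" selects the core API group; verbs list the actions allowed on the resources.
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
# Non-resource URL rules only take effect when the ClusterRole is referenced from a ClusterRoleBinding.
- nonResourceURLs: ["/healthz"]
  verbs: ["get"]
EOF

# Inspect the created role.
oc describe clusterrole example-pod-reader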
Chapter 3. Installing the Red Hat Virtualization Manager | Chapter 3. Installing the Red Hat Virtualization Manager 3.1. Installing the Red Hat Virtualization Manager Machine and the Remote Server The Red Hat Virtualization Manager must run on Red Hat Enterprise Linux 8. For detailed installation instructions, see Performing a standard RHEL installation . This machine must meet the minimum Manager hardware requirements . Install a second Red Hat Enterprise Linux machine to use for the databases. This machine will be referred to as the remote server. To install the Red Hat Virtualization Manager on a system that does not have access to the Content Delivery Network, see Configuring an Offline Repository for Installation before configuring the Manager. 3.2. Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the Manager machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable the pki-deps module. # dnf module -y enable pki-deps Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. # dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream Before configuring the Red Hat Virtualization Manager, you must manually configure the Manager database on the remote server. You can also use this procedure to manually configure the Data Warehouse database if you do not want the Data Warehouse setup script to configure it automatically. 3.3. Preparing a Remote PostgreSQL Database In a remote database environment, you must create the Manager database manually before running engine-setup . Note The engine-setup and engine-backup --mode=restore commands only support system error messages in the en_US.UTF8 locale, even if the system locale is different. The locale settings in the postgresql.conf file must be set to en_US.UTF8 . 
Important The database name must contain only numbers, underscores, and lowercase letters. Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the database machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. # dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream Initializing the PostgreSQL Database Install the PostgreSQL server package: # dnf install postgresql-server postgresql-contrib Initialize the PostgreSQL database instance: Enable the postgresql service and configure it to start when the machine boots: Connect to the psql command line interface as the postgres user: Create a default user. The Manager's default user is engine : postgres=# create role user_name with login encrypted password ' password '; Create a database. The Manager's default database name is engine : postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8'; Connect to the new database: postgres=# \c database_name Add the uuid-ossp extension: database_name =# CREATE EXTENSION "uuid-ossp"; Add the plpgsql language if it does not exist: database_name =# CREATE LANGUAGE plpgsql; Quit the psql interface: database_name =# \q Edit the /var/lib/pgsql/data/pg_hba.conf file to enable md5 client authentication, so that the engine can access the database remotely. Add the following line immediately below the line that starts with local at the bottom of the file. 
Replace X.X.X.X with the IP address of the Manager or Data Warehouse machine, and replace 0-32 or 0-128 with the CIDR mask length: host database_name user_name X.X.X.X/0-32 md5 host database_name user_name X.X.X.X::/0-128 md5 For example: # IPv4, 32-bit address: host engine engine 192.168.12.10/32 md5 # IPv6, 128-bit address: host engine engine fe80::7a31:c1ff:0000:0000/96 md5 Allow TCP/IP connections to the database. Edit the /var/lib/pgsql/data/postgresql.conf file and add the following line: listen_addresses='*' This example configures the postgresql service to listen for connections on all interfaces. You can specify an interface by giving its IP address. Update the PostgreSQL server's configuration. In the /var/lib/pgsql/data/postgresql.conf file, add the following lines to the bottom of the file: autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem=8192 Open the default port used for PostgreSQL database connections, and save the updated firewall rules: # firewall-cmd --zone=public --add-service=postgresql # firewall-cmd --permanent --zone=public --add-service=postgresql Restart the postgresql service: # systemctl restart postgresql Optionally, set up SSL to secure database connections. 3.4. Installing and Configuring the Red Hat Virtualization Manager Install the package and dependencies for the Red Hat Virtualization Manager, and configure it using the engine-setup command. The script asks you a series of questions and, after you provide the required values for all questions, applies that configuration and starts the ovirt-engine service. Important The engine-setup command guides you through several distinct configuration stages, each comprising several steps that require user input. Suggested configuration defaults are provided in square brackets; if the suggested value is acceptable for a given step, press Enter to accept that value. You can run engine-setup --accept-defaults to automatically accept all questions that have default answers. This option should be used with caution and only if you are familiar with engine-setup . Procedure Ensure all packages are up to date: # dnf upgrade --nobest Note Reboot the machine if any kernel-related packages were updated. Install the rhvm package and dependencies. # dnf install rhvm Run the engine-setup command to begin configuring the Red Hat Virtualization Manager: # engine-setup Optional: Type Yes and press Enter to set up Cinderlib integration on this machine: Set up Cinderlib integration (Currently in tech preview) (Yes, No) [No]: Important Cinderlib is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features support scope, see Red Hat Technology Preview Features Support Scope . Press Enter to configure the Manager on this machine: Configure Engine on this host (Yes, No) [Yes]: Optional: Install Open Virtual Network (OVN). Selecting Yes installs an OVN server on the Manager machine and adds it to Red Hat Virtualization as an external network provider. This action also configures the Default cluster to use OVN as its default network provider. 
Important Also see the " steps" in Adding Open Virtual Network (OVN) as an External Network Provider in the Administration Guide . Configuring ovirt-provider-ovn also sets the Default cluster's default network provider to ovirt-provider-ovn. Non-Default clusters may be configured with an OVN after installation. Configure ovirt-provider-ovn (Yes, No) [Yes]: For more information on using OVN networks in Red Hat Virtualization, see Adding Open Virtual Network (OVN) as an External Network Provider in the Administration Guide . Optional: Allow engine-setup to configure a WebSocket Proxy server for allowing users to connect to virtual machines through the noVNC console: Configure WebSocket Proxy on this machine? (Yes, No) [Yes]: Important The WebSocket Proxy and noVNC are Technology Preview features only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information see Red Hat Technology Preview Features Support Scope . To configure Data Warehouse on a remote server, answer No and see Installing and Configuring Data Warehouse on a Separate Machine after completing the Manager configuration. Please note: Data Warehouse is required for the engine. If you choose to not configure it on this host, you have to configure it on a remote host, and then configure the engine on this host so that it can access the database of the remote Data Warehouse host. Configure Data Warehouse on this host (Yes, No) [Yes]: Important Red Hat only supports installing the Data Warehouse database, the Data Warehouse service, and Grafana all on the same machine as each other. To configure Grafana on the same machine as the Data Warehouse service, enter No : Configure Grafana on this host (Yes, No) [Yes]: Optional: Allow access to a virtual machine's serial console from the command line. Configure VM Console Proxy on this host (Yes, No) [Yes]: Additional configuration is required on the client machine to use this feature. See Opening a Serial Console to a Virtual Machine in the Virtual Machine Management Guide . Press Enter to accept the automatically detected host name, or enter an alternative host name and press Enter . Note that the automatically detected host name may be incorrect if you are using virtual hosts. Host fully qualified DNS name of this server [ autodetected host name ]: The engine-setup command checks your firewall configuration and offers to open the ports used by the Manager for external communication, such as ports 80 and 443. If you do not allow engine-setup to modify your firewall configuration, you must manually open the ports used by the Manager. firewalld is configured as the firewall manager. Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]: If you choose to automatically configure the firewall, and no firewall managers are active, you are prompted to select your chosen firewall manager from a list of supported options. Type the name of the firewall manager and press Enter . This applies even in cases where only one option is listed. 
Specify whether to configure the Manager database on this machine, or on another machine: Where is the Engine database located? (Local, Remote) [Local]: Note Deployment with a remote engine database is now deprecated. This functionality will be removed in a future release. If you select Remote , input the following values for the preconfigured remote database server. Replace localhost with the ip address or FQDN of the remote database server: Engine database host [localhost]: Engine database port [5432]: Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password: Set a password for the automatically created administrative user of the Red Hat Virtualization Manager: Engine admin password: Confirm engine admin password: Select Gluster , Virt , or Both : Application mode (Both, Virt, Gluster) [Both]: Both - offers the greatest flexibility. In most cases, select Both . Virt - allows you to run virtual machines in the environment. Gluster - only allows you to manage GlusterFS from the Administration Portal. Note GlusterFS Storage is deprecated, and will no longer be supported in future releases. If you installed the OVN provider, you can choose to use the default credentials, or specify an alternative. Use default credentials (admin@internal) for ovirt-provider-ovn (Yes, No) [Yes]: oVirt OVN provider user[admin@internal]: oVirt OVN provider password: Set the default value for the wipe_after_delete flag, which wipes the blocks of a virtual disk when the disk is deleted. Default SAN wipe after delete (Yes, No) [No]: The Manager uses certificates to communicate securely with its hosts. This certificate can also optionally be used to secure HTTPS communications with the Manager. Provide the organization name for the certificate: Organization name for certificate [ autodetected domain-based name ]: Optionally allow engine-setup to make the landing page of the Manager the default page presented by the Apache web server: Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications. Do you wish to set the application as the default web page of the server? (Yes, No) [Yes]: By default, external SSL (HTTPS) communication with the Manager is secured with the self-signed certificate created earlier in the configuration to securely communicate with hosts. Alternatively, choose another certificate for external HTTPS connections; this does not affect how the Manager communicates with hosts: Setup can configure apache to use SSL using a certificate issued from the internal CA. Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]: You can specify a unique password for the Grafana admin user, or use same one as the Manager admin password: Use Engine admin password as initial Grafana admin password (Yes, No) [Yes]: Review the installation settings, and press Enter to accept the values and proceed with the installation: Please confirm installation settings (OK, Cancel) [OK]: When your environment has been configured, engine-setup displays details about how to access your environment. steps If you chose to manually configure the firewall, engine-setup provides a custom list of ports that need to be opened, based on the options selected during setup. 
engine-setup also saves your answers to a file that can be used to reconfigure the Manager using the same values, and outputs the location of the log file for the Red Hat Virtualization Manager configuration process. If you intend to link your Red Hat Virtualization environment with a directory server, configure the date and time to synchronize with the system clock used by the directory server to avoid unexpected account expiry issues. See Synchronizing the System Clock with a Remote Server in the Red Hat Enterprise Linux System Administrator's Guide for more information. Install the certificate authority according to the instructions provided by your browser. You can get the certificate authority's certificate by navigating to http://<manager-fqdn>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA , replacing <manager-fqdn> with the FQDN that you provided during the installation. Install the Data Warehouse service and database on the remote server: 3.5. Installing and Configuring Data Warehouse on a Separate Machine This section describes installing and configuring the Data Warehouse service on a separate machine from the Red Hat Virtualization Manager. Installing Data Warehouse on a separate machine helps to reduce the load on the Manager machine. Note Red Hat only supports installing the Data Warehouse database, the Data Warehouse service and Grafana all on the same machine as each other, even though you can install each of these components on separate machines from each other. Prerequisites The Red Hat Virtualization Manager is installed on a separate machine. A physical server or virtual machine running Red Hat Enterprise Linux 8. The Manager database password. Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the Data Warehouse machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable the pki-deps module. # dnf module -y enable pki-deps Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. 
# dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream Installing Data Warehouse on a Separate Machine Procedure Log in to the machine where you want to install the database. Ensure that all packages are up to date: # dnf upgrade --nobest Install the ovirt-engine-dwh-setup package: # dnf install ovirt-engine-dwh-setup Run the engine-setup command to begin the installation: # engine-setup Answer Yes to install Data Warehouse on this machine: Configure Data Warehouse on this host (Yes, No) [Yes]: Answer Yes to install Grafana on this machine: Configure Grafana on this host (Yes, No) [Yes]: Press Enter to accept the automatically-detected host name, or enter an alternative host name and press Enter : Host fully qualified DNS name of this server [ autodetected hostname ]: Press Enter to automatically configure the firewall, or type No and press Enter to maintain existing settings: Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]: If you choose to automatically configure the firewall, and no firewall managers are active, you are prompted to select your chosen firewall manager from a list of supported options. Type the name of the firewall manager and press Enter . This applies even in cases where only one option is listed. Enter the fully qualified domain name of the Manager machine, and then press Enter : Host fully qualified DNS name of the engine server []: Press Enter to allow setup to sign the certificate on the Manager via SSH: Setup will need to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action. Please choose one of the following: 1 - Access remote engine server using ssh as root 2 - Perform each action manually, use files to copy content around (1, 2) [1]: Press Enter to accept the default SSH port, or enter an alternative port number and then press Enter : ssh port on remote engine server [22]: Enter the root password for the Manager machine: root password on remote engine server manager.example.com : Specify whether to host the Data Warehouse database on this machine (Local), or on another machine (Remote).: Note Red Hat only supports installing the Data Warehouse database, the Data Warehouse service and Grafana all on the same machine as each other, even though you can install each of these components on separate machines from each other. Where is the DWH database located? (Local, Remote) [Local]: If you select Local , the engine-setup script can configure your database automatically (including adding a user and a database), or it can connect to a preconfigured local database: Setup can configure the local postgresql server automatically for the DWH to run. This may conflict with existing applications. Would you like Setup to automatically configure postgresql and create DWH database, or prefer to perform that manually? (Automatic, Manual) [Automatic]: If you select Automatic by pressing Enter , no further action is required here. 
If you select Manual , input the following values for the manually-configured local database: DWH database secured connection (Yes, No) [No]: DWH database name [ovirt_engine_history]: DWH database user [ovirt_engine_history]: DWH database password: If you select Remote , you are prompted to provide details about the remote database host. Input the following values for the preconfigured remote database host: DWH database host []: dwh-db-fqdn DWH database port [5432]: DWH database secured connection (Yes, No) [No]: DWH database name [ovirt_engine_history]: DWH database user [ovirt_engine_history]: DWH database password: password If you select Remote , you are prompted to enter the username and password for the Grafana database user: Grafana database user [ovirt_engine_history_grafana]: Grafana database password: Enter the fully qualified domain name and password for the Manager database machine. If you are installing the Data Warehouse database on the same machine where the Manager database is installed, use the same FQDN. Press Enter to accept the default values in each other field: Engine database host []: engine-db-fqdn Engine database port [5432]: Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password: password Choose how long Data Warehouse will retain collected data: Please choose Data Warehouse sampling scale: (1) Basic (2) Full (1, 2)[1]: Full uses the default values for the data storage settings listed in Application Settings for the Data Warehouse service in ovirt-engine-dwhd.conf (recommended when Data Warehouse is installed on a remote host). Basic reduces the values of DWH_TABLES_KEEP_HOURLY to 720 and DWH_TABLES_KEEP_DAILY to 0 , easing the load on the Manager machine. Use Basic when the Manager and Data Warehouse are installed on the same machine. Confirm your installation settings: Please confirm installation settings (OK, Cancel) [OK]: After the Data Warehouse configuration is complete, on the Red Hat Virtualization Manager, restart the ovirt-engine service: # systemctl restart ovirt-engine Optionally, set up SSL to secure database connections. Log in to the Administration Portal, where you can add hosts and storage to the environment: 3.6. Connecting to the Administration Portal Access the Administration Portal using a web browser. In a web browser, navigate to https:// manager-fqdn /ovirt-engine , replacing manager-fqdn with the FQDN that you provided during installation. Note You can access the Administration Portal using alternate host names or IP addresses. To do so, you need to add a configuration file under /etc/ovirt-engine/engine.conf.d/ . For example: # vi /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf SSO_ALTERNATE_ENGINE_FQDNS=" alias1.example.com alias2.example.com " The list of alternate host names needs to be separated by spaces. You can also add the IP address of the Manager to the list, but using IP addresses instead of DNS-resolvable host names is not recommended. Click Administration Portal . An SSO login page displays. SSO login enables you to log in to the Administration and VM Portal at the same time. Enter your User Name and Password . If you are logging in for the first time, use the user name admin along with the password that you specified during installation. Select the Domain to authenticate against. If you are logging in using the internal admin user name, select the internal domain. Click Log In . 
You can view the Administration Portal in multiple languages. The default selection is chosen based on the locale settings of your web browser. If you want to view the Administration Portal in a language other than the default, select your preferred language from the drop-down list on the welcome page. To log out of the Red Hat Virtualization Administration Portal, click your user name in the header bar and click Sign Out . You are logged out of all portals and the Manager welcome screen displays. | [
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"dnf repolist",
"subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms",
"subscription-manager release --set=8.6",
"dnf module -y enable pki-deps",
"dnf module -y enable postgresql:12",
"dnf module -y enable nodejs:14",
"dnf distro-sync --nobest",
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"dnf repolist",
"subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms",
"subscription-manager release --set=8.6",
"dnf module -y enable postgresql:12",
"dnf module -y enable nodejs:14",
"dnf distro-sync --nobest",
"dnf install postgresql-server postgresql-contrib",
"postgresql-setup --initdb",
"systemctl enable postgresql systemctl start postgresql",
"su - postgres -c psql",
"postgres=# create role user_name with login encrypted password ' password ';",
"postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';",
"postgres=# \\c database_name",
"database_name =# CREATE EXTENSION \"uuid-ossp\";",
"database_name =# CREATE LANGUAGE plpgsql;",
"database_name =# \\q",
"host database_name user_name X.X.X.X/0-32 md5 host database_name user_name X.X.X.X::/0-128 md5",
"IPv4, 32-bit address: host engine engine 192.168.12.10/32 md5 IPv6, 128-bit address: host engine engine fe80::7a31:c1ff:0000:0000/96 md5",
"listen_addresses='*'",
"autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem=8192",
"firewall-cmd --zone=public --add-service=postgresql firewall-cmd --permanent --zone=public --add-service=postgresql",
"systemctl restart postgresql",
"dnf upgrade --nobest",
"dnf install rhvm",
"engine-setup",
"Set up Cinderlib integration (Currently in tech preview) (Yes, No) [No]:",
"Configure Engine on this host (Yes, No) [Yes]:",
"Configuring ovirt-provider-ovn also sets the Default cluster's default network provider to ovirt-provider-ovn. Non-Default clusters may be configured with an OVN after installation. Configure ovirt-provider-ovn (Yes, No) [Yes]:",
"Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:",
"Please note: Data Warehouse is required for the engine. If you choose to not configure it on this host, you have to configure it on a remote host, and then configure the engine on this host so that it can access the database of the remote Data Warehouse host. Configure Data Warehouse on this host (Yes, No) [Yes]:",
"Configure Grafana on this host (Yes, No) [Yes]:",
"Configure VM Console Proxy on this host (Yes, No) [Yes]:",
"Host fully qualified DNS name of this server [ autodetected host name ]:",
"Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]:",
"Where is the Engine database located? (Local, Remote) [Local]:",
"Engine database host [localhost]: Engine database port [5432]: Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password:",
"Engine admin password: Confirm engine admin password:",
"Application mode (Both, Virt, Gluster) [Both]:",
"Use default credentials (admin@internal) for ovirt-provider-ovn (Yes, No) [Yes]: oVirt OVN provider user[admin@internal]: oVirt OVN provider password:",
"Default SAN wipe after delete (Yes, No) [No]:",
"Organization name for certificate [ autodetected domain-based name ]:",
"Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications. Do you wish to set the application as the default web page of the server? (Yes, No) [Yes]:",
"Setup can configure apache to use SSL using a certificate issued from the internal CA. Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:",
"Use Engine admin password as initial Grafana admin password (Yes, No) [Yes]:",
"Please confirm installation settings (OK, Cancel) [OK]:",
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"dnf repolist",
"subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms",
"subscription-manager release --set=8.6",
"dnf module -y enable pki-deps",
"dnf module -y enable postgresql:12",
"dnf module -y enable nodejs:14",
"dnf distro-sync --nobest",
"dnf upgrade --nobest",
"dnf install ovirt-engine-dwh-setup",
"engine-setup",
"Configure Data Warehouse on this host (Yes, No) [Yes]:",
"Configure Grafana on this host (Yes, No) [Yes]:",
"Host fully qualified DNS name of this server [ autodetected hostname ]:",
"Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]:",
"Host fully qualified DNS name of the engine server []:",
"Setup will need to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action. Please choose one of the following: 1 - Access remote engine server using ssh as root 2 - Perform each action manually, use files to copy content around (1, 2) [1]:",
"ssh port on remote engine server [22]:",
"root password on remote engine server manager.example.com :",
"Where is the DWH database located? (Local, Remote) [Local]:",
"Setup can configure the local postgresql server automatically for the DWH to run. This may conflict with existing applications. Would you like Setup to automatically configure postgresql and create DWH database, or prefer to perform that manually? (Automatic, Manual) [Automatic]:",
"DWH database secured connection (Yes, No) [No]: DWH database name [ovirt_engine_history]: DWH database user [ovirt_engine_history]: DWH database password:",
"DWH database host []: dwh-db-fqdn DWH database port [5432]: DWH database secured connection (Yes, No) [No]: DWH database name [ovirt_engine_history]: DWH database user [ovirt_engine_history]: DWH database password: password",
"Grafana database user [ovirt_engine_history_grafana]: Grafana database password:",
"Engine database host []: engine-db-fqdn Engine database port [5432]: Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password: password",
"Please choose Data Warehouse sampling scale: (1) Basic (2) Full (1, 2)[1]:",
"Please confirm installation settings (OK, Cancel) [OK]:",
"systemctl restart ovirt-engine",
"vi /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf SSO_ALTERNATE_ENGINE_FQDNS=\" alias1.example.com alias2.example.com \""
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/Installing_the_Red_Hat_Virtualization_Manager_SM_remoteDB_deploy |
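Before running engine-setup against a Remote Data Warehouse database, as described in the procedure above, it can be useful to confirm that the preconfigured database host is reachable and that the credentials work. The following is a minimal sketch, assuming the database name, user, and port shown in the setup prompts (ovirt_engine_history on port 5432); dwh-db-fqdn and the password are placeholders, and the nc and psql client tools are assumed to be installed:

    # Check that the remote Data Warehouse database port is reachable
    nc -zv dwh-db-fqdn 5432
    # Verify the credentials and database access with the psql client
    PGPASSWORD=password psql -h dwh-db-fqdn -p 5432 -U ovirt_engine_history -d ovirt_engine_history -c 'SELECT 1;'

If either step fails, review connectivity and the pg_hba.conf and listen_addresses settings on the database host before continuing with engine-setup.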
4.303. spice-vdagent | 4.303. spice-vdagent 4.303.1. RHBA-2011:1577 - spice-vdagent bug fix and enhancement update An updated spice-vdagent package that fixes multiple bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The spice-vdagent package provides a SPICE agent for Linux guests. The spice-vdagent package has been upgraded to upstream version 0.8.1, which provides a number of bug fixes and enhancements over the previous version. (BZ# 722477 ) Note: the system must be rebooted in order for these changes to take effect. Users are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/spice-vdagent
Chapter 114. KafkaUserStatus schema reference | Chapter 114. KafkaUserStatus schema reference Used in: KafkaUser Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer username Username. string secret The name of Secret where the credentials are stored. string | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkauserstatus-reference |
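For orientation, the properties listed above appear together in the status section of a reconciled KafkaUser resource. The following is a hypothetical sketch only; the apiVersion, user name, condition values, and secret name are assumptions and will differ in a real cluster:

    apiVersion: kafka.strimzi.io/v1beta2   # assumed API version; check the CRD installed in your cluster
    kind: KafkaUser
    metadata:
      name: my-user
    status:
      conditions:
        - type: Ready
          status: "True"
      observedGeneration: 1
      username: CN=my-user        # illustrative; the format depends on the authentication type
      secret: my-user             # Secret holding the generated credentials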
7.45. dvd+rw-tools | 7.45. dvd+rw-tools 7.45.1. RHBA-2012:1320 - dvd+rw-tools bug fix update Updated dvd+rw-tools packages that fix one bug are now available for Red Hat Enterprise Linux 6. The dvd+rw-tools packages contain a collection of tools to master DVD+RW/+R media. BZ#807474 Prior to this update, the growisofs utility wrote chunks of 32KB and reported an error during the last chunk when burning ISO image files that were not aligned to 32KB. This update allows the written chunk to be smaller than a multiple of 16 blocks. All users of dvd+rw-tools are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/dvd-rw-tools |
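For context on the fix above, growisofs writes an ISO image directly to DVD media in fixed-size chunks. A minimal usage sketch; the device path and image name are placeholders:

    # Burn an ISO image to a blank DVD+R/+RW disc
    growisofs -dvd-compat -Z /dev/dvd=image.iso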
Chapter 1. Authenticating with the Guest user | Chapter 1. Authenticating with the Guest user To explore Developer Hub features, you can skip configuring authentication and authorization. You can configure Developer Hub to log in as a Guest user and access Developer Hub features. 1.1. Authenticating with the Guest user on an Operator-based installation After an Operator-based installation, you can configure Developer Hub to log in as a Guest user and access Developer Hub features. Prerequisites You installed Developer Hub by using the Operator . You added a custom Developer Hub application configuration , and have sufficient permissions to modify it. Procedure To enable the guest user in your Developer Hub custom configuration, edit your Developer Hub application configuration with the following content: app-config-rhdh.yaml fragment auth: environment: development providers: guest: dangerouslyAllowOutsideDevelopment: true Verification Go to the Developer Hub login page. To log in with the Guest user account, click Enter in the Guest tile. In the Developer Hub Settings page, your profile name is Guest . You can use Developer Hub features. 1.2. Authenticating with the Guest user on a Helm-based installation On a Helm-based installation, you can configure Developer Hub to log in as a Guest user and access Developer Hub features. Prerequisites You installed Developer Hub by using the Helm Chart . Procedure To enable the guest user in your Developer Hub custom configuration, configure your Red Hat Developer Hub Helm Chart with the following content: Red Hat Developer Hub Helm Chart configuration fragment upstream: backstage: appConfig: app: baseUrl: 'https://{{- include "janus-idp.hostname" . }}' auth: environment: development providers: guest: dangerouslyAllowOutsideDevelopment: true Verification Go to the Developer Hub login page. To log in with the Guest user account, click Enter in the Guest tile. In the Developer Hub Settings page, your profile name is Guest . You can use Developer Hub features. | [
"auth: environment: development providers: guest: dangerouslyAllowOutsideDevelopment: true",
"upstream: backstage: appConfig: app: baseUrl: 'https://{{- include \"janus-idp.hostname\" . }}' auth: environment: development providers: guest: dangerouslyAllowOutsideDevelopment: true"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/authentication/authenticating-with-the-guest-user_title-authentication |
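The Helm values fragment above only takes effect once it is applied to the release. The following is a hedged sketch of rolling it out; the release name, chart reference, and namespace are placeholders and should match the values used for the original installation:

    # Apply the updated values file containing the guest provider settings
    helm upgrade <release-name> <chart-reference> \
      --namespace <namespace> \
      -f values.yaml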
Chapter 2. Viewing, starting and stopping the Identity Management services | Chapter 2. Viewing, starting and stopping the Identity Management services Identity Management (IdM) servers are Red Hat Enterprise Linux systems that work as domain controllers (DCs). A number of different services are running on IdM servers, most notably the Directory Server, Certificate Authority (CA), DNS, and Kerberos. 2.1. The IdM services There are many different services that can be installed and run on the IdM servers and clients. List of services hosted by IdM servers Most of the following services are not strictly required to be installed on the IdM server. For example, you can install services such as a certificate authority (CA) or DNS server on an external server outside the IdM domain. Kerberos the krb5kdc and kadmin services IdM uses the Kerberos protocol to support single sign-on. With Kerberos, users only need to present the correct username and password once and can access IdM services without the system prompting for credentials again. Kerberos is divided into two parts: The krb5kdc service is the Kerberos Authentication service and Key Distribution Center (KDC) daemon. The kadmin service is the Kerberos database administration program. For information about how to authenticate using Kerberos in IdM, see Logging in to Identity Management from the command line and Logging in to IdM in the Web UI: Using a Kerberos ticket . LDAP directory server the dirsrv service The IdM LDAP directory server instance stores all IdM information, such as information related to Kerberos, user accounts, host entries, services, policies, DNS, and others. The LDAP directory server instance is based on the same technology as Red Hat Directory Server . However, it is tuned to IdM-specific tasks. Certificate Authority the pki-tomcatd service The integrated certificate authority (CA) is based on the same technology as Red Hat Certificate System . pki is the command-line interface for accessing Certificate System services. You can also install the server without the integrated CA if you create and provide all required certificates independently. For more information, see Planning your CA services . Domain Name System (DNS) the named service IdM uses DNS for dynamic service discovery. The IdM client installation utility can use information from DNS to automatically configure the client machine. After the client is enrolled in the IdM domain, it uses DNS to locate IdM servers and services within the domain. The BIND (Berkeley Internet Name Domain) implementation of the DNS (Domain Name System) protocols in Red Hat Enterprise Linux includes the named DNS server. named-pkcs11 is a version of the BIND DNS server built with native support for the PKCS#11 cryptographic standard. For information, see Planning your DNS services and host names . Apache HTTP Server the httpd service The Apache HTTP web server provides the IdM Web UI, and also manages communication between the Certificate Authority and other IdM services. Samba / Winbind smb and winbind services Samba implements the Server Message Block (SMB) protocol, also known as the Common Internet File System (CIFS) protocol, in Red Hat Enterprise Linux. Via the smb service, the SMB protocol enables you to access resources on a server, such as file shares and shared printers. If you have configured a Trust with an Active Directory (AD) environment, the`Winbind` service manages communication between IdM servers and AD servers. 
One-time password (OTP) authentication the ipa-otpd services One-time passwords (OTP) are passwords that are generated by an authentication token for only one session, as part of two-factor authentication. OTP authentication is implemented in Red Hat Enterprise Linux via the ipa-otpd service. For more information, see Logging in to the Identity Management Web UI using one time passwords . OpenDNSSEC the ipa-dnskeysyncd service OpenDNSSEC is a DNS manager that automates the process of keeping track of DNS security extensions (DNSSEC) keys and the signing of zones. The ipa-dnskeysyncd service manages synchronization between the IdM Directory Server and OpenDNSSEC. List of services hosted by IdM clients System Security Services Daemon : the sssd service The System Security Services Daemon (SSSD) is the client-side application that manages user authentication and caching credentials. Caching enables the local system to continue normal authentication operations if the IdM server becomes unavailable or if the client goes offline. For more information, see Understanding SSSD and its benefits . Certmonger : the certmonger service The certmonger service monitors and renews the certificates on the client. It can request new certificates for the services on the system. For more information, see Obtaining an IdM certificate for a service using certmonger . 2.2. Viewing the status of IdM services To view the status of the IdM services that are configured on your IdM server, run the ipactl status command: The output of the ipactl status command on your server depends on your IdM configuration. For example, if an IdM deployment does not include a DNS server, the named service is not present in the list. Note You cannot use the IdM web UI to view the status of all the IdM services running on a particular IdM server. Kerberized services running on different servers can be viewed in the Identity Services tab of the IdM web UI. You can start or stop the entire server, or an individual service only. To start, stop, or restart the entire IdM server, see: Starting and stopping the entire Identity Management server To start, stop, or restart an individual IdM service, see: Starting and stopping an individual Identity Management service To display the version of IdM software, see: Methods for displaying IdM software version 2.3. Starting and stopping the entire Identity Management server Use the ipa systemd service to stop, start, or restart the entire IdM server along with all the installed services. Using the systemctl utility to control the ipa systemd service ensures all services are stopped, started, or restarted in the appropriate order. The ipa systemd service also upgrades the RHEL IdM configuration before starting the IdM services, and it uses the proper SELinux contexts when administrating with IdM services. You do not need to have a valid Kerberos ticket to run the systemctl ipa commands. ipa systemd service commands To start the entire IdM server: To stop the entire IdM server: To restart the entire IdM server: To show the status of all the services that make up IdM, use the ipactl utility: Important Do not directly use the ipactl utility to start, stop, or restart IdM services. Use the systemctl ipa commands instead, which call the ipactl utility in a predictable environment. You cannot use the IdM web UI to perform the ipactl commands. 2.4. Starting and stopping an individual Identity Management service Changing IdM configuration files manually is generally not recommended. 
However, certain situations require that an administrator performs a manual configuration of specific services. In such situations, use the systemctl utility to stop, start, or restart an individual IdM service. For example, use systemctl after customizing the Directory Server behavior, without modifying the other IdM services: Also, when initially deploying an IdM trust with Active Directory, modify the /etc/sssd/sssd.conf file, adding: Specific parameters to tune the timeout configuration options in an environment where remote servers have a high latency Specific parameters to tune the Active Directory site affinity Overrides for certain configuration options that are not provided by the global IdM settings To apply the changes you have made in the /etc/sssd/sssd.conf file: Running systemctl restart sssd.service is required because the System Security Services Daemon (SSSD) does not automatically re-read or re-apply its configuration. Note that for changes that affect IdM identity ranges, a complete server reboot is recommended. Important To restart multiple IdM domain services, always use systemctl restart ipa . Because of dependencies between the services installed with the IdM server, the order in which they are started and stopped is critical. The ipa systemd service ensures that the services are started and stopped in the appropriate order. Useful systemctl commands To start a particular IdM service: To stop a particular IdM service: To restart a particular IdM service: To view the status of a particular IdM service: Important You cannot use the IdM web UI to start or stop the individual services running on IdM servers. You can only use the web UI to modify the settings of a Kerberized service by navigating to Identity Services and selecting the service. Additional resources Starting and stopping the entire Identity Management server 2.5. Methods for displaying IdM software version You can display the IdM version number with: The IdM WebUI ipa commands rpm commands Displaying version through the WebUI In the IdM WebUI, the software version can be displayed by choosing About from the username menu at the upper-right. Displaying version with ipa commands From the command line, use the ipa --version command. Displaying version with rpm commands If IdM services are not operating properly, you can use the rpm utility to determine the version number of the ipa-server package that is currently installed. | [
"ipactl status Directory Service: RUNNING krb5kdc Service: RUNNING kadmin Service: RUNNING named Service: RUNNING httpd Service: RUNNING pki-tomcatd Service: RUNNING smb Service: RUNNING winbind Service: RUNNING ipa-otpd Service: RUNNING ipa-dnskeysyncd Service: RUNNING ipa: INFO: The ipactl command was successful",
"systemctl start ipa",
"systemctl stop ipa",
"systemctl restart ipa",
"ipactl status",
"systemctl restart [email protected]",
"systemctl restart sssd.service",
"systemctl start name .service",
"systemctl stop name .service",
"systemctl restart name .service",
"systemctl status name .service",
"ipa --version VERSION: 4.8.0 , API_VERSION: 2.233",
"rpm -q ipa-server ipa-server-4.8.0-11 .module+el8.1.0+4247+9f3fd721.x86_64"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/accessing_identity_management_services/viewing-starting-and-stopping-the-ipa-server_accessing-idm-services |
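To make the sssd.conf tuning mentioned in the trust scenario above more concrete, the following is an illustrative [domain] fragment; the option names (krb5_auth_timeout, ad_site) are believed to be standard SSSD options but should be verified against the sssd.conf(5) and sssd-ad(5) man pages for your release, and the domain name, timeout value, and site name are placeholders:

    [domain/idm.example.com]
    # Raise the Kerberos authentication timeout for high-latency links (illustrative value)
    krb5_auth_timeout = 30
    # Pin clients to a specific Active Directory site (illustrative site name)
    ad_site = MainSite

As noted above, run systemctl restart sssd.service after saving the file so that SSSD re-reads its configuration.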
Chapter 5. FirmwareSchema [metal3.io/v1alpha1] | Chapter 5. FirmwareSchema [metal3.io/v1alpha1] Description FirmwareSchema is the Schema for the firmwareschemas API. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object FirmwareSchemaSpec defines the desired state of FirmwareSchema. 5.1.1. .spec Description FirmwareSchemaSpec defines the desired state of FirmwareSchema. Type object Required schema Property Type Description hardwareModel string The hardware model associated with this schema hardwareVendor string The hardware vendor associated with this schema schema object Map of firmware name to schema schema{} object Additional data describing the firmware setting. 5.1.2. .spec.schema Description Map of firmware name to schema Type object 5.1.3. .spec.schema{} Description Additional data describing the firmware setting. Type object Property Type Description allowable_values array (string) The allowable value for an Enumeration type setting. attribute_type string The type of setting. lower_bound integer The lowest value for an Integer type setting. max_length integer Maximum length for a String type setting. min_length integer Minimum length for a String type setting. read_only boolean Whether or not this setting is read only. unique boolean Whether or not this setting's value is unique to this node, e.g. a serial number. upper_bound integer The highest value for an Integer type setting. 5.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/firmwareschemas GET : list objects of kind FirmwareSchema /apis/metal3.io/v1alpha1/namespaces/{namespace}/firmwareschemas DELETE : delete collection of FirmwareSchema GET : list objects of kind FirmwareSchema POST : create a FirmwareSchema /apis/metal3.io/v1alpha1/namespaces/{namespace}/firmwareschemas/{name} DELETE : delete a FirmwareSchema GET : read the specified FirmwareSchema PATCH : partially update the specified FirmwareSchema PUT : replace the specified FirmwareSchema 5.2.1. /apis/metal3.io/v1alpha1/firmwareschemas HTTP method GET Description list objects of kind FirmwareSchema Table 5.1. HTTP responses HTTP code Reponse body 200 - OK FirmwareSchemaList schema 401 - Unauthorized Empty 5.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/firmwareschemas HTTP method DELETE Description delete collection of FirmwareSchema Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind FirmwareSchema Table 5.3. HTTP responses HTTP code Reponse body 200 - OK FirmwareSchemaList schema 401 - Unauthorized Empty HTTP method POST Description create a FirmwareSchema Table 5.4. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body FirmwareSchema schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK FirmwareSchema schema 201 - Created FirmwareSchema schema 202 - Accepted FirmwareSchema schema 401 - Unauthorized Empty 5.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/firmwareschemas/{name} Table 5.7. Global path parameters Parameter Type Description name string name of the FirmwareSchema HTTP method DELETE Description delete a FirmwareSchema Table 5.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified FirmwareSchema Table 5.10. HTTP responses HTTP code Reponse body 200 - OK FirmwareSchema schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified FirmwareSchema Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.12. HTTP responses HTTP code Reponse body 200 - OK FirmwareSchema schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified FirmwareSchema Table 5.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.14. Body parameters Parameter Type Description body FirmwareSchema schema Table 5.15. HTTP responses HTTP code Reponse body 200 - OK FirmwareSchema schema 201 - Created FirmwareSchema schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/provisioning_apis/firmwareschema-metal3-io-v1alpha1 |
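FirmwareSchema objects are generated by the operator rather than created by hand, so the typical interaction is read-only. A minimal sketch of inspecting them with the oc client; the namespace shown (openshift-machine-api) is an assumption and depends on where your bare-metal hosts are managed:

    # List the generated firmware schemas
    oc get firmwareschemas -n openshift-machine-api
    # Inspect per-setting constraints (attribute_type, bounds, allowable_values, read_only)
    oc get firmwareschema <schema-name> -n openshift-machine-api -o yaml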
2.43. RHEA-2011:0611 - new package: subscription-manager | 2.43. RHEA-2011:0611 - new package: subscription-manager New subscription-manager packages that provide GUI and command line tools for the new Subscription Manager system are now available for Red Hat Enterprise Linux 6. The new Subscription Management tooling will allow users to understand the specific products which have been installed on their machines, and the specific subscriptions which their machines are consuming. This enhancement update adds new subscription-manager packages to Red Hat Enterprise Linux 6. (BZ# 567635 ) All users should install these newly-released packages, which add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/subscription-manager_new |
7.115. logrotate | 7.115. logrotate 7.115.1. RHBA-2015:1293 - logrotate bug fix and enhancement update Updated logrotate packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The logrotate utility simplifies the administration of multiple log files, allowing the automatic rotation, compression, removal, and mailing of log files. Bug Fixes BZ# 625034 When the logrotate utility attempted to write its status file while insufficient disk space was available, logrotate wrote only part of the status file and stopped. When the disk space became free again, and logrotate attempted to read its records, logrotate terminated unexpectedly. This bug has been fixed, and logrotate no longer crashes in the aforementioned scenario. BZ# 722209 Previously, the daily cronjob of logrotate redirected all error messages to the /dev/null device file, thus suppressing all the relevant information for troubleshooting. With this update, all error messages containing detailed error reports are mailed to the root user. In addition, the /etc/cron.daily/logrotate file has been marked as a configuration file in RPM. BZ# 1012485 Previously, the /etc/cron.daily/logrotate file had incorrect permissions set. This update changes the permissions to 0700, and /etc/cron.daily/logrotate now conforms to Red Hat security policy GEN003080. BZ# 1117189 The logrotate utility incorrectly deleted data files alphabetically instead of based on their age when the "-%d-%m-%Y" date format was used. This update sorts files returned by the glob() function according to the date extension. As a result, when the aforementioned date format is used, the oldest log is now removed as expected. Enhancements BZ# 1125769 The logrotate "olddir" directive now automatically creates a directory if it is not already present. BZ# 1047899 This update adds support for "size" directive parsing and for the "maxsize" directive. Users of logrotate are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-logrotate
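To illustrate the directives referenced in the enhancements above, the following is a hedged logrotate configuration fragment; the log path, rotation count, and size threshold are placeholders:

    /var/log/myapp/*.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
        # Rotate early if the log grows past 100 MB, even before the daily schedule ("maxsize" directive)
        maxsize 100M
        # Move rotated logs into a subdirectory; created automatically if missing ("olddir" directive)
        olddir /var/log/myapp/archive
    }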
Machine management | Machine management OpenShift Container Platform 4.17 Adding and maintaining cluster machines Red Hat OpenShift Documentation Team | [
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 placementGroupPartition: <placement_group_partition_number> 6",
"providerSpec: value: metadataServiceOptions: authentication: Required 1",
"providerSpec: placement: tenancy: dedicated",
"providerSpec: value: spotMarketOptions: {}",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.30.3 ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.30.3 ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.30.3 ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.30.3 ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.30.3 ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.30.3",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE preserve-dsoc12r4-ktjfc-worker-us-east-2a 1 1 1 1 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b 2 2 2 2 3d11h",
"oc get machines -n openshift-machine-api | grep worker",
"preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r Running m5.xlarge us-east-2 us-east-2a 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w Running m5.xlarge us-east-2 us-east-2b 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw Running m5.xlarge us-east-2 us-east-2b 3d11h",
"oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json>",
"jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json \"g4dn.xlarge\"",
"oc -n openshift-machine-api get preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json -",
"10c10 < \"name\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\", --- > \"name\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\", 21c21 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 31c31 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 60c60 < \"instanceType\": \"g4dn.xlarge\", --- > \"instanceType\": \"m5.xlarge\",",
"oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json",
"machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created",
"oc -n openshift-machine-api get machinesets | grep gpu",
"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a 1 1 1 1 4m21s",
"oc -n openshift-machine-api get machines | grep gpu",
"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d",
"oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'",
"Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700",
"providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1",
"providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2",
"providerSpec: value: spotVMOptions: {}",
"oc edit machineset <machine-set-name>",
"providerSpec: value: osDisk: diskSettings: 1 ephemeralStorageLocation: Local 2 cachingType: ReadOnly 3 managedDisk: storageAccountType: Standard_LRS 4",
"oc create -f <machine-set-config>.yaml",
"oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2",
"\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }",
"oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt",
"oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt",
"oc edit machineset <machine-set-name>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4",
"oc create -f <machine-set-name>.yaml",
"oc get machines",
"oc debug node/<node-name> -- chroot /host lsblk",
"apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd",
"StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.",
"failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"",
"providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: osDisk: # managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1",
"oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 6h9m myclustername-worker-centralus2 1 1 1 1 6h9m myclustername-worker-centralus3 1 1 1 1 6h9m",
"oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml",
"cat machineset-azure.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"0\" machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T14:08:19Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"23601\" uid: acd56e0c-7612-473a-ae37-8704f34b80de spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1",
"cp machineset-azure.yaml machineset-azure-gpu.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"1\" machine.openshift.io/memoryMb: \"28672\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T20:27:12Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-nc4ast4-gpu-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"166285\" uid: 4eedce7f-6a57-4abe-b529-031140f02ffa spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_NC4as_T4_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1",
"diff machineset-azure.yaml machineset-azure-gpu.yaml",
"14c14 < name: myclustername-worker-centralus1 --- > name: myclustername-nc4ast4-gpu-worker-centralus1 23c23 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 30c30 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 67c67 < vmSize: Standard_D4s_v3 --- > vmSize: Standard_NC4as_T4_v3",
"oc create -f machineset-azure-gpu.yaml",
"machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE clustername-n6n4r-nc4ast4-gpu-worker-centralus1 1 1 1 1 122m clustername-n6n4r-worker-centralus1 1 1 1 1 8h clustername-n6n4r-worker-centralus2 1 1 1 1 8h clustername-n6n4r-worker-centralus3 1 1 1 1 8h",
"oc get machines -n openshift-machine-api",
"NAME PHASE TYPE REGION ZONE AGE myclustername-master-0 Running Standard_D8s_v3 centralus 2 6h40m myclustername-master-1 Running Standard_D8s_v3 centralus 1 6h40m myclustername-master-2 Running Standard_D8s_v3 centralus 3 6h40m myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running centralus 1 21m myclustername-worker-centralus1-rbh6b Running Standard_D4s_v3 centralus 1 6h38m myclustername-worker-centralus2-dbz7w Running Standard_D4s_v3 centralus 2 6h38m myclustername-worker-centralus3-p9b8c Running Standard_D4s_v3 centralus 3 6h38m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION myclustername-master-0 Ready control-plane,master 6h39m v1.30.3 myclustername-master-1 Ready control-plane,master 6h41m v1.30.3 myclustername-master-2 Ready control-plane,master 6h39m v1.30.3 myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.30.3 myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.30.3 myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.30.3 myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.30.3",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h",
"oc create -f machineset-azure-gpu.yaml",
"get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h",
"oc get machineset -n openshift-machine-api | grep gpu",
"myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m",
"oc -n openshift-machine-api get machines | grep gpu",
"myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running Standard_NC4as_T4_v3 centralus 1 21m",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d",
"oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'",
"Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true",
"providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1",
"providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2",
"providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: type: <pd-disk-type> 1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3",
"providerSpec: value: preemptible: true",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5",
"providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3",
"providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5",
"machineType: a2-highgpu-1g onHostMaintenance: Terminate",
"{ \"apiVersion\": \"machine.openshift.io/v1beta1\", \"kind\": \"MachineSet\", \"metadata\": { \"annotations\": { \"machine.openshift.io/GPU\": \"0\", \"machine.openshift.io/memoryMb\": \"16384\", \"machine.openshift.io/vCPU\": \"4\" }, \"creationTimestamp\": \"2023-01-13T17:11:02Z\", \"generation\": 1, \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\" }, \"name\": \"myclustername-2pt9p-worker-gpu-a\", \"namespace\": \"openshift-machine-api\", \"resourceVersion\": \"20185\", \"uid\": \"2daf4712-733e-4399-b4b4-d43cb1ed32bd\" }, \"spec\": { \"replicas\": 1, \"selector\": { \"matchLabels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"template\": { \"metadata\": { \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machine-role\": \"worker\", \"machine.openshift.io/cluster-api-machine-type\": \"worker\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"spec\": { \"lifecycleHooks\": {}, \"metadata\": {}, \"providerSpec\": { \"value\": { \"apiVersion\": \"machine.openshift.io/v1beta1\", \"canIPForward\": false, \"credentialsSecret\": { \"name\": \"gcp-cloud-credentials\" }, \"deletionProtection\": false, \"disks\": [ { \"autoDelete\": true, \"boot\": true, \"image\": \"projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64\", \"labels\": null, \"sizeGb\": 128, \"type\": \"pd-ssd\" } ], \"kind\": \"GCPMachineProviderSpec\", \"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\", \"metadata\": { \"creationTimestamp\": null }, \"networkInterfaces\": [ { \"network\": \"myclustername-2pt9p-network\", \"subnetwork\": \"myclustername-2pt9p-worker-subnet\" } ], \"preemptible\": true, \"projectID\": \"myteam\", \"region\": \"us-central1\", \"serviceAccounts\": [ { \"email\": \"[email protected]\", \"scopes\": [ \"https://www.googleapis.com/auth/cloud-platform\" ] } ], \"tags\": [ \"myclustername-2pt9p-worker\" ], \"userDataSecret\": { \"name\": \"worker-user-data\" }, \"zone\": \"us-central1-a\" } } } } }, \"status\": { \"availableReplicas\": 1, \"fullyLabeledReplicas\": 1, \"observedGeneration\": 1, \"readyReplicas\": 1, \"replicas\": 1 } }",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.30.3 myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.30.3 myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.30.3 myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.30.3 myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.30.3 myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.30.3 myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.30.3",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-2pt9p-worker-a 1 1 1 1 8h myclustername-2pt9p-worker-b 1 1 1 1 8h myclustername-2pt9p-worker-c 1 1 8h myclustername-2pt9p-worker-f 0 0 8h",
"oc get machines -n openshift-machine-api | grep worker",
"myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h",
"oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json>",
"jq .spec.template.spec.providerSpec.value.machineType ocp_4.17_machineset-a2-highgpu-1g.json \"a2-highgpu-1g\"",
"\"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\",",
"oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.17_machineset-a2-highgpu-1g.json -",
"15c15 < \"name\": \"myclustername-2pt9p-worker-gpu-a\", --- > \"name\": \"myclustername-2pt9p-worker-a\", 25c25 < \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 34c34 < \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 59,60c59 < \"machineType\": \"a2-highgpu-1g\", < \"onHostMaintenance\": \"Terminate\", --- > \"machineType\": \"n2-standard-4\",",
"oc create -f ocp_4.17_machineset-a2-highgpu-1g.json",
"machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created",
"oc -n openshift-machine-api get machinesets | grep gpu",
"myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m",
"oc -n openshift-machine-api get machines | grep gpu",
"myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d",
"oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'",
"Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: powervs-credentials image: name: rhcos-<infrastructure_id> 11 type: Name keyPairName: <infrastructure_id>-key kind: PowerVSMachineProviderConfig memoryGiB: 32 network: regex: ^DHCPSERVER[0-9a-z]{32}_PrivateUSD type: RegEx processorType: Shared processors: \"0.5\" serviceInstance: id: <ibm_power_vs_service_instance_id> type: ID 12 systemType: s922 userDataSecret: name: <role>-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone>",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>",
"oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable=\"true\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'",
"oc get secret -n openshift-machine-api vsphere-cloud-credentials -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'",
"<vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user>",
"oc create secret generic vsphere-cloud-credentials -n openshift-machine-api --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password>",
"oc get secret -n openshift-machine-api worker-user-data -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'",
"disableTemplating: false userData: 1 { \"ignition\": { }, }",
"oc create secret generic worker-user-data -n openshift-machine-api --from-file=<installation_directory>/worker.ign",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" numCPUs: 4 numCoresPerSocket: 4 snapshot: \"\" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions",
"urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data-managed",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"oc get machines.machine.openshift.io -n openshift-machine-api",
"oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api",
"oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get machines.machine.openshift.io",
"spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>",
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m",
"oc edit machinesets.machine.openshift.io <machine_set_name> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1",
"oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h",
"oc annotate machine.machine.openshift.io/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=4 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api",
"oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s",
"oc scale --replicas=2 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api",
"oc describe machine.machine.openshift.io <machine_name_updated_1> -n openshift-machine-api",
"oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s",
"oc get machine -n openshift-machine-api",
"NAME PHASE TYPE REGION ZONE AGE mycluster-5kbsp-master-0 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-master-1 Running m6i.xlarge us-west-1 us-west-1b 4h55m mycluster-5kbsp-master-2 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-worker-us-west-1a-fmx8t Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1a-m889l Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1b-c8qzm Running m6i.xlarge us-west-1 us-west-1b 4h51m",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: mycluster-5kbsp-worker-us-west-1a-fmx8t status: phase: Running 1",
"oc get machine -n openshift-machine-api",
"oc delete machine <machine> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: <hook_name> 1 owner: <hook_owner> 2",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preTerminate: - name: <hook_name> 1 owner: <hook_owner> 2",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: 1 - name: MigrateImportantApp owner: my-app-migration-controller preTerminate: 2 - name: BackupFileSystem owner: my-backup-controller - name: CloudProviderSpecialCase owner: my-custom-storage-detach-controller 3 - name: WaitForStorageDetach owner: my-custom-storage-detach-controller",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2",
"apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: <gpu_type> 7 min: 0 8 max: 16 9 logVerbosity: 4 10 scaleDown: 11 enabled: true 12 delayAfterAdd: 10m 13 delayAfterDelete: 5m 14 delayAfterFailure: 30s 15 unneededTime: 5m 16 utilizationThreshold: \"0.4\" 17 expanders: [\"Random\"] 18",
"oc get machinesets.machine.openshift.io",
"NAME DESIRED CURRENT READY AVAILABLE AGE archive-agl030519-vplxk-worker-us-east-1c 1 1 1 1 25m fast-01-agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m fast-02-agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m fast-03-agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m fast-04-agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m prod-01-agl030519-vplxk-worker-us-east-1a 1 1 1 1 33m prod-02-agl030519-vplxk-worker-us-east-1c 1 1 1 1 33m",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-autoscaler-priority-expander 1 namespace: openshift-machine-api 2 data: priorities: |- 3 10: - .*fast.* - .*archive.* 40: - .*prod.*",
"oc create configmap cluster-autoscaler-priority-expander --from-file=<location_of_config_map_file>/cluster-autoscaler-priority-expander.yml",
"oc get configmaps cluster-autoscaler-priority-expander -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc create -f <filename>.yaml 1",
"apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6",
"oc create -f <filename>.yaml 1",
"oc get MachineAutoscaler -n openshift-machine-api",
"NAME REF KIND REF NAME MIN MAX AGE compute-us-east-1a MachineSet compute-us-east-1a 1 12 39m compute-us-west-1a MachineSet compute-us-west-1a 2 4 37m",
"oc get MachineAutoscaler/<machine_autoscaler_name> \\ 1 -n openshift-machine-api -o yaml> <machine_autoscaler_name_backup>.yaml 2",
"oc delete MachineAutoscaler/<machine_autoscaler_name> -n openshift-machine-api",
"machineautoscaler.autoscaling.openshift.io \"compute-us-east-1a\" deleted",
"oc get MachineAutoscaler -n openshift-machine-api",
"oc get ClusterAutoscaler",
"NAME AGE default 42m",
"oc get ClusterAutoscaler/default \\ 1 -o yaml> <cluster_autoscaler_backup_name>.yaml 2",
"oc delete ClusterAutoscaler/default",
"clusterautoscaler.autoscaling.openshift.io \"default\" deleted",
"oc get ClusterAutoscaler",
"No resources found",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra 3 machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: infra 2 machine.openshift.io/cluster-api-machine-type: infra name: <infrastructure_id>-infra-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8 taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: \"1\" 22",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 7 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> name: <infrastructure_id>-<infra>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14 taints: 15 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1",
"oc label node <node_name> <label>",
"oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=",
"cat infra.mcp.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2",
"oc create -f infra.mcp.yaml",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d",
"cat infra.mc.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra",
"oc create -f infra.mc.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m",
"oc describe nodes <node_name>",
"describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved",
"tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Exists 6 value: reserved 7",
"spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get ingresscontroller default -n openshift-ingress-operator -o yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default",
"oc edit ingresscontroller default -n openshift-ingress-operator",
"spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pod -n openshift-ingress -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>",
"oc get node <node_name> 1",
"NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.30.3",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pods -o wide -n openshift-image-registry",
"oc describe node <node_name>",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute metricsServer: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute monitoringPlugin: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute",
"watch 'oc get pod -n openshift-monitoring -o wide'",
"oc delete pod -n openshift-monitoring <pod>",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none>",
"oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"",
"oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" 3",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 2 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 3 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\"",
"oc get pods -n openshift-vertical-pod-autoscaler -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none>",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES clusterresourceoverride-786b8c898c-9wrdq 1/1 Running 0 23s 10.128.2.32 ip-10-0-14-183.us-west-2.compute.internal <none> <none> clusterresourceoverride-786b8c898c-vn2lf 1/1 Running 0 26s 10.130.2.10 ip-10-0-20-140.us-west-2.compute.internal <none> <none> clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 56m 10.131.0.33 ip-10-0-2-39.us-west-2.compute.internal <none> <none>",
"NAME STATUS ROLES AGE VERSION ip-10-0-14-183.us-west-2.compute.internal Ready control-plane,master 65m v1.30.4 ip-10-0-2-39.us-west-2.compute.internal Ready worker 58m v1.30.4 ip-10-0-20-140.us-west-2.compute.internal Ready control-plane,master 65m v1.30.4 ip-10-0-23-244.us-west-2.compute.internal Ready infra 55m v1.30.4 ip-10-0-77-153.us-west-2.compute.internal Ready control-plane,master 65m v1.30.4 ip-10-0-99-108.us-west-2.compute.internal Ready worker 24m v1.30.4 ip-10-0-24-233.us-west-2.compute.internal Ready infra 55m v1.30.4 ip-10-0-88-109.us-west-2.compute.internal Ready worker 24m v1.30.4 ip-10-0-67-453.us-west-2.compute.internal Ready infra 55m v1.30.4",
"oc edit -n clusterresourceoverride-operator subscriptions.operators.coreos.com clusterresourceoverride",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"",
"oc edit ClusterResourceOverride cluster -n clusterresourceoverride-operator",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster resourceVersion: \"37952\" spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 deploymentOverrides: replicas: 1 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 deploymentOverrides: replicas: 3 nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 1 - key: \"key\" operator: \"Equal\" value: \"value\" effect: \"NoSchedule\"",
"oc get pods -n clusterresourceoverride-operator -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES clusterresourceoverride-786b8c898c-9wrdq 1/1 Running 0 23s 10.127.2.25 ip-10-0-23-244.us-west-2.compute.internal <none> <none> clusterresourceoverride-786b8c898c-vn2lf 1/1 Running 0 26s 10.128.0.80 ip-10-0-24-233.us-west-2.compute.internal <none> <none> clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 56m 10.129.0.71 ip-10-0-67-453.us-west-2.compute.internal <none> <none>",
"aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.8*\" \\ 3 --region us-east-1 \\ 4 --output table 5",
"------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.8.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.8.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.8.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.8.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.17-for-rhel-8-x86_64-rpms\"",
"yum install openshift-ansible openshift-clients jq",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.17-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"oc get nodes -o wide",
"oc adm cordon <node_name> 1",
"oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1",
"oc delete nodes <node_name> 1",
"oc get nodes -o wide",
"aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.8*\" \\ 3 --region us-east-1 \\ 4 --output table 5",
"------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.8.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.8.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.8.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.8.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.17-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com [new_workers] mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"aws cloudformation describe-stacks --stack-name <name>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<HTTP_server>/worker.ign",
"RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master",
"NAME PHASE TYPE REGION ZONE AGE <infrastructure_id>-master-0 Running m6i.xlarge us-west-1 us-west-1a 5h19m <infrastructure_id>-master-1 Running m6i.xlarge us-west-1 us-west-1b 5h19m <infrastructure_id>-master-2 Running m6i.xlarge us-west-1 us-west-1a 5h19m",
"No resources found in openshift-machine-api namespace.",
"oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api",
"oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 2 strategy: type: RollingUpdate 3 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 4 <platform_failure_domains> 5 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> 6 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 7",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc create -f <control_plane_machine_set>.yaml",
"oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api",
"openstack compute service set <target_node_host_name> nova-compute --disable",
"oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api",
"oc delete machine -n openshift-machine-api <control_plane_machine_name> 1",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster 1 namespace: openshift-machine-api spec: replicas: 3 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 3 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 4 strategy: type: RollingUpdate 5 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 6 <platform_failure_domains> 7 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 8",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: ami: id: ami-<ami_id_string> 1 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: 2 encrypted: true iops: 0 kmsKey: arn: \"\" volumeSize: 120 volumeType: gp3 credentialsSecret: name: aws-cloud-credentials 3 deviceIndex: 0 iamInstanceProfile: id: <cluster_id>-master-profile 4 instanceType: m6i.xlarge 5 kind: AWSMachineProviderConfig 6 loadBalancers: 7 - name: <cluster_id>-int type: network - name: <cluster_id>-ext type: network metadata: creationTimestamp: null metadataServiceOptions: {} placement: 8 region: <region> 9 availabilityZone: \"\" 10 tenancy: 11 securityGroups: - filters: - name: tag:Name values: - <cluster_id>-master-sg 12 subnet: {} 13 userDataSecret: name: master-user-data 14",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: aws: - placement: availabilityZone: <aws_zone_a> 1 subnet: 2 filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_a> 3 type: Filters 4 - placement: availabilityZone: <aws_zone_b> 5 subnet: filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_b> 6 type: Filters platform: AWS 7",
"providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network",
"providerSpec: value: instanceType: <compatible_aws_instance_type> 1",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 placementGroupPartition: <placement_group_partition_number> 6",
"providerSpec: value: metadataServiceOptions: authentication: Required 1",
"providerSpec: placement: tenancy: dedicated",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials 1 namespace: openshift-machine-api diagnostics: {} image: 2 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 3 sku: \"\" version: \"\" internalLoadBalancer: <cluster_id>-internal 4 kind: AzureMachineProviderSpec 5 location: <region> 6 managedIdentity: <cluster_id>-identity metadata: creationTimestamp: null name: <cluster_id> networkResourceGroup: <cluster_id>-rg osDisk: 7 diskSettings: {} diskSizeGB: 1024 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <cluster_id> 8 resourceGroup: <cluster_id>-rg subnet: <cluster_id>-master-subnet 9 userDataSecret: name: master-user-data 10 vmSize: Standard_D8s_v3 vnet: <cluster_id>-vnet zone: \"1\" 11",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: azure: - zone: \"1\" 1 - zone: \"2\" - zone: \"3\" platform: Azure 2",
"providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700",
"providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1",
"providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2",
"oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2",
"\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }",
"oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt",
"oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt",
"oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: ControlPlaneMachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4",
"oc get machines",
"oc debug node/<node-name> -- chroot /host lsblk",
"StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.",
"failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"",
"providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: osDisk: # managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1",
"oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master",
"providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get ControlPlaneMachineSet/cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials 1 deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 2 labels: null sizeGb: 200 type: pd-ssd kind: GCPMachineProviderSpec 3 machineType: e2-standard-4 metadata: creationTimestamp: null metadataServiceOptions: {} networkInterfaces: - network: <cluster_id>-network subnetwork: <cluster_id>-master-subnet projectID: <project_name> 4 region: <region> 5 serviceAccounts: 6 - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform shieldedInstanceConfig: {} tags: - <cluster_id>-master targetPools: - <cluster_id>-api userDataSecret: name: master-user-data 7 zone: \"\" 8",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: gcp: - zone: <gcp_zone_a> 1 - zone: <gcp_zone_b> 2 - zone: <gcp_zone_c> - zone: <gcp_zone_d> platform: GCP 3",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: type: pd-ssd 1",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 1 categories: 2 - key: <category_name> value: <category_value> cluster: 3 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 4 image: 5 name: <cluster_id>-rhcos type: name kind: NutanixMachineProviderConfig 6 memorySize: 16Gi 7 metadata: creationTimestamp: null project: 8 type: name name: <project_name> subnets: 9 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 10 userDataSecret: name: master-user-data 11 vcpuSockets: 8 12 vcpusPerSocket: 1 13",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials 1 namespace: openshift-machine-api flavor: m1.xlarge 2 image: ocp1-2g2xs-rhcos kind: OpenstackProviderSpec 3 metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: ocp1-2g2xs-nodes tags: openshiftClusterID=ocp1-2g2xs securityGroups: - filter: {} name: ocp1-2g2xs-master 4 serverGroupName: ocp1-2g2xs-master serverMetadata: Name: ocp1-2g2xs-master openshiftClusterID: ocp1-2g2xs tags: - openshiftClusterID=ocp1-2g2xs trunk: true userDataSecret: name: master-user-data",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: platform: OpenStack openstack: - availabilityZone: nova-az0 rootVolume: availabilityZone: cinder-az0 - availabilityZone: nova-az1 rootVolume: availabilityZone: cinder-az1 - availabilityZone: nova-az2 rootVolume: availabilityZone: cinder-az2",
"providerSpec: value: flavor: m1.xlarge 1",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 2 kind: VSphereMachineProviderSpec 3 memoryMiB: 16384 4 metadata: creationTimestamp: null network: 5 devices: - networkName: <vm_network_name> numCPUs: 4 6 numCoresPerSocket: 4 7 snapshot: \"\" template: <vm_template_name> 8 userDataSecret: name: master-user-data 9 workspace: 10 datacenter: <vcenter_data_center_name> 11 datastore: <vcenter_datastore_name> 12 folder: <path_to_vcenter_vm_folder> 13 resourcePool: <vsphere_resource_pool> 14 server: <vcenter_server_ip> 15",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: 1 platform: VSphere vsphere: 2 - name: <failure_domain_name1> - name: <failure_domain_name2>",
"oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains[0].name}",
"https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions",
"urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2",
"oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api",
"oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api",
"oc edit machine <control_plane_machine_name>",
"oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api",
"oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api -o wide",
"oc edit machine <control_plane_machine_name>",
"oc edit machine/<cluster_id>-master-0 -n openshift-machine-api",
"providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.14 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 rootVolume: availabilityZone: nova 1 diskSize: 30 sourceUUID: rhcos-4.12 volumeType: fast-0 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data",
"oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api",
"oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api",
"oc edit machine/<cluster_id>-master-1 -n openshift-machine-api",
"providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.17 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master-az0 1 serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data",
"oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api",
"oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api",
"oc delete controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api",
"oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api",
"oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api",
"oc create -f <cluster_resource_file>.yaml",
"oc get cluster",
"NAME PHASE AGE VERSION <cluster_name> Provisioning 4h6m",
"apiVersion: infrastructure.cluster.x-k8s.io/<version> 1 kind: <infrastructure_kind> 2 metadata: name: <cluster_name> 3 namespace: openshift-cluster-api spec: 4",
"oc create -f <infrastructure_resource_file>.yaml",
"oc get <infrastructure_kind>",
"NAME CLUSTER READY <cluster_name> <cluster_name> true",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <machine_template_kind> 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3",
"oc create -f <machine_template_resource_file>.yaml",
"oc get <machine_template_kind>",
"NAME AGE <template_name> 77m",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: 3",
"oc create -f <machine_set_resource_file>.yaml",
"oc get machineset -n openshift-cluster-api 1",
"NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <machine_set_name> <cluster_name> 1 1 1 17m",
"oc get machine -n openshift-cluster-api 1",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_set_name>-<string_id> <cluster_name> <ip_address>.<region>.compute.internal <provider_id> Running 8m23s",
"oc get node",
"NAME STATUS ROLES AGE VERSION <ip_address_1>.<region>.compute.internal Ready worker 5h14m v1.28.5 <ip_address_2>.<region>.compute.internal Ready master 5h19m v1.28.5 <ip_address_3>.<region>.compute.internal Ready worker 7m v1.28.5",
"oc get <machine_template_kind> 1",
"NAME AGE <template_name> 77m",
"oc get <machine_template_kind> <template_name> -o yaml > <template_name>.yaml",
"oc apply -f <modified_template_name>.yaml 1",
"oc get machinesets.cluster.x-k8s.io -n openshift-cluster-api",
"NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <compute_machine_set_name_1> <cluster_name> 1 1 1 26m <compute_machine_set_name_2> <cluster_name> 1 1 1 26m",
"oc edit machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-cluster-api spec: replicas: 2 1",
"oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h",
"oc annotate machines.cluster.x-k8s.io/<machine_name_original_1> -n openshift-cluster-api cluster.x-k8s.io/delete-machine=\"true\"",
"oc scale --replicas=4 \\ 1 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api",
"oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioned 55s <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioning 55s",
"oc scale --replicas=2 \\ 1 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api",
"oc describe machines.cluster.x-k8s.io <machine_name_updated_1> -n openshift-cluster-api",
"oc get machines.cluster.x-k8s.io -n openshift-cluster-api cluster.x-k8s.io/set-name=<machine_set_name>",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: controlPlaneEndpoint: 2 host: <control_plane_endpoint_address> port: 6443 infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 3 name: <cluster_name> namespace: openshift-cluster-api",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 kind: AWSCluster 1 metadata: name: <cluster_name> 2 namespace: openshift-cluster-api spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 region: <region> 4",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 kind: AWSMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 uncompressedUserData: true iamInstanceProfile: # instanceType: m5.large ignition: storageType: UnencryptedUserData version: \"3.2\" ami: id: # subnet: filters: - name: tag:Name values: - # additionalSecurityGroups: - filters: - name: tag:Name values: - #",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 4 name: <template_name> 5",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPCluster 1 metadata: name: <cluster_name> 2 spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 network: name: <cluster_name>-network project: <project> 4 region: <region> 5",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 rootDeviceType: pd-ssd rootDeviceSize: 128 instanceType: n1-standard-4 image: projects/rhcos-cloud/global/images/rhcos-411-85-202203181601-0-gcp-x86-64 subnet: <cluster_name>-worker-subnet serviceAccounts: email: <service_account_email_address> scopes: - https://www.googleapis.com/auth/cloud-platform additionalLabels: kubernetes-io-cluster-<cluster_name>: owned additionalNetworkTags: - <cluster_name>-worker ipForwarding: Disabled",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 4 name: <template_name> 5 failureDomain: <failure_domain> 6",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackCluster 1 metadata: name: <cluster_name> 2 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> spec: controlPlaneEndpoint: <control_plane_endpoint_address> 3 disableAPIServerFloatingIP: true tags: - openshiftClusterID=<cluster_name> network: id: <api_service_network_id> 4 externalNetwork: id: <floating_network_id> 5 identityRef: cloudName: openstack name: openstack-cloud-credentials",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 flavor: <openstack_node_machine_flavor> 4 image: filter: name: <openstack_image> 5",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: \"\" spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackMachineTemplate 4 name: <template_name> 5 failureDomain: <nova_availability_zone> 6",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: VSphereCluster 1 metadata: name: <cluster_name> 2 spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 identityRef: kind: Secret name: <cluster_name> server: <vsphere_server> 4",
"oc get infrastructure cluster -o jsonpath=\"{.spec.platformSpec.vsphere.vcenters[0].server}\"",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: VSphereMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 template: <vm_template_name> 4 server: <vcenter_server_ip> 5 diskGiB: 128 cloneMode: linkedClone 6 datacenter: <vcenter_data_center_name> 7 datastore: <vcenter_datastore_name> 8 folder: <vcenter_vm_folder_path> 9 resourcePool: <vsphere_resource_pool> 10 numCPUs: 4 memoryMiB: 16384 network: devices: - dhcp4: true networkName: \"<vm_network_name>\" 11",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: VSphereMachineTemplate 4 name: <template_name> 5 failureDomain: 6 - name: <failure_domain_name> region: <region_a> zone: <zone_a> server: <vcenter_server_name> topology: datacenter: <region_a_data_center> computeCluster: \"</region_a_data_center/host/zone_a_cluster>\" resourcePool: \"</region_a_data_center/host/zone_a_cluster/Resources/resource_pool>\" datastore: \"</region_a_data_center/datastore/datastore_a>\" networks: - port-group",
"oc delete machine.machine.openshift.io <machine_name>",
"oc delete machine.cluster.x-k8s.io <machine_name>",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8",
"oc apply -f healthcheck.yml",
"oc apply -f healthcheck.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 6 status: \"False\" - type: \"Ready\" timeout: \"300s\" 7 status: \"Unknown\" maxUnhealthy: \"40%\" 8 nodeStartupTimeout: \"10m\" 9",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> remediationTemplate: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate name: metal3-remediation-template namespace: openshift-machine-api unhealthyConditions: - type: \"Ready\" timeout: \"300s\"",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate metadata: name: metal3-remediation-template namespace: openshift-machine-api spec: template: spec: strategy: type: Reboot retryLimit: 1 timeout: 5m0s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/machine_management/index |
Deploying installer-provisioned clusters on bare metal | Deploying installer-provisioned clusters on bare metal OpenShift Container Platform 4.17 Deploying installer-provisioned OpenShift Container Platform clusters on bare metal Red Hat OpenShift Documentation Team | [
"<cluster_name>.<base_domain>",
"test-cluster.example.com",
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt <user>",
"sudo systemctl start firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images",
"sudo virsh pool-start default",
"sudo virsh pool-autostart default",
"vim pull-secret.txt",
"chronyc sources",
"MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms",
"ping time.cloudflare.com",
"PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms",
"export PUB_CONN=<baremetal_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal pkill dhclient;dhclient baremetal \"",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr \"x.x.x.x/yy\" ipv4.gateway \"a.a.a.a\" ipv4.dns \"b.b.b.b\" 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal nmcli con up baremetal \"",
"export PROV_CONN=<prov_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPROV_CONN\\\" nmcli con delete \\\"USDPROV_CONN\\\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \\\"USDPROV_CONN\\\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning \"",
"nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2",
"interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false",
"cat <nmstate_configuration>.yaml | base64 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml",
"oc edit mc <machineconfig_custom_resource_name>",
"oc apply -f ./extraworker-secret.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret",
"oc project openshift-machine-api",
"oc get machinesets",
"oc scale machineset <machineset_name> --replicas=<n> 1",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"192.168.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"192.168.0.0/24 via 192.168.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"10.0.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"10.0.0.0/24 via 10.0.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"ping <remote_node_ip_address>",
"ping <control_plane_node_ip_address>",
"export VERSION=stable-4.17",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"sudo dnf install -y podman",
"sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"mkdir /home/kni/rhcos_image_cache",
"sudo semanage fcontext -a -t httpd_sys_content_t \"/home/kni/rhcos_image_cache(/.*)?\"",
"sudo restorecon -Rv /home/kni/rhcos_image_cache/",
"export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk.location')",
"export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/}",
"export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk[\"uncompressed-sha256\"]')",
"curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME}",
"ls -Z /home/kni/rhcos_image_cache",
"podman run -d --name rhcos_image_cache \\ 1 -v /home/kni/rhcos_image_cache:/var/www/html -p 8080:8080/tcp registry.access.redhat.com/ubi9/httpd-24",
"export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d\"/\" -f1)",
"export BOOTSTRAP_OS_IMAGE=\"http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}\"",
"echo \" bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}\"",
"platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"platform: baremetal: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ironic-inspector inspection failed: No disks satisfied root device hints",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"export SERVER=<ip_address> 1",
"export SystemID=<system_id> 1",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"On\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"ForceOff\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"pxe\", \"BootSourceOverrideEnabled\": \"Once\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideMode\":\"UEFI\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"cd\", \"BootSourceOverrideEnabled\": \"Once\"}}'",
"curl -u USDUSER:USDPASS -X POST -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> disableCertificateVerification: True",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>",
"noProxy: .example.com,172.22.0.0/24,10.10.0.0/24",
"platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: \"Disabled\" 1",
"machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112",
"networkConfig: nmstate: interfaces: - name: <interface_name> wait-ip: ipv4+ipv6",
"platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>",
"interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5",
"nmstatectl gc <nmstate_yaml_file>",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6",
"networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes",
"networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4",
"interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"nmstatectl gc <nmstate_yaml_file> 1",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"/dev/sda\" bootMode: UEFISecureBoot 2",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"sudo dnf -y install butane",
"variant: openshift version: 4.17.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all compute nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan",
"butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml",
"variant: openshift version: 4.17.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml",
"cd ~/clusterconfigs",
"cd manifests",
"touch cluster-network-avoid-workers-99-config.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"",
"sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"",
"cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: raid: hardwareRAIDVolumes: - level: \"0\" 1 name: \"sda\" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0",
"spec: raid: hardwareRAIDVolumes: []",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs",
"cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift",
"sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent",
"sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"sudo yum -y install python3 podman httpd httpd-tools jq",
"sudo mkdir -p /opt/registry/{auth,certs,data}",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-baremetal-install",
"echo \"additionalTrustBundle: |\" >> install-config.yaml",
"sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml",
"echo \"imageContentSources:\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-release\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-v4.0-art-dev\" >> install-config.yaml",
"ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json .openshift_install.log .openshift_install_state.json",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster",
"tail -f /path/to/install-dir/.openshift_install.log",
"curl -s -o /dev/null -I -w \"%{http_code}\\n\" http://webserver.example.com:8080/rhcos-44.81.202004250133-0-qemu.<architecture>.qcow2.gz?sha256=7d884b46ee54fe87bbc3893bf2aa99af3b2d31f2e19ab5529c60636fbd0f1ce7",
"sudo virsh list",
"Id Name State -------------------------------------------- 12 openshift-xf6fq-bootstrap running",
"systemctl status libvirtd",
"β libvirtd.service - Virtualization daemon Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2020-03-03 21:21:07 UTC; 3 weeks 5 days ago Docs: man:libvirtd(8) https://libvirt.org Main PID: 9850 (libvirtd) Tasks: 20 (limit: 32768) Memory: 74.8M CGroup: /system.slice/libvirtd.service ββ 9850 /usr/sbin/libvirtd",
"sudo virsh console example.com",
"Connected to domain example.com Escape character is ^] Red Hat Enterprise Linux CoreOS 43.81.202001142154.0 (Ootpa) 4.3 SSH host key: SHA256:BRWJktXZgQQRY5zjuAV0IKZ4WM7i4TiUyMVanqu9Pqg (ED25519) SSH host key: SHA256:7+iKGA7VtG5szmk2jB5gl/5EZ+SNcJ3a2g23o0lnIio (ECDSA) SSH host key: SHA256:DH5VWhvhvagOTaLsYiVNse9ca+ZSW/30OOMed8rIGOc (RSA) ens3: fd35:919d:4042:2:c7ed:9a9f:a9ec:7 ens4: 172.22.0.2 fe80::1d05:e52e:be5d:263f localhost login:",
"ssh [email protected]",
"ssh [email protected]",
"[core@localhost ~]USD sudo podman logs -f <container_name>",
"ipmitool -I lanplus -U root -P <password> -H <out_of_band_ip> power off",
"bootstrapOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-qemu.<architecture>.qcow2.gz?sha256=9d999f55ff1d44f7ed7c106508e5deecd04dc3c06095d34d36bf1cd127837e0c clusterOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-openstack.<architecture>.qcow2.gz?sha256=a1bda656fa0892f7b936fdc6b6a6086bddaed5dafacedcd7a1e811abb78fe3b0",
"ssh [email protected]",
"[core@localhost ~]USD sudo podman logs -f coreos-downloader",
"[core@localhost ~]USD journalctl -xe",
"[core@localhost ~]USD journalctl -b -f -u bootkube.service",
"[core@localhost ~]USD sudo podman ps",
"[core@localhost ~]USD sudo podman logs ironic",
"sudo crictl logs USD(sudo crictl ps --pod=USD(sudo crictl pods --name=etcd-member --quiet) --quiet)",
"sudo crictl pods --name=etcd-member",
"hostname",
"sudo hostnamectl set-hostname <hostname>",
"dig api.<cluster_name>.example.com",
"; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el8 <<>> api.<cluster_name>.example.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37551 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: 866929d2f8e8563582af23f05ec44203d313e50948d43f60 (good) ;; QUESTION SECTION: ;api.<cluster_name>.example.com. IN A ;; ANSWER SECTION: api.<cluster_name>.example.com. 10800 IN A 10.19.13.86 ;; AUTHORITY SECTION: <cluster_name>.example.com. 10800 IN NS <cluster_name>.example.com. ;; ADDITIONAL SECTION: <cluster_name>.example.com. 10800 IN A 10.19.14.247 ;; Query time: 0 msec ;; SERVER: 10.19.14.247#53(10.19.14.247) ;; WHEN: Tue May 19 20:30:59 UTC 2020 ;; MSG SIZE rcvd: 140",
"oc --kubeconfig=USD{INSTALL_DIR}/auth/kubeconfig get clusterversion -o yaml",
"apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: creationTimestamp: 2019-02-27T22:24:21Z generation: 1 name: version resourceVersion: \"19927\" selfLink: /apis/config.openshift.io/v1/clusterversions/version uid: 6e0f4cf8-3ade-11e9-9034-0a923b47ded4 spec: channel: stable-4.1 clusterID: 5ec312f9-f729-429d-a454-61d4906896ca status: availableUpdates: null conditions: - lastTransitionTime: 2019-02-27T22:50:30Z message: Done applying 4.1.1 status: \"True\" type: Available - lastTransitionTime: 2019-02-27T22:50:30Z status: \"False\" type: Failing - lastTransitionTime: 2019-02-27T22:50:30Z message: Cluster version is 4.1.1 status: \"False\" type: Progressing - lastTransitionTime: 2019-02-27T22:24:31Z message: 'Unable to retrieve available updates: unknown version 4.1.1 reason: RemoteFailed status: \"False\" type: RetrievedUpdates desired: image: registry.svc.ci.openshift.org/openshift/origin-release@sha256:91e6f754975963e7db1a9958075eb609ad226968623939d262d1cf45e9dbc39a version: 4.1.1 history: - completionTime: 2019-02-27T22:50:30Z image: registry.svc.ci.openshift.org/openshift/origin-release@sha256:91e6f754975963e7db1a9958075eb609ad226968623939d262d1cf45e9dbc39a startedTime: 2019-02-27T22:24:31Z state: Completed version: 4.1.1 observedGeneration: 1 versionHash: Wa7as_ik1qE=",
"oc --kubeconfig=USD{INSTALL_DIR}/auth/kubeconfig get clusterversion version -o=jsonpath='{range .status.conditions[*]}{.type}{\" \"}{.status}{\" \"}{.message}{\"\\n\"}{end}'",
"Available True Done applying 4.1.1 Failing False Progressing False Cluster version is 4.0.0-0.alpha-2019-02-26-194020 RetrievedUpdates False Unable to retrieve available updates: unknown version 4.1.1",
"oc --kubeconfig=USD{INSTALL_DIR}/auth/kubeconfig get clusteroperator",
"NAME VERSION AVAILABLE PROGRESSING FAILING SINCE cluster-baremetal-operator True False False 17m cluster-autoscaler True False False 17m cluster-storage-operator True False False 10m console True False False 7m21s dns True False False 31m image-registry True False False 9m58s ingress True False False 10m kube-apiserver True False False 28m kube-controller-manager True False False 21m kube-scheduler True False False 25m machine-api True False False 17m machine-config True False False 17m marketplace-operator True False False 10m monitoring True False False 8m23s network True False False 13m node-tuning True False False 11m openshift-apiserver True False False 15m openshift-authentication True False False 20m openshift-cloud-credential-operator True False False 18m openshift-controller-manager True False False 10m openshift-samples True False False 8m42s operator-lifecycle-manager True False False 17m service-ca True False False 30m",
"oc --kubeconfig=USD{INSTALL_DIR}/auth/kubeconfig get clusteroperator <operator> -oyaml 1",
"apiVersion: config.openshift.io/v1 kind: ClusterOperator metadata: creationTimestamp: 2019-02-27T22:47:04Z generation: 1 name: monitoring resourceVersion: \"24677\" selfLink: /apis/config.openshift.io/v1/clusteroperators/monitoring uid: 9a6a5ef9-3ae1-11e9-bad4-0a97b6ba9358 spec: {} status: conditions: - lastTransitionTime: 2019-02-27T22:49:10Z message: Successfully rolled out the stack. status: \"True\" type: Available - lastTransitionTime: 2019-02-27T22:49:10Z status: \"False\" type: Progressing - lastTransitionTime: 2019-02-27T22:49:10Z status: \"False\" type: Failing extension: null relatedObjects: null version: \"\"",
"oc --kubeconfig=USD{INSTALL_DIR}/auth/kubeconfig get clusteroperator <operator> -o=jsonpath='{range .status.conditions[*]}{.type}{\" \"}{.status}{\" \"}{.message}{\"\\n\"}{end}'",
"Available True Successfully rolled out the stack Progressing False Failing False",
"--kubeconfig=USD{INSTALL_DIR}/auth/kubeconfig get clusteroperator kube-apiserver -o=jsonpath='{.status.relatedObjects}'",
"[map[resource:kubeapiservers group:operator.openshift.io name:cluster] map[group: name:openshift-config resource:namespaces] map[group: name:openshift-config-managed resource:namespaces] map[group: name:openshift-kube-apiserver-operator resource:namespaces] map[group: name:openshift-kube-apiserver resource:namespaces]]",
"oc --kubeconfig=USD{INSTALL_DIR}/auth/kubeconfig get clusteroperator console -oyaml",
"apiVersion: config.openshift.io/v1 kind: ClusterOperator metadata: creationTimestamp: 2019-02-27T22:46:57Z generation: 1 name: console resourceVersion: \"19682\" selfLink: /apis/config.openshift.io/v1/clusteroperators/console uid: 960364aa-3ae1-11e9-bad4-0a97b6ba9358 spec: {} status: conditions: - lastTransitionTime: 2019-02-27T22:46:58Z status: \"False\" type: Failing - lastTransitionTime: 2019-02-27T22:50:12Z status: \"False\" type: Progressing - lastTransitionTime: 2019-02-27T22:50:12Z status: \"True\" type: Available - lastTransitionTime: 2019-02-27T22:46:57Z status: \"True\" type: Upgradeable extension: null relatedObjects: - group: operator.openshift.io name: cluster resource: consoles - group: config.openshift.io name: cluster resource: consoles - group: oauth.openshift.io name: console resource: oauthclients - group: \"\" name: openshift-console-operator resource: namespaces - group: \"\" name: openshift-console resource: namespaces versions: null",
"oc --kubeconfig=USD{INSTALL_DIR}/auth/kubeconfig get route console -n openshift-console -o=jsonpath='{.spec.host}' console-openshift-console.apps.adahiya-1.devcluster.openshift.com",
"oc --kubeconfig=USD{INSTALL_DIR}/auth/kubeconfig get configmaps default-ingress-cert -n openshift-config-managed -o=jsonpath='{.data.ca-bundle\\.crt}'",
"-----BEGIN CERTIFICATE----- MIIC/TCCAeWgAwIBAgIBATANBgkqhkiG9w0BAQsFADAuMSwwKgYDVQQDDCNjbHVz dGVyLWluZ3Jlc3Mtb3BlcmF0b3JAMTU1MTMwNzU4OTAeFw0xOTAyMjcyMjQ2Mjha Fw0yMTAyMjYyMjQ2MjlaMC4xLDAqBgNVBAMMI2NsdXN0ZXItaW5ncmVzcy1vcGVy YXRvckAxNTUxMzA3NTg5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA uCA4fQ+2YXoXSUL4h/mcvJfrgpBfKBW5hfB8NcgXeCYiQPnCKblH1sEQnI3VC5Pk 2OfNCF3PUlfm4i8CHC95a7nCkRjmJNg1gVrWCvS/ohLgnO0BvszSiRLxIpuo3C4S EVqqvxValHcbdAXWgZLQoYZXV7RMz8yZjl5CfhDaaItyBFj3GtIJkXgUwp/5sUfI LDXW8MM6AXfuG+kweLdLCMm3g8WLLfLBLvVBKB+4IhIH7ll0buOz04RKhnYN+Ebw tcvFi55vwuUCWMnGhWHGEQ8sWm/wLnNlOwsUz7S1/sW8nj87GFHzgkaVM9EOnoNI gKhMBK9ItNzjrP6dgiKBCQIDAQABoyYwJDAOBgNVHQ8BAf8EBAMCAqQwEgYDVR0T AQH/BAgwBgEB/wIBADANBgkqhkiG9w0BAQsFAAOCAQEAq+vi0sFKudaZ9aUQMMha CeWx9CZvZBblnAWT/61UdpZKpFi4eJ2d33lGcfKwHOi2NP/iSKQBebfG0iNLVVPz vwLbSG1i9R9GLdAbnHpPT9UG6fLaDIoKpnKiBfGENfxeiq5vTln2bAgivxrVlyiq +MdDXFAWb6V4u2xh6RChI7akNsS3oU9PZ9YOs5e8vJp2YAEphht05X0swA+X8V8T C278FFifpo0h3Q0Dbv8Rfn4UpBEtN4KkLeS+JeT+0o2XOsFZp7Uhr9yFIodRsnNo H/Uwmab28ocNrGNiEVaVH6eTTQeeZuOdoQzUbClElpVmkrNGY0M42K0PvOQ/e7+y AQ== -----END CERTIFICATE-----",
"bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC",
"bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC",
"oc --kubeconfig=USD{INSTALL_DIR}/auth/kubeconfig --namespace=openshift-machine-api get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE cluster-autoscaler-operator 1/1 1 1 86m cluster-baremetal-operator 1/1 1 1 86m machine-api-controllers 1/1 1 1 85m machine-api-operator 1/1 1 1 86m",
"oc --kubeconfig=USD{INSTALL_DIR}/auth/kubeconfig --namespace=openshift-machine-api logs deployments/machine-api-controllers --container=machine-controller",
"oc get network -o yaml cluster",
"openshift-install create manifests",
"oc get po -n openshift-network-operator",
"ProvisioningError 51s metal3-baremetal-controller Image provisioning failed: Deploy step deploy.deploy failed with BadRequestError: HTTP POST https://<bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia returned code 400. Base.1.8.GeneralError: A general error has occurred. See ExtendedInfo for more information Extended information: [ { \"Message\": \"Unable to mount remote share https://<ironic_address>/redfish/boot-<uuid>.iso.\", \"MessageArgs\": [ \"https://<ironic_address>/redfish/boot-<uuid>.iso\" ], \"[email protected]\": 1, \"MessageId\": \"IDRAC.2.5.RAC0720\", \"RelatedProperties\": [ \"#/Image\" ], \"[email protected]\": 1, \"Resolution\": \"Retry the operation.\", \"Severity\": \"Informational\" } ].",
"sudo nano /etc/dnsmasq.conf",
"address=/api-int.<cluster_name>.<base_domain>/<IP_address> address=/api-int.mycluster.example.com/192.168.1.10 address=/api-int.mycluster.example.com/2001:0db8:85a3:0000:0000:8a2e:0370:7334",
"sudo nano /etc/dnsmasq.conf",
"ptr-record=<IP_address>.in-addr.arpa,api-int.<cluster_name>.<base_domain> ptr-record=10.1.168.192.in-addr.arpa,api-int.mycluster.example.com",
"sudo systemctl restart dnsmasq",
"ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json .openshift_install.log .openshift_install_state.json",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"/usr/local/bin/oc adm release mirror -a pull-secret-update.json --from=USDUPSTREAM_REPO --to-release-image=USDLOCAL_REG/USDLOCAL_REPO:USD{VERSION} --to=USDLOCAL_REG/USDLOCAL_REPO",
"UPSTREAM_REPO=USD{RELEASE_IMAGE} LOCAL_REG=<registry_FQDN>:<registry_port> LOCAL_REPO='ocp4/openshift4'",
"curl -k -u <user>:<password> https://registry.example.com:<registry_port>/v2/_catalog {\"repositories\":[\"<Repo_Name>\"]}",
"`runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network`",
"oc get all -n openshift-network-operator",
"NAME READY STATUS RESTARTS AGE pod/network-operator-69dfd7b577-bg89v 0/1 ContainerCreating 0 149m",
"kubectl get network.config.openshift.io cluster -oyaml",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNetwork: - 172.30.0.0/16 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networkType: OVNKubernetes",
"openshift-install create manifests",
"kubectl -n openshift-network-operator get pods",
"kubectl -n openshift-network-operator logs -l \"name=network-operator\"",
"No disk found with matching rootDeviceHints",
"udevadm info /dev/sda",
"This is a dnsmasq dhcp reservation, 'id:00:03:00:01' is the client id and '18:db:f2:8c:d5:9f' is the MAC Address for the NIC id:00:03:00:01:18:db:f2:8c:d5:9f,openshift-master-1,[2620:52:0:1302::6]",
"Failed Units: 2 NetworkManager-wait-online.service nodeip-configuration.service",
"[core@master-X ~]USD hostname",
"[core@master-X ~]USD sudo nmcli con up \"<bare_metal_nic>\"",
"[core@master-X ~]USD hostname",
"[core@master-X ~]USD sudo systemctl restart NetworkManager",
"[core@master-X ~]USD sudo systemctl restart nodeip-configuration.service",
"[core@master-X ~]USD sudo systemctl daemon-reload",
"[core@master-X ~]USD sudo systemctl restart kubelet.service",
"[core@master-X ~]USD sudo journalctl -fu kubelet.service",
"oc get csr",
"oc get csr <pending_csr> -o jsonpath='{.spec.request}' | base64 --decode | openssl req -noout -text",
"oc delete csr <wrong_csr>",
"oc get route oauth-openshift",
"oc get svc oauth-openshift",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE oauth-openshift ClusterIP 172.30.19.162 <none> 443/TCP 59m",
"[core@master0 ~]USD curl -k https://172.30.19.162",
"{ \"kind\": \"Status\", \"apiVersion\": \"v1\", \"metadata\": { }, \"status\": \"Failure\", \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/\\\"\", \"reason\": \"Forbidden\", \"details\": { }, \"code\": 403",
"oc logs deployment/authentication-operator -n openshift-authentication-operator",
"Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"openshift-authentication-operator\", Name:\"authentication-operator\", UID:\"225c5bd5-b368-439b-9155-5fd3c0459d98\", APIVersion:\"apps/v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from \"IngressStateEndpointsDegraded: All 2 endpoints for oauth-server are reporting\"",
"Failed Units: 1 machine-config-daemon-firstboot.service",
"[core@worker-X ~]USD sudo systemctl restart machine-config-daemon-firstboot.service",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0.cloud.example.com Ready master 145m v1.30.3 master-1.cloud.example.com Ready master 135m v1.30.3 master-2.cloud.example.com Ready master 145m v1.30.3 worker-2.cloud.example.com Ready worker 100m v1.30.3",
"oc get bmh -n openshift-machine-api",
"master-1 error registering master-1 ipmi://<out_of_band_ip>",
"sudo timedatectl",
"Local time: Tue 2020-03-10 18:20:02 UTC Universal time: Tue 2020-03-10 18:20:02 UTC RTC time: Tue 2020-03-10 18:36:53 Time zone: UTC (UTC, +0000) System clock synchronized: no NTP service: active RTC in local TZ: no",
"variant: openshift version: 4.17.0 metadata: name: 99-master-chrony labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | server <NTP_server> iburst 1 stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-master-chrony.bu -o 99-master-chrony.yaml",
"oc apply -f 99-master-chrony.yaml",
"sudo timedatectl",
"Local time: Tue 2020-03-10 19:10:02 UTC Universal time: Tue 2020-03-10 19:10:02 UTC RTC time: Tue 2020-03-10 19:36:53 Time zone: UTC (UTC, +0000) System clock synchronized: yes NTP service: active RTC in local TZ: no",
"cp chrony-masters.yaml ~/clusterconfigs/openshift/99_masters-chrony-configuration.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0.example.com Ready master,worker 4h v1.30.3 master-1.example.com Ready master,worker 4h v1.30.3 master-2.example.com Ready master,worker 4h v1.30.3",
"oc get pods --all-namespaces | grep -iv running | grep -iv complete",
"sudo dnf -y install butane",
"variant: openshift version: 4.17.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all compute nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan",
"butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml",
"variant: openshift version: 4.17.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml",
"oc apply -f 99-master-chrony-conf-override.yaml",
"machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created",
"oc apply -f 99-worker-chrony-conf-override.yaml",
"machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created",
"oc describe machineconfigpool",
"oc get provisioning -o yaml > enable-provisioning-nw.yaml",
"vim ~/enable-provisioning-nw.yaml",
"apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: 1 provisioningIP: 2 provisioningNetworkCIDR: 3 provisioningDHCPRange: 4 provisioningInterface: 5 watchAllNameSpaces: 6",
"oc apply -f enable-provisioning-nw.yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: worker-0-br-ex 1 spec: nodeSelector: kubernetes.io/hostname: worker-0 desiredState: interfaces: - name: enp2s0 2 type: ethernet 3 state: up 4 ipv4: enabled: false 5 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 6 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true address: - ip: \"169.254.169.2\" prefix-length: 29 ipv6: enabled: false dhcp: false address: - ip: \"fd69::2\" prefix-length: 125",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"platform: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"bmc: address: credentialsName: disableCertificateVerification:",
"image: url: checksum: checksumType: format:",
"raid: hardwareRAIDVolumes: softwareRAIDVolumes:",
"spec: raid: hardwareRAIDVolume: []",
"rootDeviceHints: deviceName: hctl: model: vendor: serialNumber: minSizeGigabytes: wwn: wwnWithExtension: wwnVendorExtension: rotational:",
"hardware: cpu arch: model: clockMegahertz: flags: count:",
"hardware: firmware:",
"hardware: nics: - ip: name: mac: speedGbps: vlans: vlanId: pxe:",
"hardware: ramMebibytes:",
"hardware: storage: - name: rotational: sizeBytes: serialNumber:",
"hardware: systemVendor: manufacturer: productName: serialNumber:",
"provisioning: state: id: image: raid: firmware: rootDeviceHints:",
"oc get bmh -n openshift-machine-api -o yaml",
"oc get bmh -n openshift-machine-api",
"oc get bmh <host_name> -n openshift-machine-api -o yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: creationTimestamp: \"2022-06-16T10:48:33Z\" finalizers: - baremetalhost.metal3.io generation: 2 name: openshift-worker-0 namespace: openshift-machine-api resourceVersion: \"30099\" uid: 1513ae9b-e092-409d-be1b-ad08edeb1271 spec: automatedCleaningMode: metadata bmc: address: redfish://10.46.61.19:443/redfish/v1/Systems/1 credentialsName: openshift-worker-0-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:c7:f7:b0 bootMode: UEFI consumerRef: apiVersion: machine.openshift.io/v1beta1 kind: Machine name: ocp-edge-958fk-worker-0-nrfcg namespace: openshift-machine-api customDeploy: method: install_coreos online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: worker-user-data-managed namespace: openshift-machine-api status: errorCount: 0 errorMessage: \"\" goodCredentials: credentials: name: openshift-worker-0-bmc-secret namespace: openshift-machine-api credentialsVersion: \"16120\" hardware: cpu: arch: x86_64 clockMegahertz: 2300 count: 64 flags: - 3dnowprefetch - abm - acpi - adx - aes model: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz firmware: bios: date: 10/26/2020 vendor: HPE version: U30 hostname: openshift-worker-0 nics: - mac: 48:df:37:c7:f7:b3 model: 0x8086 0x1572 name: ens1f3 ramMebibytes: 262144 storage: - hctl: \"0:0:0:0\" model: VK000960GWTTB name: /dev/disk/by-id/scsi-<serial_number> sizeBytes: 960197124096 type: SSD vendor: ATA systemVendor: manufacturer: HPE productName: ProLiant DL380 Gen10 (868703-B21) serialNumber: CZ200606M3 lastUpdated: \"2022-06-16T11:41:42Z\" operationalStatus: OK poweredOn: true provisioning: ID: 217baa14-cfcf-4196-b764-744e184a3413 bootMode: UEFI customDeploy: method: install_coreos image: url: \"\" raid: hardwareRAIDVolumes: null softwareRAIDVolumes: [] rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> state: provisioned triedCredentials: credentials: name: openshift-worker-0-bmc-secret namespace: openshift-machine-api credentialsVersion: \"16120\"",
"oc get bmh -n openshift-machine-api",
"oc annotate baremetalhost <node_name> -n openshift-machine-api 'baremetalhost.metal3.io/detached=true' 1",
"oc edit bmh <node_name> -n openshift-machine-api",
"oc annotate baremetalhost <node_name> -n openshift-machine-api 'baremetalhost.metal3.io/detached'-",
"apiVersion: metal3.io/v1alpha1 kind: DataImage metadata: name: <node_name> 1 spec: url: \"http://dataimage.example.com/non-bootable.iso\" 2",
"vim <node_name>-dataimage.yaml",
"oc apply -f <node_name>-dataimage.yaml -n <node_namespace> 1",
"oc get dataimage <node_name> -n openshift-machine-api -o yaml",
"apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: DataImage metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"metal3.io/v1alpha1\",\"kind\":\"DataImage\",\"metadata\":{\"annotations\":{},\"name\":\"bmh-node-1\",\"namespace\":\"openshift-machine-api\"},\"spec\":{\"url\":\"http://dataimage.example.com/non-bootable.iso\"}} creationTimestamp: \"2024-06-10T12:00:00Z\" finalizers: - dataimage.metal3.io generation: 1 name: bmh-node-1 namespace: openshift-machine-api ownerReferences: - apiVersion: metal3.io/v1alpha1 blockOwnerDeletion: true controller: true kind: BareMetalHost name: bmh-node-1 uid: 046cdf8e-0e97-485a-8866-e62d20e0f0b3 resourceVersion: \"21695581\" uid: c5718f50-44b6-4a22-a6b7-71197e4b7b69 spec: url: http://dataimage.example.com/non-bootable.iso status: attachedImage: url: http://dataimage.example.com/non-bootable.iso error: count: 0 message: \"\" lastReconciled: \"2024-06-10T12:05:00Z\"",
"spec: settings: ProcTurboMode: Disabled 1",
"status: conditions: - lastTransitionTime: message: observedGeneration: reason: status: type:",
"status: schema: name: namespace: lastUpdated:",
"status: settings:",
"oc get hfs -n openshift-machine-api -o yaml",
"oc get hfs -n openshift-machine-api",
"oc get hfs <host_name> -n openshift-machine-api -o yaml",
"oc get hfs -n openshift-machine-api",
"oc edit hfs <host_name> -n openshift-machine-api",
"spec: settings: name: value 1",
"oc get bmh <host_name> -n openshift-machine name",
"oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n openshift-machine-api",
"oc get nodes",
"oc get machinesets -n openshift-machine-api",
"oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1>",
"oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n>",
"oc get hfs -n openshift-machine-api",
"oc describe hfs <host_name> -n openshift-machine-api",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ValidationFailed 2m49s metal3-hostfirmwaresettings-controller Invalid BIOS setting: Setting ProcTurboMode is invalid, unknown enumeration value - Foo",
"<BIOS_setting_name> attribute_type: allowable_values: lower_bound: upper_bound: min_length: max_length: read_only: unique:",
"oc get firmwareschema -n openshift-machine-api",
"oc get firmwareschema <instance_name> -n openshift-machine-api -o yaml",
"updates: component: url:",
"components: component: initialVersion: currentVersion: lastVersionFlashed: updatedAt:",
"updates: component: url:",
"oc get hostfirmwarecomponents -n openshift-machine-api -o yaml",
"oc get hostfirmwarecomponents -n openshift-machine-api",
"oc get hostfirmwarecomponents <host_name> -n openshift-machine-api -o yaml",
"--- apiVersion: metal3.io/v1alpha1 kind: HostFirmwareComponents metadata: creationTimestamp: 2024-04-25T20:32:06Z\" generation: 1 name: ostest-master-2 namespace: openshift-machine-api ownerReferences: - apiVersion: metal3.io/v1alpha1 blockOwnerDeletion: true controller: true kind: BareMetalHost name: ostest-master-2 uid: 16022566-7850-4dc8-9e7d-f216211d4195 resourceVersion: \"2437\" uid: 2038d63f-afc0-4413-8ffe-2f8e098d1f6c spec: updates: [] status: components: - component: bios currentVersion: 1.0.0 initialVersion: 1.0.0 - component: bmc currentVersion: \"1.00\" initialVersion: \"1.00\" conditions: - lastTransitionTime: \"2024-04-25T20:32:06Z\" message: \"\" observedGeneration: 1 reason: OK status: \"True\" type: Valid - lastTransitionTime: \"2024-04-25T20:32:06Z\" message: \"\" observedGeneration: 1 reason: OK status: \"False\" type: ChangeDetected lastUpdated: \"2024-04-25T20:32:06Z\" updates: []",
"oc get hostfirmwarecomponents -n openshift-machine-api -o yaml",
"oc edit <host_name> hostfirmwarecomponents -n openshift-machine-api 1",
"--- apiVersion: metal3.io/v1alpha1 kind: HostFirmwareComponents metadata: creationTimestamp: 2024-04-25T20:32:06Z\" generation: 1 name: ostest-master-2 namespace: openshift-machine-api ownerReferences: - apiVersion: metal3.io/v1alpha1 blockOwnerDeletion: true controller: true kind: BareMetalHost name: ostest-master-2 uid: 16022566-7850-4dc8-9e7d-f216211d4195 resourceVersion: \"2437\" uid: 2038d63f-afc0-4413-8ffe-2f8e098d1f6c spec: updates: - name: bios 1 url: https://myurl.with.firmware.for.bios 2 - name: bmc 3 url: https://myurl.with.firmware.for.bmc 4 status: components: - component: bios currentVersion: 1.0.0 initialVersion: 1.0.0 - component: bmc currentVersion: \"1.00\" initialVersion: \"1.00\" conditions: - lastTransitionTime: \"2024-04-25T20:32:06Z\" message: \"\" observedGeneration: 1 reason: OK status: \"True\" type: Valid - lastTransitionTime: \"2024-04-25T20:32:06Z\" message: \"\" observedGeneration: 1 reason: OK status: \"False\" type: ChangeDetected lastUpdated: \"2024-04-25T20:32:06Z\"",
"oc get bmh <host_name> -n openshift-machine name 1",
"oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n openshift-machine-api 1",
"oc get nodes",
"oc get machinesets -n openshift-machine-api",
"oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1> 1",
"oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n> 1",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux-USDVERSION.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"echo -ne \"root\" | base64",
"echo -ne \"password\" | base64",
"vim bmh.yaml",
"--- apiVersion: v1 1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 2 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 3 interfaces: 4 - name: <nic1_name> 5 type: ethernet state: up ipv4: address: - ip: <ip_address> 6 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 7 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 8 next-hop-interface: <next_hop_nic1_name> 9 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 10 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 11 password: <base64_of_pwd> 12 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 13 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 14 bmc: address: <protocol>://<bmc_url> 15 credentialsName: openshift-worker-<num>-bmc-secret 16 disableCertificateVerification: True 17 username: <bmc_username> 18 password: <bmc_password> 19 rootDeviceHints: deviceName: <root_device_hint> 20 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 21",
"--- interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 4 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 5 bmc: address: <protocol>://<bmc_url> 6 credentialsName: openshift-worker-<num>-bmc-secret 7 disableCertificateVerification: True 8 username: <bmc_username> 9 password: <bmc_password> 10 rootDeviceHints: deviceName: <root_device_hint> 11 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 12",
"oc -n openshift-machine-api create -f bmh.yaml",
"secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> available true",
"oc get clusteroperator baremetal",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.17 True False False 3d15h",
"oc delete bmh -n openshift-machine-api <host_name> oc delete machine -n openshift-machine-api <machine_name>",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: control-plane-<num>-bmc-secret 1 namespace: openshift-machine-api data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: control-plane-<num> 4 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: <protocol>://<bmc_ip> 5 credentialsName: control-plane-<num>-bmc-secret 6 bootMACAddress: <NIC1_mac_address> 7 bootMode: UEFI externallyProvisioned: false online: true EOF",
"oc get bmh -n openshift-machine-api",
"NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com available control-plane-1 true 1h10m control-plane-2.example.com externally provisioned control-plane-2 true 4h53m control-plane-3.example.com externally provisioned control-plane-3 true 4h53m compute-1.example.com provisioned compute-1-ktmmx true 4h53m compute-1.example.com provisioned compute-2-l2zmb true 4h53m",
"cat <<EOF | oc apply -f - apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: metal3.io/BareMetalHost: openshift-machine-api/control-plane-<num> 1 labels: machine.openshift.io/cluster-api-cluster: control-plane-<num> 2 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: control-plane-<num> 3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed EOF",
"oc get bmh -A",
"NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com provisioned control-plane-1 true 2h53m control-plane-2.example.com externally provisioned control-plane-2 true 5h53m control-plane-3.example.com externally provisioned control-plane-3 true 5h53m compute-1.example.com provisioned compute-1-ktmmx true 5h53m compute-2.example.com provisioned compute-2-l2zmb true 5h53m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com available master 4m2s v1.30.3 control-plane-2.example.com available master 141m v1.30.3 control-plane-3.example.com available master 141m v1.30.3 compute-1.example.com available worker 87m v1.30.3 compute-2.example.com available worker 87m v1.30.3",
"edit provisioning",
"apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: creationTimestamp: \"2021-08-05T18:51:50Z\" finalizers: - provisioning.metal3.io generation: 8 name: provisioning-configuration resourceVersion: \"551591\" uid: f76e956f-24c6-4361-aa5b-feaf72c5b526 spec: provisioningDHCPRange: 172.22.0.10,172.22.0.254 provisioningIP: 172.22.0.3 provisioningInterface: enp1s0 provisioningNetwork: Managed provisioningNetworkCIDR: 172.22.0.0/24 virtualMediaViaExternalNetwork: true 1 status: generations: - group: apps hash: \"\" lastGeneration: 7 name: metal3 namespace: openshift-machine-api resource: deployments - group: apps hash: \"\" lastGeneration: 1 name: metal3-image-cache namespace: openshift-machine-api resource: daemonsets observedGeneration: 8 readyReplicas: 0",
"edit machineset",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: \"2021-08-05T18:51:52Z\" generation: 11 labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: ostest-hwmdt-worker-0 namespace: openshift-machine-api resourceVersion: \"551513\" uid: fad1c6e0-b9da-4d4a-8d73-286f78788931 spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 1 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 2 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data status: availableReplicas: 2 fullyLabeledReplicas: 2 observedGeneration: 11 readyReplicas: 2 replicas: 2",
"oc get bmh -n openshift-machine-api",
"NAME STATUS PROVISIONING STATUS CONSUMER openshift-master-0 OK externally provisioned openshift-zpwpq-master-0 openshift-master-1 OK externally provisioned openshift-zpwpq-master-1 openshift-master-2 OK externally provisioned openshift-zpwpq-master-2 openshift-worker-0 OK provisioned openshift-zpwpq-worker-0-lv84n openshift-worker-1 OK provisioned openshift-zpwpq-worker-0-zd8lm openshift-worker-2 error registering",
"oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml",
"status: errorCount: 12 errorMessage: MAC address b4:96:91:1d:7c:20 conflicts with existing node openshift-worker-1 errorType: registration error",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE ONLINE ERROR AGE openshift-worker available true 34h",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.30.3 openshift-master-2.openshift.example.com Ready master 30h v1.30.3 openshift-master-3.openshift.example.com Ready master 30h v1.30.3 openshift-worker-0.openshift.example.com Ready worker 30h v1.30.3 openshift-worker-1.openshift.example.com Ready worker 30h v1.30.3",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE openshift-worker-0.example.com 1 1 1 1 55m openshift-worker-1.example.com 1 1 1 1 55m",
"oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioning openshift-worker-<num>-65tjz true",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned openshift-worker-<num>-65tjz true",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.30.3 openshift-master-2.openshift.example.com Ready master 30h v1.30.3 openshift-master-3.openshift.example.com Ready master 30h v1.30.3 openshift-worker-0.openshift.example.com Ready worker 30h v1.30.3 openshift-worker-1.openshift.example.com Ready worker 30h v1.30.3 openshift-worker-<num>.openshift.example.com Ready worker 3m27s v1.30.3",
"ssh openshift-worker-<num>",
"[kni@openshift-worker-<num>]USD journalctl -fu kubelet"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/deploying_installer-provisioned_clusters_on_bare_metal/index |
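The scaling and verification commands listed above can be strung together when adding a compute node. The following is an informal sketch rather than part of the referenced procedure; the machine set name, replica count, and CSR name are placeholders:
# Scale the worker machine set up by one replica
oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n+1>
# Watch the BareMetalHost transition from provisioning to provisioned
oc get bmh -n openshift-machine-api -w
# Approve any pending kubelet CSRs so the new node can join
oc get csr
oc adm certificate approve <csr_name>
# Confirm the node reports Ready
oc get nodes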
A.2. Troubleshooting sudo with SSSD and sudo Debugging Logs | A.2. Troubleshooting sudo with SSSD and sudo Debugging Logs A.2.1. SSSD and sudo Debug Logging The debug logging feature enables you to log additional information about SSSD and sudo. The sudo Debug Log File To enable sudo debugging: Add the following lines to /etc/sudo.conf : Run the sudo command as the user you want to debug. The /var/log/sudo_debug.log file is created automatically and provides detailed information to answer questions like: What information is available about the user and the environment when running the sudo command? What data sources are used to fetch sudo rules? SSSD plug-in starts with this line: How many rules did SSSD return? Does a rule match or not? The SSSD Debug Log Files To enable SSSD debugging: Add the debug_level option to the [sudo] and [domain/ domain_name ] sections of your /etc/sssd/sssd.conf file: Restart SSSD: Run the sudo command to write the debug information to the log files. The following log files are created: The domain log file: /var/log/sssd/sssd_ domain_name .log This log file helps you to answer questions like: How many rules did SSSD return? What sudo rules did SSSD download from the server? Are the matching rules stored in the cache? What filter was used to download the rules from the server? Use this filter to look up the rules in the IdM database: The sudo responder log file: /var/log/sssd/sssd_sudo.log This log file helps you to answer questions like: How many rules did SSSD return? What filter was applied for searching the cache of SSSD? How do I look up the rules returned from the SSSD cache? Use the following filter to look up the rules: Note The ldbsearch utility is included in the ldb-tools package. | [
"Debug sudo /var/log/sudo_debug.log all@debug Debug sudoers.so /var/log/sudo_debug.log all@debug",
"sudo[22259] settings: debug_flags=all@debug sudo[22259] settings: run_shell=true sudo[22259] settings: progname=sudo sudo[22259] settings: network_addrs=192.0.2.1/255.255.255.0 fe80::250:56ff:feb9:7d6/ffff:ffff:ffff:ffff:: sudo[22259] user_info: user=user_name sudo[22259] user_info: pid=22259 sudo[22259] user_info: ppid=22172 sudo[22259] user_info: pgid=22259 sudo[22259] user_info: tcpgid=22259 sudo[22259] user_info: sid=22172 sudo[22259] user_info: uid=10000 sudo[22259] user_info: euid=0 sudo[22259] user_info: gid=554801393 sudo[22259] user_info: egid=554801393 sudo[22259] user_info: groups=498,6004,6005,7001,106501,554800513,554801107,554801108,554801393,554801503,554802131,554802244,554807670 sudo[22259] user_info: cwd=/ sudo[22259] user_info: tty=/dev/pts/1 sudo[22259] user_info: host=client sudo[22259] user_info: lines=31 sudo[22259] user_info: cols=237",
"sudo[22259] <- sudo_parseln @ ./fileops.c:178 := sudoers: files sss",
"sudo[22259] <- sudo_sss_open @ ./sssd.c:305 := 0",
"sudo[22259] Received 3 rule(s)",
"sudo[22259] sssd/ldap sudoHost 'ALL' ... MATCH! sudo[22259] <- user_in_group @ ./pwutil.c:1010 := false",
"[domain/ domain_name ] debug_level = 0x3ff0 [sudo] debug_level = 0x3ff0",
"systemctl restart sssd",
"[sdap_sudo_refresh_load_done] (0x0400): Received 4-rules rules",
"[sssd[be[LDAP.PB]]] [sysdb_save_sudorule] (0x0400): Adding sudo rule demo-name",
"[sdap_sudo_refresh_load_done] (0x0400): Sudoers is successfully stored in cache",
"[sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [(&(objectClass=sudoRole)(|(!(sudoHost=*))(sudoHost=ALL)(sudoHost=client.example.com)(sudoHost=client)(sudoHost=192.0.2.1)(sudoHost=192.0.2.0/24)(sudoHost=2620:52:0:224e:21a:4aff:fe23:1394)(sudoHost=2620:52:0:224e::/64)(sudoHost=fe80::21a:4aff:fe23:1394)(sudoHost=fe80::/64)(sudoHost=+*)(|(sudoHost=*\\\\*)(sudoHost=*?*)(sudoHost=*\\2A*)(sudoHost=*[*]*))))][dc=example,dc=com]",
"ldapsearch -x -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -b dc=example,dc=com ' (&(objectClass=sudoRole)...) '",
"[sssd[sudo]] [sudosrv_get_sudorules_from_cache] (0x0400): Returning 4-rules rules for [[email protected]]",
"[sudosrv_get_sudorules_query_cache] (0x0200): Searching sysdb with [(&(objectClass=sudoRule)(|(sudoUser=ALL)(sudoUser=user)(sudoUser=#10001)(sudoUser=%group-1)(sudoUser=%user)(sudoUser=+*)))]",
"ldbsearch -H /var/lib/sss/db/cache_ domain_name .ldb -b cn=sysdb ' (&(objectClass=sudoRule)...) '"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/troubleshooting-sudo |
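Pulling the steps above together, a minimal shell sketch for enabling both logs and checking how many rules SSSD returned might look like the following; the log paths follow the examples above, and raising debug_level in /etc/sssd/sssd.conf is still edited by hand as described:
# Append the sudo debug directives shown above (run as root)
cat >> /etc/sudo.conf <<'EOF'
Debug sudo /var/log/sudo_debug.log all@debug
Debug sudoers.so /var/log/sudo_debug.log all@debug
EOF
# Restart SSSD after editing sssd.conf, then reproduce the sudo call
systemctl restart sssd
sudo -l
# Check the rule counts reported by the SSSD domain and sudo responder logs
grep 'Received' /var/log/sssd/sssd_*.log
grep 'Returning' /var/log/sssd/sssd_sudo.log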
Chapter 2. Deleting an OpenShift Dedicated cluster on AWS | Chapter 2. Deleting an OpenShift Dedicated cluster on AWS As cluster owner, you can delete your OpenShift Dedicated clusters. 2.1. Deleting your cluster You can delete your OpenShift Dedicated cluster in Red Hat OpenShift Cluster Manager. Prerequisites You logged in to OpenShift Cluster Manager . You created an OpenShift Dedicated cluster. Procedure From OpenShift Cluster Manager , click on the cluster you want to delete. Select Delete cluster from the Actions drop-down menu. Type the name of the cluster highlighted in bold, then click Delete . Cluster deletion occurs automatically. | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/openshift_dedicated_clusters_on_aws/osd-deleting-a-cluster |
Chapter 1. Support policy for Eclipse Temurin | Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these versions remain similar to Oracle JDK versions that Oracle designates as long-term support (LTS). A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.9/rn-openjdk-temurin-support-policy |
CI/CD overview | CI/CD overview OpenShift Container Platform 4.18 Contains information about CI/CD for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/cicd_overview/index |
Chapter 13. Troubleshooting builds | Chapter 13. Troubleshooting builds Use the following to troubleshoot build issues. 13.1. Resolving denial for access to resources If your request for access to resources is denied: Issue A build fails with: requested access to the resource is denied Resolution You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use: USD oc describe quota 13.2. Service certificate generation failure If your request for access to resources is denied: Issue If a service certificate generation fails with (service's service.beta.openshift.io/serving-cert-generation-error annotation contains): Example output secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 Resolution The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificates regeneration by removing the old secret, and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num . To clear the annotations, enter the following commands: USD oc delete secret <secret_name> USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command removing an annotation has a - after the annotation name to be removed. | [
"requested access to the resource is denied",
"oc describe quota",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/builds_using_buildconfig/troubleshooting-builds_build-configuration |
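As a worked example of the resolution in section 13.2, the sequence below uses the hypothetical service my-service together with the ssl-key secret from the example output; substitute your own names, and note the final check only confirms that the error annotations are gone:
# Delete the stale serving-cert secret and clear the error annotations
oc delete secret ssl-key
oc annotate service my-service service.beta.openshift.io/serving-cert-generation-error-
oc annotate service my-service service.beta.openshift.io/serving-cert-generation-error-num-
# The service CA controller regenerates the secret; verify the error annotations are cleared
oc describe service my-service | grep serving-cert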
Chapter 20. JbodStorage schema reference | Chapter 20. JbodStorage schema reference Used in: KafkaClusterSpec , KafkaNodePoolSpec The type property is a discriminator that distinguishes use of the JbodStorage type from EphemeralStorage , PersistentClaimStorage . It must have the value jbod for the type JbodStorage . Property Description type Must be jbod . string volumes List of volumes as Storage objects representing the JBOD disks array. EphemeralStorage , PersistentClaimStorage array | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-jbodstorage-reference |
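For context, a jbod storage block inside a Kafka or KafkaNodePool spec typically combines several persistent-claim volumes. The fragment below is an illustrative sketch only, with cluster-specific values such as the volume sizes chosen arbitrarily:
# Write an example storage fragment using JBOD with two persistent volumes
cat > jbod-storage-example.yaml <<'EOF'
storage:
  type: jbod
  volumes:
    - id: 0
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
    - id: 1
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
EOF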
4.56. expat | 4.56. expat 4.56.1. RHSA-2012:0731 - Moderate: expat security update Updated expat packages that fix two security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Expat is a C library written by James Clark for parsing XML documents. Security Fixes CVE-2012-0876 A denial of service flaw was found in the implementation of hash arrays in Expat. An attacker could use this flaw to make an application using Expat consume an excessive amount of CPU time by providing a specially-crafted XML file that triggers multiple hash function collisions. To mitigate this issue, randomization has been added to the hash function to reduce the chance of an attacker successfully causing intentional collisions. CVE-2012-1148 A memory leak flaw was found in Expat. If an XML file processed by an application linked against Expat triggered a memory re-allocation failure, Expat failed to free the previously allocated memory. This could cause the application to exit unexpectedly or crash when all available memory is exhausted. All Expat users should upgrade to these updated packages, which contain backported patches to correct these issues. After installing the updated packages, applications using the Expat library must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/expat |
Chapter 5. IDEs in workspaces | Chapter 5. IDEs in workspaces 5.1. Supported IDEs The default IDE in a new workspace is Microsoft Visual Studio Code - Open Source. Alternatively, you can choose another supported IDE: Table 5.1. Supported IDEs IDE id Note Microsoft Visual Studio Code - Open Source che-incubator/che-code/latest This is the default IDE that loads in a new workspace when the URL parameter or che-editor.yaml is not used. JetBrains IntelliJ IDEA Community Edition che-incubator/che-idea/latest Technology Preview . Use the Dashboard to select this IDE. 5.2. Repository-level IDE configuration in OpenShift Dev Spaces You can store IDE configuration files directly in the remote Git repository that contains your project source code. This way, one common IDE configuration is applied to all new workspaces that feature a clone of that repository. Such IDE configuration files might include the following: The /.che/che-editor.yaml file that stores a definition of the chosen IDE. IDE-specific configuration files that one would typically store locally for a desktop IDE. For example, the /.vscode/extensions.json file. 5.3. Microsoft Visual Studio Code - Open Source The OpenShift Dev Spaces build of Microsoft Visual Studio Code - Open Source is the default IDE of a new workspace. You can automate installation of Microsoft Visual Studio Code extensions from the Open VSX registry at workspace startup. See Automating installation of Microsoft Visual Studio Code extensions at workspace startup . Tip Use Tasks to find and run the commands specified in devfile.yaml . Use Dev Spaces commands by clicking Dev Spaces in the Status Bar or finding them through the Command Palette : Dev Spaces: Open Dashboard Dev Spaces: Open OpenShift Console Dev Spaces: Stop Workspace Dev Spaces: Restart Workspace Dev Spaces: Restart Workspace from Local Devfile Dev Spaces: Open Documentation Tip Configure IDE preferences on a per-workspace basis by invoking the Command Palette and selecting Preferences: Open Workspace Settings . Note You might see your organization's branding in this IDE if your organization customized it through a branded build. 5.3.1. Automating installation of Microsoft Visual Studio Code extensions at workspace startup To have the Microsoft Visual Studio Code - Open Source IDE automatically install chosen extensions, you can add an extensions.json file to the remote Git repository that contains your project source code and that will be cloned into workspaces. Prerequisites The public OpenVSX registry at open-vsx.org is selected and accessible over the internet. See Selecting an Open VSX registry instance . Tip To install recommended extensions in a restricted environment, consider the following options instead: https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/administration_guide/index#administration-guide:configuring-the-open-vsx-registry-url to point to your OpenVSX registry. Section 5.4, "Defining a common IDE" . Installing extensions from VSX files . Procedure Get the publisher and extension names of each chosen extension: Find the extension on the Open VSX registry website and copy the URL of the extension's listing page. Extract the <publisher> and <extension> names from the copied URL: Create a .vscode/extensions.json file in the remote Git repository. Add the <publisher> and <extension> names to the extensions.json file as follows: { "recommendations": [ " <publisher_A> . <extension_B> ", " <publisher_C> . <extension_D> ", " <publisher_E> . 
<extension_F> " ] } Verification Start a new workspace by using the URL of the remote Git repository that contains the created extensions.json file. In the IDE of the workspace, press Ctrl + Shift + X or go to Extensions to find each of the extensions listed in the file. The extension has the label This extension is enabled globally . Additional resources Open VSX registry - Extensions for Microsoft Visual Studio Code compatible editors Microsoft Visual Studio Code - Workspace recommended extensions 5.4. Defining a common IDE While the Section 1.1.1.2, "URL parameter for the IDE" enables you to start a workspace with your personal choice of the supported IDE, you might find it more convenient to define the same IDE for all workspaces for the same source code Git repository. To do so, use the che-editor.yaml file. This file supports even a detailed IDE configuration. Tip If you intend to start most or all of your organization's workspaces with the same IDE other than Microsoft Visual Studio Code - Open Source, an alternative is for the administrator of your organization's OpenShift Dev Spaces instance to specify another supported IDE as the default IDE at the OpenShift Dev Spaces instance level. This can be done with .spec.devEnvironments.defaultEditor in the CheCluster Custom Resource. 5.4.1. Setting up che-editor.yaml By using the che-editor.yaml file, you can set a common default IDE for your team and provide new contributors with the most suitable IDE for your project source code. You can also use the che-editor.yaml file when you need to set a different IDE default for a particular source code Git repository rather than the default IDE of your organization's OpenShift Dev Spaces instance. Procedure In the remote Git repository of your project source code, create a /.che/che-editor.yaml file with lines that specify the relevant parameter. Verification Start a new workspace with a clone of the Git repository . Verify that the specified IDE loads in the browser tab of the started workspace. 5.4.2. Parameters for che-editor.yaml The simplest way to select an IDE in the che-editor.yaml is to specify the id of an IDE from the table of supported IDEs: Table 5.2. Supported IDEs IDE id Note Microsoft Visual Studio Code - Open Source che-incubator/che-code/latest This is the default IDE that loads in a new workspace when the URL parameter or che-editor.yaml is not used. JetBrains IntelliJ IDEA Community Edition che-incubator/che-idea/latest Technology Preview . Use the Dashboard to select this IDE. Example 5.1. id selects an IDE from the plugin registry id: che-incubator/che-idea/latest As alternatives to providing the id parameter, the che-editor.yaml file supports a reference to the URL of another che-editor.yaml file or an inline definition for an IDE outside of a plugin registry: Example 5.2. reference points to a remote che-editor.yaml file reference: https:// <hostname_and_path_to_a_remote_file> /che-editor.yaml Example 5.3. 
inline specifies a complete definition for a customized IDE without a plugin registry inline: schemaVersion: 2.1.0 metadata: name: JetBrains IntelliJ IDEA Community IDE components: - name: intellij container: image: 'quay.io/che-incubator/che-idea:' volumeMounts: - name: projector-user path: /home/projector-user mountSources: true memoryLimit: 2048M memoryRequest: 32Mi cpuLimit: 1500m cpuRequest: 100m endpoints: - name: intellij attributes: type: main cookiesAuthEnabled: true urlRewriteSupported: true discoverable: false path: /?backgroundColor=434343&wss targetPort: 8887 exposure: public secure: false protocol: https attributes: {} - name: projector-user volume: {} For more complex scenarios, the che-editor.yaml file supports the registryUrl and override parameters: Example 5.4. registryUrl points to a custom plugin registry rather than to the default OpenShift Dev Spaces plugin registry id: <editor_id> 1 registryUrl: <url_of_custom_plugin_registry> 1 The id of the IDE in the custom plugin registry. Example 5.5. override of the default value of one or more defined properties of the IDE ... 1 override: containers: - name: che-idea memoryLimit: 1280Mi cpuLimit: 1510m cpuRequest: 102m ... 1 id: , registryUrl: , or reference: . | [
"https://www.open-vsx.org/extension/ <publisher> / <extension>",
"{ \"recommendations\": [ \" <publisher_A> . <extension_B> \", \" <publisher_C> . <extension_D> \", \" <publisher_E> . <extension_F> \" ] }",
"id: che-incubator/che-idea/latest",
"reference: https:// <hostname_and_path_to_a_remote_file> /che-editor.yaml",
"inline: schemaVersion: 2.1.0 metadata: name: JetBrains IntelliJ IDEA Community IDE components: - name: intellij container: image: 'quay.io/che-incubator/che-idea:next' volumeMounts: - name: projector-user path: /home/projector-user mountSources: true memoryLimit: 2048M memoryRequest: 32Mi cpuLimit: 1500m cpuRequest: 100m endpoints: - name: intellij attributes: type: main cookiesAuthEnabled: true urlRewriteSupported: true discoverable: false path: /?backgroundColor=434343&wss targetPort: 8887 exposure: public secure: false protocol: https attributes: {} - name: projector-user volume: {}",
"id: <editor_id> 1 registryUrl: <url_of_custom_plugin_registry>",
"... 1 override: containers: - name: che-idea memoryLimit: 1280Mi cpuLimit: 1510m cpuRequest: 102m"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/user_guide/ides-in-workspaces |
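To make Example 5.5 concrete, the following sketch shows a complete /.che/che-editor.yaml that combines the id of a supported IDE from Table 5.2 with an override of its container resources. The resource values are copied from the example above and should be treated as placeholders to adjust for your project.

```yaml
# /.che/che-editor.yaml - illustrative combination of id and override
id: che-incubator/che-idea/latest   # IDE id from the table of supported IDEs
override:
  containers:
    - name: che-idea                # container name as used in the override example above
      memoryLimit: 1280Mi
      cpuLimit: 1510m
      cpuRequest: 102m
```

Committing a file like this to the repository root means every new workspace started from a clone of the repository loads JetBrains IntelliJ IDEA Community Edition with the overridden resource limits.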
Appendix F. Swift request headers | Appendix F. Swift request headers Table F.1. Request Headers Name Description Type Required X-Auth-User The Ceph Object Gateway username to authenticate. String Yes X-Auth-Key The key associated with the Ceph Object Gateway username. String Yes | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/developer_guide/swift-request-headers_dev
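As a hedged illustration of how these headers are sent, the following curl sketch requests an auth token from a Ceph Object Gateway Swift endpoint. The gateway hostname, the /auth/1.0 path, and the testuser:swift account format are assumptions that depend on your deployment; only the X-Auth-User and X-Auth-Key headers come from the table above.

```sh
# Hypothetical gateway URL and auth path; adjust to your Ceph Object Gateway configuration
curl -i "https://rgw.example.com/auth/1.0" \
  -H "X-Auth-User: testuser:swift" \
  -H "X-Auth-Key: <swift_secret_key>"
# A successful response is expected to return token and storage URL headers for use in later requests
```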
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/viewing_reports_about_your_ansible_automation_environment/providing-feedback |
Chapter 149. StrimziPodSetSpec schema reference | Chapter 149. StrimziPodSetSpec schema reference Used in: StrimziPodSet Property Description selector Selector is a label query which matches all the pods managed by this StrimziPodSet . Only matchLabels is supported. If matchExpressions is set, it will be ignored. For more information, see the external documentation for meta/v1 labelselector . LabelSelector pods The Pods managed by this StrimziPodSet. For more information, see the external documentation for core/v1 pods . Map array | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-strimzipodsetspec-reference |
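The following sketch shows what a minimal StrimziPodSet using these two properties might look like. The apiVersion and the image reference are assumptions to verify against your release, and in practice these resources are created and managed by the Cluster Operator rather than written by hand.

```yaml
apiVersion: core.strimzi.io/v1beta2      # assumed API version; check the CRD installed by your release
kind: StrimziPodSet
metadata:
  name: my-cluster-kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  selector:
    matchLabels:                         # only matchLabels is supported; matchExpressions is ignored
      strimzi.io/cluster: my-cluster
  pods:                                  # each entry is a core/v1 Pod definition
    - apiVersion: v1
      kind: Pod
      metadata:
        name: my-cluster-kafka-0
        labels:
          strimzi.io/cluster: my-cluster
      spec:
        containers:
          - name: kafka
            image: <kafka_container_image>   # placeholder; supplied by the Cluster Operator in practice
```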
Chapter 4. action | Chapter 4. action This chapter describes the commands under the action command. 4.1. action definition create Create new action. Usage: Table 4.1. Positional arguments Value Summary definition Action definition file Table 4.2. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --public With this flag action will be marked as "public". --namespace [NAMESPACE] Namespace to create the action within. Table 4.3. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 4.4. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.5. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.2. action definition definition show Show action definition. Usage: Table 4.7. Positional arguments Value Summary name Action name Table 4.8. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace of the action. 4.3. action definition delete Delete action. Usage: Table 4.9. Positional arguments Value Summary action Name or id of action(s). Table 4.10. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace of the action(s). 4.4. action definition list List all actions. Usage: Table 4.11. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --namespace [NAMESPACE] Namespace of the actions. Table 4.12. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 4.13. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.15. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.5. action definition show Show specific action. Usage: Table 4.16. Positional arguments Value Summary action Action (name or id) Table 4.17. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to create the action within. Table 4.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 4.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.6. action definition update Update action. Usage: Table 4.22. Positional arguments Value Summary definition Action definition file Table 4.23. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --id ID Action id. --public With this flag action will be marked as "public". --namespace [NAMESPACE] Namespace of the action. Table 4.24. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 4.25. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.26. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.7. action execution delete Delete action execution. Usage: Table 4.28. Positional arguments Value Summary action_execution Id of action execution identifier(s). Table 4.29. Command arguments Value Summary -h, --help Show this help message and exit 4.8. action execution input show Show Action execution input data. Usage: Table 4.30. Positional arguments Value Summary id Action execution id. Table 4.31. Command arguments Value Summary -h, --help Show this help message and exit 4.9. action execution list List all Action executions. Usage: Table 4.32. Positional arguments Value Summary task_execution_id Task execution id. Table 4.33. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --oldest Display the executions starting from the oldest entries instead of the newest Table 4.34. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 4.35. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 4.36. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.37. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.10. action execution output show Show Action execution output data. Usage: Table 4.38. Positional arguments Value Summary id Action execution id. Table 4.39. Command arguments Value Summary -h, --help Show this help message and exit 4.11. action execution run Create new Action execution or just run specific action. Usage: Table 4.40. Positional arguments Value Summary name Action name to execute. input Action input. Table 4.41. Command arguments Value Summary -h, --help Show this help message and exit -s, --save-result Save the result into db. --run-sync Run the action synchronously. -t TARGET, --target TARGET Action will be executed on <target> executor. --namespace [NAMESPACE] Namespace of the action(s). Table 4.42. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 4.43. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.44. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.45. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.12. action execution show Show specific Action execution. Usage: Table 4.46. Positional arguments Value Summary action_execution Action execution id. Table 4.47. Command arguments Value Summary -h, --help Show this help message and exit Table 4.48. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 4.49. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.50. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.51. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 4.13. action execution update Update specific Action execution. Usage: Table 4.52. Positional arguments Value Summary id Action execution id. Table 4.53. Command arguments Value Summary -h, --help Show this help message and exit --state {PAUSED,RUNNING,SUCCESS,ERROR,CANCELLED} Action execution state --output OUTPUT Action execution output Table 4.54. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 4.55. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 4.56. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 4.57. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack action definition create [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--public] [--namespace [NAMESPACE]] definition",
"openstack action definition definition show [-h] [--namespace [NAMESPACE]] name",
"openstack action definition delete [-h] [--namespace [NAMESPACE]] action [action ...]",
"openstack action definition list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--namespace [NAMESPACE]]",
"openstack action definition show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] action",
"openstack action definition update [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--id ID] [--public] [--namespace [NAMESPACE]] definition",
"openstack action execution delete [-h] action_execution [action_execution ...]",
"openstack action execution input show [-h] id",
"openstack action execution list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--oldest] [task_execution_id]",
"openstack action execution output show [-h] id",
"openstack action execution run [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-s] [--run-sync] [-t TARGET] [--namespace [NAMESPACE]] name [input]",
"openstack action execution show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] action_execution",
"openstack action execution update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--state {PAUSED,RUNNING,SUCCESS,ERROR,CANCELLED}] [--output OUTPUT] id"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/action |
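A short, hedged workflow tying these subcommands together is sketched below. The definition file, namespace, action name, and JSON input are illustrative placeholders; only options shown in the usage blocks above are used.

```sh
# Create a public action from a definition file in an illustrative "demo" namespace
openstack action definition create --public --namespace demo my_actions.yaml

# List the definitions in that namespace
openstack action definition list --namespace demo --limit 10

# Run an action synchronously with illustrative JSON input and save the result
openstack action execution run --run-sync -s --namespace demo my_action '{"param1": "value1"}'

# Review recent action executions
openstack action execution list --limit 5
```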
Chapter 6. Monitoring the Dev Workspace operator | Chapter 6. Monitoring the Dev Workspace operator This chapter describes how to configure an example monitoring stack to process metrics exposed by the Dev Workspace operator. You must enable the Dev Workspace operator to follow the instructions in this chapter. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#enabling-dev-workspace-operator.adoc . 6.1. Collecting Dev Workspace operator metrics with Prometheus This section describes how to use the Prometheus to collect, store, and query metrics about the Dev Workspace operator. Prerequisites The devworkspace-controller-metrics service is exposing metrics on port 8443 . The devworkspace-webhookserver service is exposing metrics on port 9443 . By default, the service exposes metrics on port 9443 . Prometheus 2.26.0 or later is running. The Prometheus console is running on port 9090 with a corresponding service and route . See First steps with Prometheus . Procedure Create a ClusterRoleBinding to bind the ServiceAccount associated with Prometheus to the devworkspace-controller-metrics-reader ClusterRole . Without the ClusterRoleBinding , you cannot access Dev Workspace metrics because they are protected with role-based access control (RBAC). Example 6.1. ClusterRole example apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: devworkspace-controller-metrics-reader rules: - nonResourceURLs: - /metrics verbs: - get Example 6.2. ClusterRoleBinding example apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: devworkspace-controller-metrics-binding subjects: - kind: ServiceAccount name: <ServiceAccount name associated with the Prometheus Pod> namespace: <Prometheus namespace> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: devworkspace-controller-metrics-reader Configure Prometheus to scrape metrics from the 8443 port exposed by the devworkspace-controller-metrics service, and 9443 port exposed by the devworkspace-webhookserver service. Example 6.3. Prometheus configuration example apiVersion: v1 kind: ConfigMap metadata: name: prometheus-config data: prometheus.yml: |- global: scrape_interval: 5s 1 evaluation_interval: 5s 2 scrape_configs: 3 - job_name: 'DevWorkspace' authorization: type: Bearer credentials_file: '/var/run/secrets/kubernetes.io/serviceaccount/token' tls_config: insecure_skip_verify: true static_configs: - targets: ['devworkspace-controller-metrics:8443'] 4 - job_name: 'DevWorkspace webhooks' authorization: type: Bearer credentials_file: '/var/run/secrets/kubernetes.io/serviceaccount/token' tls_config: insecure_skip_verify: true static_configs: - targets: ['devworkspace-webhookserver:9443'] 5 1 Rate at which a target is scraped. 2 Rate at which recording and alerting rules are re-checked. 3 Resources that Prometheus monitors. In the default configuration, two jobs ( DevWorkspace and DevWorkspace webhooks ), scrape the time series data exposed by the devworkspace-controller-metrics and devworkspace-webhookserver services. 4 Scrape metrics from the 8443 port. 5 Scrape metrics from the 9443 port. Verification steps Use the Prometheus console to view targets and metrics. For more information, see Using the expression browser . Additional resources First steps with Prometheus . Configuring Prometheus . Querying Prometheus . Prometheus metric types . 6.2. 
Dev Workspace-specific metrics This section describes the Dev Workspace-specific metrics exposed by the devworkspace-controller-metrics service. Table 6.1. Metrics Name Type Description Labels devworkspace_started_total Counter Number of Dev Workspace starting events. source , routingclass devworkspace_started_success_total Counter Number of Dev Workspaces successfully entering the Running phase. source , routingclass devworkspace_fail_total Counter Number of failed Dev Workspaces. source , reason devworkspace_startup_time Histogram Total time taken to start a Dev Workspace, in seconds. source , routingclass Table 6.2. Labels Name Description Values source The controller.devfile.io/devworkspace-source label of the Dev Workspace. string routingclass The spec.routingclass of the Dev Workspace. "basic|cluster|cluster-tls|web-terminal" reason The workspace startup failure reason. "BadRequest|InfrastructureFailure|Unknown" Table 6.3. Startup failure reasons Name Description BadRequest Startup failure due to an invalid devfile used to create a Dev Workspace. InfrastructureFailure Startup failure due to the following errors: CreateContainerError , RunContainerError , FailedScheduling , FailedMount . Unknown Unknown failure reason. | [
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: devworkspace-controller-metrics-reader rules: - nonResourceURLs: - /metrics verbs: - get",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: devworkspace-controller-metrics-binding subjects: - kind: ServiceAccount name: <ServiceAccount name associated with the Prometheus Pod> namespace: <Prometheus namespace> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: devworkspace-controller-metrics-reader",
"apiVersion: v1 kind: ConfigMap metadata: name: prometheus-config data: prometheus.yml: |- global: scrape_interval: 5s 1 evaluation_interval: 5s 2 scrape_configs: 3 - job_name: 'DevWorkspace' authorization: type: Bearer credentials_file: '/var/run/secrets/kubernetes.io/serviceaccount/token' tls_config: insecure_skip_verify: true static_configs: - targets: ['devworkspace-controller-metrics:8443'] 4 - job_name: 'DevWorkspace webhooks' authorization: type: Bearer credentials_file: '/var/run/secrets/kubernetes.io/serviceaccount/token' tls_config: insecure_skip_verify: true static_configs: - targets: ['devworkspace-webhookserver:9443'] 5"
] | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/administration_guide/monitoring-the-dev-workspace-operator |
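Once Prometheus is scraping the two services, the Dev Workspace metrics listed above can be queried through the standard Prometheus HTTP API. The following curl sketch assumes the example Prometheus instance is reachable at an illustrative hostname; the metric and label names come from Table 6.1 and Table 6.2.

```sh
# Successful workspace starts grouped by routing class (hostname is illustrative)
curl -sG "http://prometheus.example.com:9090/api/v1/query" \
  --data-urlencode 'query=sum by (routingclass) (devworkspace_started_success_total)'

# Failure rate over the last hour, grouped by failure reason
curl -sG "http://prometheus.example.com:9090/api/v1/query" \
  --data-urlencode 'query=sum by (reason) (rate(devworkspace_fail_total[1h]))'
```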
Chapter 3. Deploying a Red Hat Enterprise Linux image as a Google Compute Engine instance on Google Cloud Platform | Chapter 3. Deploying a Red Hat Enterprise Linux image as a Google Compute Engine instance on Google Cloud Platform To set up a deployment of Red Hat Enterprise Linux 8 (RHEL 8) on Google Cloud Platform (GCP), you can deploy RHEL 8 as a Google Compute Engine (GCE) instance on GCP. Note For a list of Red Hat product certifications for GCP, see Red Hat on Google Cloud Platform . Important You can create a custom VM from an ISO image, but Red Hat recommends that you use the Red Hat Image Builder product to create customized images for use on specific cloud providers. See Composing a Customized RHEL System Image for more information. Prerequisites You need a Red Hat Customer Portal account to complete the procedures in this chapter. Create an account with GCP to access the Google Cloud Platform Console. See Google Cloud for more information. 3.1. Red Hat Enterprise Linux image options on GCP You can use multiple types of images for deploying RHEL 8 on Google Cloud Platform. Based on your requirements, consider which option is optimal for your use case. Table 3.1. Image options Image option Subscriptions Sample scenario Considerations Deploy a Red Hat Gold Image. Use your existing Red Hat subscriptions. Select a Red Hat Gold Image on Google Cloud Platform. For details on Gold Images and how to access them on Google Cloud Platform, see the Red Hat Cloud Access Reference Guide . The subscription includes the Red Hat product cost; you pay Google for all other instance costs. Red Hat provides support directly for custom RHEL images. Deploy a custom image that you move to GCP. Use your existing Red Hat subscriptions. Upload your custom image and attach your subscriptions. The subscription includes the Red Hat product cost; you pay all other instance costs. Red Hat provides support directly for custom RHEL images. Deploy an existing GCP image that includes RHEL. The GCP images include a Red Hat product. Choose a RHEL image when you launch an instance on the GCP Compute Engine , or choose an image from the Google Cloud Platform Marketplace . You pay GCP hourly on a pay-as-you-go model. Such images are called "on-demand" images. GCP offers support for on-demand images through a support agreement. Note You can create a custom image for GCP by using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information. Important You cannot convert an on-demand instance to a custom RHEL instance. To change from an on-demand image to a custom RHEL bring-your-own-subscription (BYOS) image: Create a new custom RHEL instance and migrate data from your on-demand instance. Cancel your on-demand instance after you migrate your data to avoid double billing. Additional resources Red Hat in the Public Cloud Compute Engine images Creating an instance from a custom image 3.2. Understanding base images To create a base VM from an ISO image, you can use preconfigured base images and their configuration settings. 3.2.1. Using a custom base image To manually configure a virtual machine (VM), first create a base (starter) VM image. Then, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image. Additional resources Red Hat Enterprise Linux 3.2.2. Virtual machine configuration settings Cloud VMs must have the following configuration settings. 
Table 3.2. VM configuration settings Setting Recommendation ssh ssh must be enabled to provide remote access to your VMs. dhcp The primary virtual adapter should be configured for dhcp. 3.3. Creating a base VM from an ISO image To create a RHEL 8 base image from an ISO image, enable your host machine for virtualization and create a RHEL virtual machine (VM). Prerequisites Virtualization is enabled on your host machine. You have downloaded the latest Red Hat Enterprise Linux ISO image from the Red Hat Customer Portal and moved the image to /var/lib/libvirt/images . 3.3.1. Creating a VM from the RHEL ISO image Procedure Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 8 for information and procedures. Create and start a basic Red Hat Enterprise Linux VM. For instructions, see Creating virtual machines . If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio . For example, the following command creates a kvmtest VM by using the /home/username/Downloads/rhel8.iso image: If you use the web console to create your VM, follow the procedure in Creating virtual machines by using the web console , with these caveats: Do not check Immediately Start VM . Change your Memory size to your preferred settings. Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM. 3.3.2. Completing the RHEL installation To finish the installation of a RHEL system that you want to deploy on Google Cloud Platform (GCP), customize the Installation Summary view, begin the installation, and enable root access once the VM launches. Procedure Choose the language you want to use during the installation process. On the Installation Summary view: Click Software Selection and check Minimal Install . Click Done . Click Installation Destination and check Custom under Storage Configuration . Verify at least 500 MB for /boot . You can use the remaining space for root / . Standard partitions are recommended, but you can use Logical Volume Manager (LVM). You can use xfs, ext4, or ext3 for the file system. Click Done when you are finished with changes. Click Begin Installation . Set a Root Password . Create other users as applicable. Reboot the VM and log in as root once the installation completes. Configure the image. Register the VM and enable the Red Hat Enterprise Linux 8 repository. Ensure that the cloud-init package is installed and enabled. Power down the VM. Additional resources Introduction to cloud-init 3.4. Uploading the RHEL image to GCP To run your RHEL 8 instance on Google Cloud Platform (GCP), you must upload your RHEL 8 image to GCP. 3.4.1. Creating a new project on GCP To upload your Red Hat Enterprise Linux 8 image to Google Cloud Platform (GCP), you must first create a new project on GCP. Prerequisites You must have an account with GCP. If you do not, see Google Cloud for more information. Procedure Launch the GCP Console . Click the drop-down menu to the right of Google Cloud Platform . From the pop-up menu, click NEW PROJECT . From the New Project window, enter a name for your new project. Check Organization . Click the drop-down menu to change the organization, if necessary. Confirm the Location of your parent organization or folder. Click Browse to search for and change this value, if necessary.
Click CREATE to create your new GCP project. Note Once you have installed the Google Cloud SDK, you can use the gcloud projects create CLI command to create a project. For example: The example creates a project with the project ID my-gcp-project3 and the project name project3 . See gcloud project create for more information. Additional resources Creating and Managing Resources in Google Cloud 3.4.2. Installing the Google Cloud SDK Many of the procedures to manage HA clusters on Google Cloud Platform (GCP) require the tools in the Google Cloud SDK. Procedure Follow the GCP instructions for downloading and extracting the Google Cloud SDK archive. See the GCP document Quickstart for Linux for details. Follow the same instructions for initializing the Google Cloud SDK. Note Once you have initialized the Google Cloud SDK, you can use the gcloud CLI commands to perform tasks and obtain information about your project and instances. For example, you can display project information with the gcloud compute project-info describe --project <project-name> command. Additional resources Quickstart for Linux gcloud command reference gcloud command-line tool overview 3.4.3. Creating SSH keys for Google Compute Engine Generate and register SSH keys with GCE so that you can SSH directly into an instance by using its public IP address. Procedure Use the ssh-keygen command to generate an SSH key pair for use with GCE. From the GCP Console Dashboard page , click the Navigation menu to the left of the Google Cloud Console banner and select Compute Engine and then select Metadata . Click SSH Keys and then click Edit . Enter the output generated from the ~/.ssh/google_compute_engine.pub file and click Save . You can now connect to your instance by using standard SSH. Note You can run the gcloud compute config-ssh command to populate your config file with aliases for your instances. The aliases allow simple SSH connections by instance name. For information about the gcloud compute config-ssh command, see gcloud compute config-ssh . Additional resources gcloud compute config-ssh Connecting to instances 3.4.4. Creating a storage bucket in GCP Storage To import your RHEL 8 image to GCP, you must first create a GCP Storage Bucket. Procedure If you are not already logged in to GCP, log in with the following command. Create a storage bucket. Note Alternatively, you can use the Google Cloud Console to create a bucket. See Create a bucket for information. Additional resources Create a bucket 3.4.5. Converting and uploading your image to your GCP Bucket Before a local RHEL 8 image can be deployed in GCP, you must first convert and upload the image to your GCP Bucket. The following steps describe converting an qcow2 image to raw format and then uploading the image as a tar archive. However, using different formats is possible as well. Procedure Run the qemu-img command to convert your image. The converted image must have the name disk.raw . Tar the image. Upload the image to the bucket you created previously. Upload could take a few minutes. From the Google Cloud Platform home screen, click the collapsed menu icon and select Storage and then select Browser . Click the name of your bucket. The tarred image is listed under your bucket name. Note You can also upload your image by using the GCP Console . To do so, click the name of your bucket and then click Upload files . Additional resources Manually importing virtual disks Choosing an import method 3.4.6. 
Creating an image from the object in the GCP bucket Before you can create a GCE image from an object that you uploaded to your GCP bucket, you must convert the object into a GCE image. Procedure Run the following command to create an image for GCE. Specify the name of the image you are creating, the bucket name, and the name of the tarred image. Note Alternatively, you can use the Google Cloud Console to create an image. See Creating, deleting, and deprecating custom images for more information. Optional: Find the image in the GCP Console. Click the Navigation menu to the left of the Google Cloud Console banner. Select Compute Engine and then Images . Additional resources Creating, deleting, and deprecating custom images gcloud compute images create 3.4.7. Creating a Google Compute Engine instance from an image To configure a GCE VM instance from an image, use the GCP Console. Note See Creating and starting a VM instance for more information about GCE VM instances and their configuration options. Procedure From the GCP Console Dashboard page , click the Navigation menu to the left of the Google Cloud Console banner and select Compute Engine and then select Images . Select your image. Click Create Instance . On the Create an instance page, enter a Name for your instance. Choose a Region and Zone . Choose a Machine configuration that meets or exceeds the requirements of your workload. Ensure that Boot disk specifies the name of your image. Optional: Under Firewall , select Allow HTTP traffic or Allow HTTPS traffic . Click Create . Note These are the minimum configuration options necessary to create a basic instance. Review additional options based on your application requirements. Find your image under VM instances . From the GCP Console Dashboard, click the Navigation menu to the left of the Google Cloud Console banner and select Compute Engine and then select VM instances . Note Alternatively, you can use the gcloud compute instances create CLI command to create a GCE VM instance from an image. A simple example follows. The example creates a VM instance named myinstance3 in zone us-central1-a based upon the existing image test-iso2-image . See gcloud compute instances create for more information. 3.4.8. Connecting to your instance Connect to your GCE instance by using its public IP address. Procedure Ensure that your instance is running. The following command lists information about your GCE instance, including whether the instance is running, and, if so, the public IP address of the running instance. Connect to your instance by using standard SSH. The example uses the google_compute_engine key created earlier. Note GCP offers a number of ways to SSH into your instance. See Connecting to instances for more information. You can also connect to your instance using the root account and password you set previously. Additional resources gcloud compute instances list Connecting to instances 3.4.9. Attaching Red Hat subscriptions Using the subscription-manager command, you can register and attach your Red Hat subscription to a RHEL instance. Prerequisites You must have enabled your subscriptions. Procedure Register your system. Attach your subscriptions. You can use an activation key to attach subscriptions. See Creating Red Hat Customer Portal Activation Keys for more information. Alternatively, you can manually attach a subscription by using the ID of the subscription pool (Pool ID). See Attaching a host-based subscription to hypervisors . 
Optional: To collect various system metrics about the instance in the Red Hat Hybrid Cloud Console , you can register the instance with Red Hat Insights . For information on further configuration of Red Hat Insights, see Client Configuration Guide for Red Hat Insights . Additional resources Creating Red Hat Customer Portal Activation Keys Attaching a host-based subscription to hypervisors Client Configuration Guide for Red Hat Insights 3.5. Additional resources Red Hat in the Public Cloud Google Cloud | [
"virt-install --name kvmtest --memory 2048 --vcpus 2 --cdrom /home/username/Downloads/rhel8.iso,bus=virtio --os-variant=rhel8.0",
"subscription-manager register --auto-attach",
"yum install cloud-init systemctl enable --now cloud-init.service",
"gcloud projects create my-gcp-project3 --name project3",
"ssh-keygen -t rsa -f ~/.ssh/google_compute_engine",
"ssh -i ~/.ssh/google_compute_engine <username> @ <instance_external_ip>",
"gcloud auth login",
"gsutil mb gs://bucket_name",
"qemu-img convert -f qcow2 -O raw rhel-8.0-sample.qcow2 disk.raw",
"tar --format=oldgnu -Sczf disk.raw.tar.gz disk.raw",
"gsutil cp disk.raw.tar.gz gs://bucket_name",
"gcloud compute images create my-image-name --source-uri gs://my-bucket-name/disk.raw.tar.gz",
"gcloud compute instances create myinstance3 --zone=us-central1-a --image test-iso2-image",
"gcloud compute instances list",
"ssh -i ~/.ssh/google_compute_engine <user_name>@<instance_external_ip>",
"subscription-manager register --auto-attach",
"insights-client register --display-name <display-name-value>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_rhel_8_on_google_cloud_platform/assembly_deploying-a-rhel-image-as-a-compute-engine-instance-on-google-cloud-platform |
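The individual steps above can be strung together into one hedged, end-to-end sequence. The bucket, image, instance, and zone names below are illustrative placeholders; each command is taken from the procedures in this chapter.

```sh
# Convert, archive, and upload the image (names are placeholders)
qemu-img convert -f qcow2 -O raw rhel-8.0-sample.qcow2 disk.raw
tar --format=oldgnu -Sczf disk.raw.tar.gz disk.raw
gsutil cp disk.raw.tar.gz gs://my-bucket-name

# Create a GCE image from the uploaded object and boot an instance from it
gcloud compute images create my-rhel8-image --source-uri gs://my-bucket-name/disk.raw.tar.gz
gcloud compute instances create my-rhel8-instance --zone=us-central1-a --image my-rhel8-image

# Confirm the instance is running and connect to it
gcloud compute instances list
ssh -i ~/.ssh/google_compute_engine <user_name>@<instance_external_ip>
```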
Chapter 1. Preparing to install on Nutanix | Chapter 1. Preparing to install on Nutanix Before you install an OpenShift Container Platform cluster, be sure that your Nutanix environment meets the following requirements. 1.1. Nutanix version requirements You must install the OpenShift Container Platform cluster to a Nutanix environment that meets the following requirements. Table 1.1. Version requirements for Nutanix virtual environments Component Required version Nutanix AOS 6.5.2.7 or later Prism Central pc.2022.6 or later 1.2. Environment requirements Before you install an OpenShift Container Platform cluster, review the following Nutanix AOS environment requirements. 1.2.1. Required account privileges The installation program requires access to a Nutanix account with the necessary permissions to deploy the cluster and to maintain the daily operation of it. The following options are available to you: You can use a local Prism Central user account with administrative privileges. Using a local account is the quickest way to grant access to an account with the required permissions. If your organization's security policies require that you use a more restrictive set of permissions, use the permissions that are listed in the following table to create a custom Cloud Native role in Prism Central. You can then assign the role to a user account that is a member of a Prism Central authentication directory. Consider the following when managing this user account: When assigning entities to the role, ensure that the user can access only the Prism Element and subnet that are required to deploy the virtual machines. Ensure that the user is a member of the project to which it needs to assign virtual machines. For more information, see the Nutanix documentation about creating a Custom Cloud Native role , assigning a role , and adding a user to a project . Example 1.1. Required permissions for creating a Custom Cloud Native role Nutanix Object When required Required permissions in Nutanix API Description Categories Always Create_Category_Mapping Create_Or_Update_Name_Category Create_Or_Update_Value_Category Delete_Category_Mapping Delete_Name_Category Delete_Value_Category View_Category_Mapping View_Name_Category View_Value_Category Create, read, and delete categories that are assigned to the OpenShift Container Platform machines. Images Always Create_Image Delete_Image View_Image Create, read, and delete the operating system images used for the OpenShift Container Platform machines. Virtual Machines Always Create_Virtual_Machine Delete_Virtual_Machine View_Virtual_Machine Create, read, and delete the OpenShift Container Platform machines. Clusters Always View_Cluster View the Prism Element clusters that host the OpenShift Container Platform machines. Subnets Always View_Subnet View the subnets that host the OpenShift Container Platform machines. Projects If you will associate a project with compute machines, control plane machines, or all machines. View_Project View the projects defined in Prism Central and allow a project to be assigned to the OpenShift Container Platform machines. 1.2.2. Cluster limits Available resources vary between clusters. The number of possible clusters within a Nutanix environment is limited primarily by available storage space and any limitations associated with the resources that the cluster creates, and resources that you require to deploy the cluster, such a IP addresses and networks. 1.2.3. 
Cluster resources A minimum of 800 GB of storage is required to use a standard cluster. When you deploy a OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your Nutanix instance. Although these resources use 856 GB of storage, the bootstrap node is destroyed as part of the installation process. A standard OpenShift Container Platform installation creates the following resources: 1 label Virtual machines: 1 disk image 1 temporary bootstrap node 3 control plane nodes 3 compute machines 1.2.4. Networking requirements You must use either AHV IP Address Management (IPAM) or Dynamic Host Configuration Protocol (DHCP) for the network and ensure that it is configured to provide persistent IP addresses to the cluster machines. Additionally, create the following networking resources before you install the OpenShift Container Platform cluster: IP addresses DNS records Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, an NTP server prevents errors typically associated with asynchronous server clocks. 1.2.4.1. Required IP Addresses An installer-provisioned installation requires two static virtual IP (VIP) addresses: A VIP address for the API is required. This address is used to access the cluster API. A VIP address for ingress is required. This address is used for cluster ingress traffic. You specify these IP addresses when you install the OpenShift Container Platform cluster. 1.2.4.2. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the Nutanix instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. If you use your own DNS or DHCP server, you must also create records for each node, including the bootstrap, control plane, and compute nodes. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.2. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 1.3. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on Nutanix, you must set the CCO to manual mode as part of the installation process. To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. 
You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: $ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: $ chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: $ ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Preparing to update a cluster with manually maintained credentials
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command."
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_nutanix/preparing-to-install-on-nutanix |
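Before running the installation program, it can help to confirm that the two required DNS records resolve to the VIP addresses you reserved. The cluster name and base domain below are illustrative; substitute your own values.

```sh
# The API record should return the API VIP
dig +short api.ocp4.example.com

# Any name under *.apps should return the Ingress VIP
dig +short test.apps.ocp4.example.com
```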
Part V. Administration: Managing Authentication | Part V. Administration: Managing Authentication This part provides instruction on how to set up and manage smart card authentication. In addition, it covers certificate-related topics, such as issuing certificates, configuring certificate-based authentication, and controlling certificate validity in Identity Management . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/p.administration-guide-authentication |
Chapter 28. Network Observability | Chapter 28. Network Observability 28.1. Network Observability Operator release notes The Network Observability Operator enables administrators to observe and analyze network traffic flows for OpenShift Container Platform clusters. These release notes track the development of the Network Observability Operator in the OpenShift Container Platform. For an overview of the Network Observability Operator, see About Network Observability Operator . 28.1.1. Network Observability Operator 1.3.0 The following advisory is available for the Network Observability Operator 1.3.0: RHSA-2023:3905 Network Observability Operator 1.3.0 28.1.1.1. Channel deprecation You must switch your channel from v1.0.x to stable to receive future Operator updates. The v1.0.x channel is deprecated and planned for removal in the release. 28.1.1.2. New features and enhancements 28.1.1.2.1. Multi-tenancy in Network Observability System administrators can allow and restrict individual user access, or group access, to the flows stored in Loki. For more information, see Multi-tenancy in Network Observability . 28.1.1.2.2. Flow-based metrics dashboard This release adds a new dashboard, which provides an overview of the network flows in your OpenShift Container Platform cluster. For more information, see Network Observability metrics . 28.1.1.2.3. Troubleshooting with the must-gather tool Information about the Network Observability Operator can now be included in the must-gather data for troubleshooting. For more information, see Network Observability must-gather . 28.1.1.2.4. Multiple architectures now supported Network Observability Operator can now run on an amd64, ppc64le, or arm64 architecture. Previously, it only ran on amd64. 28.1.1.3. Deprecated features 28.1.1.3.1. Deprecated configuration parameter setting The release of Network Observability Operator 1.3 deprecates the spec.Loki.authToken HOST setting. When using the Loki Operator, you must now only use the FORWARD setting. 28.1.1.4. Bug fixes Previously, when the Operator was installed from the CLI, the Role and RoleBinding that are necessary for the Cluster Monitoring Operator to read the metrics were not installed as expected. The issue did not occur when the operator was installed from the web console. Now, either way of installing the Operator installs the required Role and RoleBinding . ( NETOBSERV-1003 ) Since version 1.2, the Network Observability Operator can raise alerts when a problem occurs with the flows collection. Previously, due to a bug, the related configuration to disable alerts, spec.processor.metrics.disableAlerts was not working as expected and sometimes ineffectual. Now, this configuration is fixed so that it is possible to disable the alerts. ( NETOBSERV-976 ) Previously, when Network Observability was configured with spec.loki.authToken set to DISABLED , only a kubeadmin cluster administrator was able to view network flows. Other types of cluster administrators received authorization failure. Now, any cluster administrator is able to view network flows. ( NETOBSERV-972 ) Previously, a bug prevented users from setting spec.consolePlugin.portNaming.enable to false . Now, this setting can be set to false to disable port-to-service name translation. ( NETOBSERV-971 ) Previously, the metrics exposed by the console plugin were not collected by the Cluster Monitoring Operator (Prometheus), due to an incorrect configuration. 
Now the configuration has been fixed so that the console plugin metrics are correctly collected and accessible from the OpenShift Container Platform web console. ( NETOBSERV-765 ) Previously, when processor.metrics.tls was set to AUTO in the FlowCollector , the flowlogs-pipeline servicemonitor did not adapt the appropriate TLS scheme, and metrics were not visible in the web console. Now the issue is fixed for AUTO mode. ( NETOBSERV-1070 ) Previously, certificate configuration, such as used for Kafka and Loki, did not allow specifying a namespace field, implying that the certificates had to be in the same namespace where Network Observability is deployed. Moreover, when using Kafka with TLS/mTLS, the user had to manually copy the certificate(s) to the privileged namespace where the eBPF agent pods are deployed and manually manage certificate updates, such as in the case of certificate rotation. Now, Network Observability setup is simplified by adding a namespace field for certificates in the FlowCollector resource. As a result, users can now install Loki or Kafka in different namespaces without needing to manually copy their certificates in the Network Observability namespace. The original certificates are watched so that the copies are automatically updated when needed. ( NETOBSERV-773 ) Previously, the SCTP, ICMPv4 and ICMPv6 protocols were not covered by the Network Observability agents, resulting in a less comprehensive network flows coverage. These protocols are now recognized to improve the flows coverage. ( NETOBSERV-934 ) 28.1.1.5. Known issue When processor.metrics.tls is set to PROVIDED in the FlowCollector , the flowlogs-pipeline servicemonitor is not adapted to the TLS scheme. ( NETOBSERV-1087 ) 28.1.2. Network Observability Operator 1.2.0 The following advisory is available for the Network Observability Operator 1.2.0: RHSA-2023:1817 Network Observability Operator 1.2.0 28.1.2.1. Preparing for the update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. Until the 1.2 release of the Network Observability Operator, the only channel available was v1.0.x . The 1.2 release of the Network Observability Operator introduces the stable update channel for tracking and receiving updates. You must switch your channel from v1.0.x to stable to receive future Operator updates. The v1.0.x channel is deprecated and planned for removal in a following release. 28.1.2.2. New features and enhancements 28.1.2.2.1. Histogram in Traffic Flows view You can now choose to show a histogram bar chart of flows over time. The histogram enables you to visualize the history of flows without hitting the Loki query limit. For more information, see Using the histogram . 28.1.2.2.2. Conversation tracking You can now query flows by Log Type , which enables grouping network flows that are part of the same conversation. For more information, see Working with conversations . 28.1.2.2.3. Network Observability health alerts The Network Observability Operator now creates automatic alerts if the flowlogs-pipeline is dropping flows because of errors at the write stage or if the Loki ingestion rate limit has been reached. For more information, see Viewing health information . 28.1.2.3. Bug fixes Previously, after changing the namespace value in the FlowCollector spec, eBPF Agent pods running in the namespace were not appropriately deleted. Now, the pods running in the namespace are appropriately deleted. 
( NETOBSERV-774 ) Previously, after changing the caCert.name value in the FlowCollector spec (such as in Loki section), FlowLogs-Pipeline pods and Console plug-in pods were not restarted, therefore they were unaware of the configuration change. Now, the pods are restarted, so they get the configuration change. ( NETOBSERV-772 ) Previously, network flows between pods running on different nodes were sometimes not correctly identified as being duplicates because they are captured by different network interfaces. This resulted in over-estimated metrics displayed in the console plug-in. Now, flows are correctly identified as duplicates, and the console plug-in displays accurate metrics. ( NETOBSERV-755 ) The "reporter" option in the console plug-in is used to filter flows based on the observation point of either source node or destination node. Previously, this option mixed the flows regardless of the node observation point. This was due to network flows being incorrectly reported as Ingress or Egress at the node level. Now, the network flow direction reporting is correct. The "reporter" option filters for source observation point, or destination observation point, as expected. ( NETOBSERV-696 ) Previously, for agents configured to send flows directly to the processor as gRPC+protobuf requests, the submitted payload could be too large and is rejected by the processors' GRPC server. This occurred under very-high-load scenarios and with only some configurations of the agent. The agent logged an error message, such as: grpc: received message larger than max . As a consequence, there was information loss about those flows. Now, the gRPC payload is split into several messages when the size exceeds a threshold. As a result, the server maintains connectivity. ( NETOBSERV-617 ) 28.1.2.4. Known issue In the 1.2.0 release of the Network Observability Operator, using Loki Operator 5.6, a Loki certificate transition periodically affects the flowlogs-pipeline pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate transition. ( NETOBSERV-980 ) 28.1.2.5. Notable technical changes Previously, you could install the Network Observability Operator using a custom namespace. This release introduces the conversion webhook which changes the ClusterServiceVersion . Because of this change, all the available namespaces are no longer listed. Additionally, to enable Operator metrics collection, namespaces that are shared with other Operators, like the openshift-operators namespace, cannot be used. Now, the Operator must be installed in the openshift-netobserv-operator namespace. You cannot automatically upgrade to the new Operator version if you previously installed the Network Observability Operator using a custom namespace. If you previously installed the Operator using a custom namespace, you must delete the instance of the Operator that was installed and re-install your operator in the openshift-netobserv-operator namespace. It is important to note that custom namespaces, such as the commonly used netobserv namespace, are still possible for the FlowCollector , Loki, Kafka, and other plug-ins. ( NETOBSERV-907 )( NETOBSERV-956 ) 28.1.3. 
Network Observability Operator 1.1.0 The following advisory is available for the Network Observability Operator 1.1.0: RHSA-2023:0786 Network Observability Operator Security Advisory Update The Network Observability Operator is now stable and the release channel is upgraded to v1.1.0 . 28.1.3.1. Bug fix Previously, unless the Loki authToken configuration was set to FORWARD mode, authentication was no longer enforced, allowing any user who could connect to the OpenShift Container Platform console in an OpenShift Container Platform cluster to retrieve flows without authentication. Now, regardless of the Loki authToken mode, only cluster administrators can retrieve flows. ( BZ#2169468 ) 28.2. About Network Observability Red Hat offers cluster administrators the Network Observability Operator to observe the network traffic for OpenShift Container Platform clusters. The Network Observability Operator uses the eBPF technology to create network flows. The network flows are then enriched with OpenShift Container Platform information and stored in Loki. You can view and analyze the stored network flows information in the OpenShift Container Platform console for further insight and troubleshooting. 28.2.1. Dependency of Network Observability Operator The Network Observability Operator requires the following Operators: Loki: You must install Loki. Loki is the backend that is used to store all collected flows. It is recommended to install Loki by installing the Red Hat Loki Operator for the installation of Network Observability Operator. 28.2.2. Optional dependencies of the Network Observability Operator Grafana: You can install Grafana for using custom dashboards and querying capabilities, by using the Grafana Operator. Red Hat does not support Grafana Operator. Kafka: It provides scalability, resiliency and high availability in the OpenShift Container Platform cluster. It is recommended to install Kafka using the AMQ Streams operator for large scale deployments. 28.2.3. Network Observability Operator The Network Observability Operator provides the Flow Collector API custom resource definition. A Flow Collector instance is created during installation and enables configuration of network flow collection. The Flow Collector instance deploys pods and services that form a monitoring pipeline where network flows are then collected and enriched with the Kubernetes metadata before storing in Loki. The eBPF agent, which is deployed as a daemonset object, creates the network flows. 28.2.4. OpenShift Container Platform console integration OpenShift Container Platform console integration offers overview, topology view and traffic flow tables. 28.2.4.1. Network Observability metrics The OpenShift Container Platform console offers the Overview tab which displays the overall aggregated metrics of the network traffic flow on the cluster. The information can be displayed by node, namespace, owner, pod, and service. Filters and display options can further refine the metrics. In Observe Dashboards , the Netobserv dashboard provides a quick overview of the network flows in your OpenShift Container Platform cluster. 
You can view distillations of the network traffic metrics in the following categories: Top flow rates per source and destination namespaces (1-min rates) Top byte rates emitted per source and destination nodes (1-min rates) Top byte rates received per source and destination nodes (1-min rates) Top byte rates emitted per source and destination workloads (1-min rates) Top byte rates received per source and destination workloads (1-min rates) Top packet rates emitted per source and destination workloads (1-min rates) Top packet rates received per source and destination workloads (1-min rates) You can configure the FlowCollector spec.processor.metrics to add or remove metrics by changing the ignoreTags list. For more information about available tags, see the Flow Collector API Reference . Also in Observe Dashboards , the Netobserv/Health dashboard provides metrics about the health of the Operator. 28.2.4.2. Network Observability topology views The OpenShift Container Platform console offers the Topology tab which displays a graphical representation of the network flows and the amount of traffic. The topology view represents traffic between the OpenShift Container Platform components as a network graph. You can refine the graph by using the filters and display options. You can access the information for node, namespace, owner, pod, and service. 28.2.4.3. Traffic flow tables The traffic flow table view provides a view for raw flows, non-aggregated filtering options, and configurable columns. The OpenShift Container Platform console offers the Traffic flows tab which displays the data of the network flows and the amount of traffic. 28.3. Installing the Network Observability Operator Installing Loki is a prerequisite for using the Network Observability Operator. It is recommended to install Loki using the Loki Operator; therefore, these steps are documented below prior to the Network Observability Operator installation. The Loki Operator integrates a gateway that implements multi-tenancy and authentication with Loki for data flow storage. The LokiStack resource manages Loki , which is a scalable, highly-available, multi-tenant log aggregation system, and a web proxy with OpenShift Container Platform authentication. The LokiStack proxy uses OpenShift Container Platform authentication to enforce multi-tenancy and facilitate the saving and indexing of data in Loki log stores. Note The Loki Operator can also be used for Logging with the LokiStack . The Network Observability Operator requires a dedicated LokiStack separate from Logging. 28.3.1. Installing the Loki Operator It is recommended to install Loki Operator version 5.7. This version provides the ability to create a LokiStack instance using the openshift-network tenant configuration mode. It also provides fully automatic, in-cluster authentication and authorization support for Network Observability. Prerequisites Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation) OpenShift Container Platform 4.10+. Linux Kernel 4.18+. There are several ways you can install Loki. One way you can install the Loki Operator is by using the OpenShift Container Platform web console Operator Hub. Procedure Install the Loki Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Loki Operator from the list of available Operators, and click Install . Under Installation Mode , select All namespaces on the cluster . Verify that you installed the Loki Operator.
Visit the Operators Installed Operators page and look for Loki Operator . Verify that Loki Operator is listed with Status as Succeeded in all the projects. Create a Secret YAML file. You can create this secret in the web console or CLI. Using the web console, navigate to the Project All Projects dropdown and select Create Project . Name the project netobserv and click Create . Navigate to the Import icon , + , in the top right corner. Drop your YAML file into the editor. It is important to create this YAML file in the netobserv namespace that uses the access_key_id and access_key_secret to specify your credentials. Once you create the secret, you should see it listed under Workloads Secrets in the web console. The following shows an example secret YAML file: Important To uninstall Loki, refer to the uninstallation process that corresponds with the method you used to install Loki. You might have remaining ClusterRoles and ClusterRoleBindings , data stored in object store, and persistent volume that must be removed. 28.3.1.1. Create a LokiStack custom resource It is recommended to deploy the LokiStack in the same namespace referenced by the FlowCollector specification, spec.namespace . You can use the web console or CLI to create a namespace, or new project. Procedure Navigate to Operators Installed Operators , viewing All projects from the Project dropdown. Look for Loki Operator . In the details, under Provided APIs , select LokiStack . Click Create LokiStack . Ensure the following fields are specified in either Form View or YAML view : apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: size: 1x.small storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: loki-s3 type: s3 storageClassName: gp3 1 tenants: mode: openshift-network 1 Use a storage class name that is available on the cluster for ReadWriteOnce access mode. You can use oc get storageclasses to see what is available on your cluster. Important You must not reuse the same LokiStack that is used for cluster logging. Click Create . 28.3.1.1.1. Deployment Sizing Sizing for Loki follows the format of N<x>. <size> where the value <N> is the number of instances and <size> specifies performance capabilities. Note 1x.extra-small is for demo purposes only, and is not supported. Table 28.1. Loki Sizing 1x.extra-small 1x.small 1x.medium Data transfer Demo use only. 500GB/day 2TB/day Queries per second (QPS) Demo use only. 25-50 QPS at 200ms 25-75 QPS at 200ms Replication factor None 2 3 Total CPU requests 5 vCPUs 36 vCPUs 54 vCPUs Total Memory requests 7.5Gi 63Gi 139Gi Total Disk requests 150Gi 300Gi 450Gi 28.3.1.2. LokiStack ingestion limits and health alerts The LokiStack instance comes with default settings according to the configured size. It is possible to override some of these settings, such as the ingestion and query limits. You might want to update them if you get Loki errors showing up in the Console plugin, or in flowlogs-pipeline logs. An automatic alert in the web console notifies you when these limits are reached. Here is an example of configured limits: spec: limits: global: ingestion: ingestionBurstSize: 40 ingestionRate: 20 maxGlobalStreamsPerTenant: 25000 queries: maxChunksPerQuery: 2000000 maxEntriesLimitPerQuery: 10000 maxQuerySeries: 3000 For more information about these settings, see the LokiStack API reference . 28.3.2. Configure authorization and multi-tenancy Define ClusterRole and ClusterRoleBinding . 
The netobserv-reader ClusterRole enables multi-tenancy and allows individual user access, or group access, to the flows stored in Loki. You can create a YAML file to define these roles. Procedure Using the web console, click the Import icon, + . Drop your YAML file into the editor and click Create : Example ClusterRole reader yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: netobserv-reader 1 rules: - apiGroups: - 'loki.grafana.com' resources: - network resourceNames: - logs verbs: - 'get' 1 This role can be used for multi-tenancy. Example ClusterRole writer yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: netobserv-writer rules: - apiGroups: - 'loki.grafana.com' resources: - network resourceNames: - logs verbs: - 'create' Example ClusterRoleBinding yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: netobserv-writer-flp roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: netobserv-writer subjects: - kind: ServiceAccount name: flowlogs-pipeline 1 namespace: netobserv - kind: ServiceAccount name: flowlogs-pipeline-transformer namespace: netobserv 1 The flowlogs-pipeline writes to Loki. If you are using Kafka, this value is flowlogs-pipeline-transformer . 28.3.3. Enable multi-tenancy in Network Observability Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki. Access is enabled for project admins. Project admins who have limited access to some namespaces can access flows for only those namespaces. Prerequisite You have installed Loki Operator version 5.7 The FlowCollector spec.loki.authToken configuration must be set to FORWARD . You must be logged in as a project administrator Procedure Authorize reading permission to user1 by running the following command: USD oc adm policy add-cluster-role-to-user netobserv-reader user1 Now, the data is restricted to only allowed user namespaces. For example, a user that has access to a single namespace can see all the flows internal to this namespace, as well as flows going from and to this namespace. Project admins have access to the Administrator perspective in the OpenShift Container Platform console to access the Network Flows Traffic page. 28.3.4. Installing Kafka (optional) The Kafka Operator is supported for large scale environments. You can install the Kafka Operator as Red Hat AMQ Streams from the Operator Hub, just as the Loki Operator and Network Observability Operator were installed. Note To uninstall Kafka, refer to the uninstallation process that corresponds with the method you used to install. 28.3.5. Installing the Network Observability Operator You can install the Network Observability Operator using the OpenShift Container Platform web console Operator Hub. When you install the Operator, it provides the FlowCollector custom resource definition (CRD). You can set specifications in the web console when you create the FlowCollector . Prerequisites Installed Loki. It is recommended to install Loki using the Loki Operator version 5.7 . One of the following supported architectures is required: amd64 , ppc64le , arm64 , or s390x . Any CPU supported by Red Hat Enterprise Linux (RHEL) 9 Note This documentation assumes that your LokiStack instance name is loki . Using a different name requires additional configuration. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . 
Choose Network Observability Operator from the list of available Operators in the OperatorHub , and click Install . Select the checkbox Enable Operator recommended cluster monitoring on this Namespace . Navigate to Operators Installed Operators . Under Provided APIs for Network Observability, select the Flow Collector link. Navigate to the Flow Collector tab, and click Create FlowCollector . Make the following selections in the form view: spec.agent.ebpf.Sampling : Specify a sampling size for flows. Lower sampling sizes will have higher impact on resource utilization. For more information, see the FlowCollector API reference, under spec.agent.ebpf. spec.deploymentModel : If you are using Kafka, verify Kafka is selected. spec.exporters : If you are using Kafka, you can optionally send network flows to Kafka, so that they can be consumed by any processor or storage that supports Kafka input, such as Splunk, Elasticsearch, or Fluentd. To do this, set the following specifications: Set the type to KAFKA . Set the address as kafka-cluster-kafka-bootstrap.netobserv . Set the topic as netobserv-flows-export . The Operator exports all flows to the configured Kafka topic. Set the following tls specifications: certFile : service-ca.crt , name : kafka-gateway-ca-bundle , and type : configmap . You can also configure this option at a later time by directly editing the YAML. For more information, see Export enriched network flow data . loki.url : Since authentication is specified separately, this URL needs to be updated to https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network . The first part of the URL, "loki", should match the name of your LokiStack. loki.statusUrl : Set this to https://loki-query-frontend-http.netobserv.svc:3100/ . The first part of the URL, "loki", should match the name of your LokiStack. loki.authToken : Select the FORWARD value. tls.enable : Verify that the box is checked so it is enabled. statusTls : The enable value is false by default. For the first part of the certificate reference names: loki-gateway-ca-bundle , loki-ca-bundle , and loki-query-frontend-http , loki , should match the name of your LokiStack . Click Create . Verification To confirm this was successful, when you navigate to Observe you should see Network Traffic listed in the options. In the absence of Application Traffic within the OpenShift Container Platform cluster, default filters might show that there are "No results", which results in no visual flow. Beside the filter selections, select Clear all filters to see the flow. Important If you installed Loki using the Loki Operator, it is advised not to use querierUrl , as it can break the console access to Loki. If you installed Loki using another type of Loki installation, this does not apply. Additional resources For more information about Flow Collector specifications, see the Flow Collector API Reference and the Flow Collector sample resource . For more information about exporting flow data to Kafka for third party processing consumption, see Export enriched network flow data . 28.3.6. Uninstalling the Network Observability Operator You can uninstall the Network Observability Operator using the OpenShift Container Platform web console Operator Hub, working in the Operators Installed Operators area. Procedure Remove the FlowCollector custom resource. Click Flow Collector , which is to the Network Observability Operator in the Provided APIs column. Click the options menu for the cluster and select Delete FlowCollector . 
Uninstall the Network Observability Operator. Navigate back to the Operators Installed Operators area. Click the options menu next to the Network Observability Operator and select Uninstall Operator . Navigate to Home Projects and select openshift-netobserv-operator . Navigate to Actions and select Delete Project . Remove the FlowCollector custom resource definition (CRD). Navigate to Administration CustomResourceDefinitions . Look for FlowCollector and click the options menu . Select Delete CustomResourceDefinition . Important The Loki Operator and Kafka remain if they were installed and must be removed separately. Additionally, you might have remaining data stored in an object store, and a persistent volume that must be removed. 28.4. Network Observability Operator in OpenShift Container Platform Network Observability is an OpenShift operator that deploys a monitoring pipeline to collect and enrich network traffic flows that are produced by the Network Observability eBPF agent. 28.4.1. Viewing statuses The Network Observability Operator provides the Flow Collector API. When a Flow Collector resource is created, it deploys pods and services to create and store network flows in the Loki log store, as well as to display dashboards, metrics, and flows in the OpenShift Container Platform web console. Procedure Run the following command to view the state of FlowCollector : USD oc get flowcollector/cluster Example output Check the status of pods running in the netobserv namespace by entering the following command: USD oc get pods -n netobserv Example output flowlogs-pipeline pods collect flows, enrich the collected flows, and then send the flows to the Loki storage. netobserv-plugin pods create a visualization plugin for the OpenShift Container Platform Console. Check the status of pods running in the namespace netobserv-privileged by entering the following command: USD oc get pods -n netobserv-privileged Example output netobserv-ebpf-agent pods monitor network interfaces of the nodes to get flows and send them to flowlogs-pipeline pods. If you are using the Loki Operator, check the status of pods running in the openshift-operators-redhat namespace by entering the following command: USD oc get pods -n openshift-operators-redhat Example output 28.4.2. Viewing Network Observability Operator status and configuration You can inspect the status and view the details of the FlowCollector using the oc describe command. Procedure Run the following command to view the status and configuration of the Network Observability Operator: USD oc describe flowcollector/cluster 28.5. Configuring the Network Observability Operator You can update the Flow Collector API resource to configure the Network Observability Operator and its managed components. The Flow Collector is explicitly created during installation. Since this resource operates cluster-wide, only a single FlowCollector is allowed, and it has to be named cluster . 28.5.1. View the FlowCollector resource You can view and edit YAML directly in the OpenShift Container Platform web console. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. There, you can modify the FlowCollector resource to configure the Network Observability operator.
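You can also view or edit the same resource from the CLI instead of the web console YAML view. The following is a minimal sketch using standard oc commands; it assumes the default resource name cluster that is used throughout this documentation:

oc get flowcollector cluster -o yaml
oc edit flowcollector cluster

Changes saved through oc edit modify the same FlowCollector resource as the web console YAML tab, so the Operator reconciles them in the same way.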
The following example shows a sample FlowCollector resource for OpenShift Container Platform Network Observability operator: Sample FlowCollector resource apiVersion: flows.netobserv.io/v1beta1 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: DIRECT agent: type: EBPF 1 ebpf: sampling: 50 2 logLevel: info privileged: false resources: requests: memory: 50Mi cpu: 100m limits: memory: 800Mi processor: logLevel: info resources: requests: memory: 100Mi cpu: 100m limits: memory: 800Mi conversationEndTimeout: 10s logTypes: FLOWS 3 conversationHeartbeatInterval: 30s loki: 4 url: 'https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network' statusUrl: 'https://loki-query-frontend-http.netobserv.svc:3100/' authToken: FORWARD tls: enable: true caCert: type: configmap name: loki-gateway-ca-bundle certFile: service-ca.crt consolePlugin: register: true logLevel: info portNaming: enable: true portNames: "3100": loki quickFilters: 5 - name: Applications filter: src_namespace!: 'openshift-,netobserv' dst_namespace!: 'openshift-,netobserv' default: true - name: Infrastructure filter: src_namespace: 'openshift-,netobserv' dst_namespace: 'openshift-,netobserv' - name: Pods network filter: src_kind: 'Pod' dst_kind: 'Pod' default: true - name: Services network filter: dst_kind: 'Service' 1 The Agent specification, spec.agent.type , must be EBPF . eBPF is the only OpenShift Container Platform supported option. 2 You can set the Sampling specification, spec.agent.ebpf.sampling , to manage resources. Lower sampling values might consume a large amount of computational, memory and storage resources. You can mitigate this by specifying a sampling ratio value. A value of 100 means 1 flow every 100 is sampled. A value of 0 or 1 means all flows are captured. The lower the value, the increase in returned flows and the accuracy of derived metrics. By default, eBPF sampling is set to a value of 50, so 1 flow every 50 is sampled. Note that more sampled flows also means more storage needed. It is recommend to start with default values and refine empirically, to determine which setting your cluster can manage. 3 The optional specifications spec.processor.logTypes , spec.processor.conversationHeartbeatInterval , and spec.processor.conversationEndTimeout can be set to enable conversation tracking. When enabled, conversation events are queryable in the web console. The values for spec.processor.logTypes are as follows: FLOWS CONVERSATIONS , ENDED_CONVERSATIONS , or ALL . Storage requirements are highest for ALL and lowest for ENDED_CONVERSATIONS . 4 The Loki specification, spec.loki , specifies the Loki client. The default values match the Loki install paths mentioned in the Installing the Loki Operator section. If you used another installation method for Loki, specify the appropriate client information for your install. 5 The spec.quickFilters specification defines filters that show up in the web console. The Application filter keys, src_namespace and dst_namespace , are negated ( ! ), so the Application filter shows all traffic that does not originate from, or have a destination to, any openshift- or netobserv namespaces. For more information, see Configuring quick filters below. Additional resources For more information about conversation tracking, see Working with conversations . 28.5.2. Configuring the Flow Collector resource with Kafka You can configure the FlowCollector resource to use Kafka. 
A Kafka instance needs to be running, and a Kafka topic dedicated to OpenShift Container Platform Network Observability must be created in that instance. For more information, refer to your Kafka documentation, such as Kafka documentation with AMQ Streams . The following example shows how to modify the FlowCollector resource for OpenShift Container Platform Network Observability operator to use Kafka: Sample Kafka configuration in FlowCollector resource deploymentModel: KAFKA 1 kafka: address: "kafka-cluster-kafka-bootstrap.netobserv" 2 topic: network-flows 3 tls: enable: false 4 1 Set spec.deploymentModel to KAFKA instead of DIRECT to enable the Kafka deployment model. 2 spec.kafka.address refers to the Kafka bootstrap server address. You can specify a port if needed, for instance kafka-cluster-kafka-bootstrap.netobserv:9093 for using TLS on port 9093. 3 spec.kafka.topic should match the name of a topic created in Kafka. 4 spec.kafka.tls can be used to encrypt all communications to and from Kafka with TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the flowlogs-pipeline processor component is deployed (default: netobserv ) and where the eBPF agents are deployed (default: netobserv-privileged ). It must be referenced with spec.kafka.tls.caCert . When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with spec.kafka.tls.userCert . 28.5.3. Export enriched network flow data You can send network flows to Kafka, so that they can be consumed by any processor or storage that supports Kafka input, such as Splunk, Elasticsearch, or Fluentd. Prerequisites Installed Kafka Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster and then select the YAML tab. Edit the FlowCollector to configure spec.exporters as follows: apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: exporters: - type: KAFKA kafka: address: "kafka-cluster-kafka-bootstrap.netobserv" topic: netobserv-flows-export 1 tls: enable: false 2 1 The Network Observability Operator exports all flows to the configured Kafka topic. 2 You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the flowlogs-pipeline processor component is deployed (default: netobserv). It must be referenced with spec.exporters.tls.caCert . When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with spec.exporters.tls.userCert . After configuration, network flows data can be sent to an available output in a JSON format. For more information, see Network flows format reference Additional resources For more information about specifying flow format, see Network flows format reference . 28.5.4. 
Updating the Flow Collector resource As an alternative to editing YAML in the OpenShift Container Platform web console, you can configure specifications, such as eBPF sampling, by patching the flowcollector custom resource (CR): Procedure Run the following command to patch the flowcollector CR and update the spec.agent.ebpf.sampling value: USD oc patch flowcollector cluster --type=json -p "[{"op": "replace", "path": "/spec/agent/ebpf/sampling", "value": <new value>}] -n netobserv" 28.5.5. Configuring quick filters You can modify the filters in the FlowCollector resource. Exact matches are possible using double-quotes around values. Otherwise, partial matches are used for textual values. The bang (!) character, placed at the end of a key, means negation. See the sample FlowCollector resource for more context about modifying the YAML. Note The filter matching types "all of" or "any of" is a UI setting that the users can modify from the query options. It is not part of this resource configuration. Here is a list of all available filter keys: Table 28.2. Filter keys Universal* Source Destination Description namespace src_namespace dst_namespace Filter traffic related to a specific namespace. name src_name dst_name Filter traffic related to a given leaf resource name, such as a specific pod, service, or node (for host-network traffic). kind src_kind dst_kind Filter traffic related to a given resource kind. The resource kinds include the leaf resource (Pod, Service or Node), or the owner resource (Deployment and StatefulSet). owner_name src_owner_name dst_owner_name Filter traffic related to a given resource owner; that is, a workload or a set of pods. For example, it can be a Deployment name, a StatefulSet name, etc. resource src_resource dst_resource Filter traffic related to a specific resource that is denoted by its canonical name, that identifies it uniquely. The canonical notation is kind.namespace.name for namespaced kinds, or node.name for nodes. For example, Deployment.my-namespace.my-web-server . address src_address dst_address Filter traffic related to an IP address. IPv4 and IPv6 are supported. CIDR ranges are also supported. mac src_mac dst_mac Filter traffic related to a MAC address. port src_port dst_port Filter traffic related to a specific port. host_address src_host_address dst_host_address Filter traffic related to the host IP address where the pods are running. protocol N/A N/A Filter traffic related to a protocol, such as TCP or UDP. Universal keys filter for any of source or destination. For example, filtering name: 'my-pod' means all traffic from my-pod and all traffic to my-pod , regardless of the matching type used, whether Match all or Match any . 28.5.6. Resource management and performance considerations The amount of resources required by Network Observability depends on the size of your cluster and your requirements for the cluster to ingest and store observability data. To manage resources and set performance criteria for your cluster, consider configuring the following settings. Configuring these settings might meet your optimal setup and observability needs. The following settings can help you manage resources and performance from the outset: eBPF Sampling You can set the Sampling specification, spec.agent.ebpf.sampling , to manage resources. Smaller sampling values might consume a large amount of computational, memory and storage resources. You can mitigate this by specifying a sampling ratio value. A value of 100 means 1 flow every 100 is sampled. 
A value of 0 or 1 means all flows are captured. Smaller values result in an increase in returned flows and the accuracy of derived metrics. By default, eBPF sampling is set to a value of 50, so 1 flow every 50 is sampled. Note that more sampled flows also means more storage needed. Consider starting with the default values and refine empirically, in order to determine which setting your cluster can manage. Restricting or excluding interfaces Reduce the overall observed traffic by setting the values for spec.agent.ebpf.interfaces and spec.agent.ebpf.excludeInterfaces . By default, the agent fetches all the interfaces in the system, except the ones listed in excludeInterfaces and lo (local interface). Note that the interface names might vary according to the Container Network Interface (CNI) used. The following settings can be used to fine-tune performance after the Network Observability has been running for a while: Resource requirements and limits Adapt the resource requirements and limits to the load and memory usage you expect on your cluster by using the spec.agent.ebpf.resources and spec.processor.resources specifications. The default limits of 800MB might be sufficient for most medium-sized clusters. Cache max flows timeout Control how often flows are reported by the agents by using the eBPF agent's spec.agent.ebpf.cacheMaxFlows and spec.agent.ebpf.cacheActiveTimeout specifications. A larger value results in less traffic being generated by the agents, which correlates with a lower CPU load. However, a larger value leads to a slightly higher memory consumption, and might generate more latency in the flow collection. 28.5.6.1. Resource considerations The following table outlines examples of resource considerations for clusters with certain workload sizes. Important The examples outlined in the table demonstrate scenarios that are tailored to specific workloads. Consider each example only as a baseline from which adjustments can be made to accommodate your workload needs. Table 28.3. Resource recommendations Extra small (10 nodes) Small (25 nodes) Medium (65 nodes) [2] Large (120 nodes) [2] Worker Node vCPU and memory 4 vCPUs| 16GiB mem [1] 16 vCPUs| 64GiB mem [1] 16 vCPUs| 64GiB mem [1] 16 vCPUs| 64GiB Mem [1] LokiStack size 1x.extra-small 1x.small 1x.small 1x.medium Network Observability controller memory limit 400Mi (default) 400Mi (default) 400Mi (default) 800Mi eBPF sampling rate 50 (default) 50 (default) 50 (default) 50 (default) eBPF memory limit 800Mi (default) 800Mi (default) 2000Mi 800Mi (default) FLP memory limit 800Mi (default) 800Mi (default) 800Mi (default) 800Mi (default) FLP Kafka partitions N/A 48 48 48 Kafka consumer replicas N/A 24 24 24 Kafka brokers N/A 3 (default) 3 (default) 3 (default) Tested with AWS M6i instances. In addition to this worker and its controller, 3 infra nodes (size M6i.12xlarge ) and 1 workload node (size M6i.8xlarge ) were tested. 28.6. Network Policy As a user with the admin role, you can create a network policy for the netobserv namespace. 28.6.1. Creating a network policy for Network Observability You might need to create a network policy to secure ingress traffic to the netobserv namespace. In the web console, you can create a network policy using the form view. Procedure Navigate to Networking NetworkPolicies . Select the netobserv project from the Project dropdown menu. Name the policy. For this example, the policy name is allow-ingress . Click Add ingress rule three times to create three ingress rules. 
Specify the following in the form: Make the following specifications for the first Ingress rule : From the Add allowed source dropdown menu, select Allow pods from the same namespace . Make the following specifications for the second Ingress rule : From the Add allowed source dropdown menu, select Allow pods from inside the cluster . Click + Add namespace selector . Add the label, kubernetes.io/metadata.name , and the selector, openshift-console . Make the following specifications for the third Ingress rule : From the Add allowed source dropdown menu, select Allow pods from inside the cluster . Click + Add namespace selector . Add the label, kubernetes.io/metadata.name , and the selector, openshift-monitoring . Verification Navigate to Observe Network Traffic . View the Traffic Flows tab, or any tab, to verify that the data is displayed. Navigate to Observe Dashboards . In the NetObserv/Health selection, verify that the flows are being ingested and sent to Loki, which is represented in the first graph. 28.6.2. Example network policy The following annotates an example NetworkPolicy object for the netobserv namespace: Sample network policy kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-ingress namespace: netobserv spec: podSelector: {} 1 ingress: - from: - podSelector: {} 2 namespaceSelector: 3 matchLabels: kubernetes.io/metadata.name: openshift-console - podSelector: {} namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring policyTypes: - Ingress status: {} 1 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. In this documentation, it would be the project in which the Network Observability Operator is installed, which is the netobserv project. 2 A selector that matches the pods from which the policy object allows ingress traffic. The default is that the selector matches pods in the same namespace as the NetworkPolicy . 3 When the namespaceSelector is specified, the selector matches pods in the specified namespace. Additional resources Creating a network policy using the CLI 28.7. Observing the network traffic As an administrator, you can observe the network traffic in the OpenShift Container Platform console for detailed troubleshooting and analysis. This feature helps you get insights from different graphical representations of traffic flow. There are several available views to observe the network traffic. 28.7.1. Observing the network traffic from the Overview view The Overview view displays the overall aggregated metrics of the network traffic flow on the cluster. As an administrator, you can monitor the statistics with the available display options. 28.7.1.1. Working with the Overview view As an administrator, you can navigate to the Overview view to see the graphical representation of the flow rate statistics. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Overview tab. You can configure the scope of each flow rate data by clicking the menu icon. 28.7.1.2. Configuring advanced options for the Overview view You can customize the graphical view by using advanced options. To access the advanced options, click Show advanced options .You can configure the details in the graph by using the Display options drop-down menu. The options available are: Metric type : The metrics to be shown in Bytes or Packets . The default value is Bytes . 
Scope : To select the detail of components between which the network traffic flows. You can set the scope to Node , Namespace , Owner , or Resource . Owner is an aggregation of resources. Resource can be a pod, service, node, in case of host-network traffic, or an unknown IP address. The default value is Namespace . Truncate labels : Select the required width of the label from the drop-down list. The default value is M . 28.7.1.2.1. Managing panels You can select the required statistics to be displayed, and reorder them. To manage panels, click Manage panels . 28.7.2. Observing the network traffic from the Traffic flows view The Traffic flows view displays the data of the network flows and the amount of traffic in a table. As an administrator, you can monitor the amount of traffic across the application by using the traffic flow table. 28.7.2.1. Working with the Traffic flows view As an administrator, you can navigate to the Traffic flows table to see network flow information. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Traffic flows tab. You can click on each row to get the corresponding flow information. 28.7.2.2. Configuring advanced options for the Traffic flows view You can customize and export the view by using Show advanced options . You can set the row size by using the Display options drop-down menu. The default value is Normal . 28.7.2.2.1. Managing columns You can select the required columns to be displayed, and reorder them. To manage columns, click Manage columns . 28.7.2.2.2. Exporting the traffic flow data You can export data from the Traffic flows view. Procedure Click Export data . In the pop-up window, you can select the Export all data checkbox to export all the data, and clear the checkbox to select the required fields to be exported. Click Export . 28.7.2.3. Working with conversation tracking As an administrator, you can group network flows that are part of the same conversation. A conversation is defined as a grouping of peers that are identified by their IP addresses, ports, and protocols, resulting in a unique Conversation Id . You can query conversation events in the web console. These events are represented in the web console as follows: Conversation start : This event happens when a connection is starting or a TCP flag is intercepted. Conversation tick : This event happens at each specified interval defined in the FlowCollector spec.processor.conversationHeartbeatInterval parameter while the connection is active. Conversation end : This event happens when the FlowCollector spec.processor.conversationEndTimeout parameter is reached or the TCP flag is intercepted. Flow : This is the network traffic flow that occurs within the specified interval. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource so that spec.processor.logTypes , conversationEndTimeout , and conversationHeartbeatInterval parameters are set according to your observation needs. A sample configuration is as follows: Configure FlowCollector for conversation tracking apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: processor: conversationEndTimeout: 10s 1 logTypes: FLOWS 2 conversationHeartbeatInterval: 30s 3 1 The Conversation end event represents the point when the conversationEndTimeout is reached or the TCP flag is intercepted.
2 When logTypes is set to FLOWS , only the Flow event is exported. If you set the value to ALL , both conversation and flow events are exported and visible in the Network Traffic page. To focus only on conversation events, you can specify CONVERSATIONS , which exports the Conversation start , Conversation tick , and Conversation end events; or ENDED_CONVERSATIONS , which exports only the Conversation end events. Storage requirements are highest for ALL and lowest for ENDED_CONVERSATIONS . 3 The Conversation tick event represents each specified interval defined in the FlowCollector conversationHeartbeatInterval parameter while the network connection is active. Note If you update the logType option, the flows from the previous selection do not clear from the console plugin. For example, if you initially set logType to CONVERSATIONS for a span of time until 10 AM and then move to ENDED_CONVERSATIONS , the console plugin shows all conversation events before 10 AM and only ended conversations after 10 AM. Refresh the Network Traffic page on the Traffic flows tab. Notice there are two new columns, Event/Type and Conversation Id . All the Event/Type fields are Flow when Flow is the selected query option. Select Query Options and choose the Log Type , Conversation . Now the Event/Type shows all of the desired conversation events. You can filter on a specific conversation ID or switch between the Conversation and Flow log type options from the side panel. 28.7.2.3.1. Using the histogram You can click Show histogram to display a toolbar view for visualizing the history of flows as a bar chart. The histogram shows the number of logs over time. You can select a part of the histogram to filter the network flow data in the table that follows the toolbar. 28.7.3. Observing the network traffic from the Topology view The Topology view provides a graphical representation of the network flows and the amount of traffic. As an administrator, you can monitor the traffic data across the application by using the Topology view. 28.7.3.1. Working with the Topology view As an administrator, you can navigate to the Topology view to see the details and metrics of the component. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Topology tab. You can click each component in the Topology to view the details and metrics of the component. 28.7.3.2. Configuring the advanced options for the Topology view You can customize and export the view by using Show advanced options . The advanced options view has the following features: Find in view : To search the required components in the view. Display options : To configure the following options: Layout : To select the layout of the graphical representation. The default value is ColaNoForce . Scope : To select the scope of components between which the network traffic flows. The default value is Namespace . Groups : To enhance the understanding of ownership by grouping the components. The default value is None . Collapse groups : To expand or collapse the groups. The groups are expanded by default. This option is disabled if Groups has value None . Show : To select the details that need to be displayed. All the options are checked by default. The options available are: Edges , Edges label , and Badges . Truncate labels : To select the required width of the label from the drop-down list. The default value is M . 28.7.3.2.1. Exporting the topology view To export the view, click Export topology view . The view is downloaded in PNG format. 28.7.4.
Filtering the network traffic By default, the Network Traffic page displays the traffic flow data in the cluster based on the default filters configured in the FlowCollector instance. You can use the filter options to observe the required data by changing the preset filter. Query Options You can use Query Options to optimize the search results, as listed below: Log Type : The available options Conversation and Flows provide the ability to query flows by log type, such as flow log, new conversation, completed conversation, and a heartbeat, which is a periodic record with updates for long conversations. A conversation is an aggregation of flows between the same peers. Reporter Node : Every flow can be reported from both source and destination nodes. For cluster ingress, the flow is reported from the destination node and for cluster egress, the flow is reported from the source node. You can select either Source or Destination . The option Both is disabled for the Overview and Topology view. The default selected value is Destination . Match filters : You can determine the relation between different filter parameters selected in the advanced filter. The available options are Match all and Match any . Match all provides results that match all the values, and Match any provides results that match any of the values entered. The default value is Match all . Limit : The data limit for internal backend queries. Depending upon the matching and the filter settings, the number of traffic flow records displayed is kept within the specified limit. Quick filters The default values in the Quick filters drop-down menu are defined in the FlowCollector configuration. You can modify the options from the console. Advanced filters You can set the advanced filters by providing the parameter to be filtered and its corresponding text value. The section Common in the parameter drop-down list filters the results that match either Source or Destination . To enable or disable the applied filter, you can click on the applied filter listed below the filter options. Note To understand the rules of specifying the text value, click Learn More . You can click Reset default filter to remove the existing filters, and apply the filter defined in the FlowCollector configuration. Alternatively, you can access the traffic flow data in the Network Traffic tab of the Namespaces , Services , Routes , Nodes , and Workloads pages, which provide the filtered data of the corresponding aggregations. Additional resources For more information about configuring quick filters in the FlowCollector , see Configuring Quick Filters and the Flow Collector sample resource . 28.8. Monitoring the Network Observability Operator You can use the web console to monitor alerts related to the health of the Network Observability Operator. 28.8.1. Viewing health information You can access metrics about health and resource usage of the Network Observability Operator from the Dashboards page in the web console. A health alert banner that directs you to the dashboard can appear on the Network Traffic and Home pages in the event that an alert is triggered. Alerts are generated in the following cases: The NetObservLokiError alert occurs if the flowlogs-pipeline workload is dropping flows because of Loki errors, such as if the Loki ingestion rate limit has been reached. The NetObservNoFlows alert occurs if no flows are ingested for a certain amount of time. Prerequisites You have the Network Observability Operator installed.
You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. Procedure From the Administrator perspective in the web console, navigate to Observe Dashboards . From the Dashboards dropdown, select Netobserv/Health . Metrics about the health of the Operator are displayed on the page. 28.8.1.1. Disabling health alerts You can opt out of health alerting by editing the FlowCollector resource: In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. Add spec.processor.metrics.disableAlerts to disable health alerts, as in the following YAML sample: 1 You can specify one or a list with both types of alerts to disable. 28.9. FlowCollector configuration parameters FlowCollector is the Schema for the network flows collection API, which pilots and configures the underlying deployments. 28.9.1. FlowCollector API specifications Description FlowCollector is the schema for the network flows collection API, which pilots and configures the underlying deployments. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and might reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers might infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object FlowCollectorSpec defines the desired state of the FlowCollector resource. *: the mention of "unsupported" , or "deprecated" for a feature throughout this document means that this feature is not officially supported by Red Hat. It might have been, for instance, contributed by the community and accepted without a formal agreement for maintenance. The product maintainers might provide some support for these features as a best effort only. 28.9.1.1. .metadata Description Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Type object 28.9.1.2. .spec Description FlowCollectorSpec defines the desired state of the FlowCollector resource. *: the mention of "unsupported" , or "deprecated" for a feature throughout this document means that this feature is not officially supported by Red Hat. It might have been, for instance, contributed by the community and accepted without a formal agreement for maintenance. The product maintainers might provide some support for these features as a best effort only. Type object Required agent deploymentModel Property Type Description agent object Agent configuration for flows extraction. consolePlugin object consolePlugin defines the settings related to the OpenShift Container Platform Console plugin, when available. deploymentModel string deploymentModel defines the desired type of deployment for flow processing. Possible values are: - DIRECT (default) to make the flow processor listening directly from the agents. 
- KAFKA to make flows sent to a Kafka pipeline before consumption by the processor. Kafka can provide better scalability, resiliency, and high availability (for more details, see https://www.redhat.com/en/topics/integration/what-is-apache-kafka ). exporters array exporters define additional optional exporters for custom consumption or storage. kafka object Kafka configuration, allowing to use Kafka as a broker as part of the flow collection pipeline. Available when the spec.deploymentModel is KAFKA . loki object Loki, the flow store, client settings. namespace string Namespace where NetObserv pods are deployed. If empty, the namespace of the operator is going to be used. processor object processor defines the settings of the component that receives the flows from the agent, enriches them, generates metrics, and forwards them to the Loki persistence layer and/or any available exporter. 28.9.1.3. .spec.agent Description Agent configuration for flows extraction. Type object Required type Property Type Description ebpf object ebpf describes the settings related to the eBPF-based flow reporter when spec.agent.type is set to EBPF . ipfix object ipfix - deprecated (*) - describes the settings related to the IPFIX-based flow reporter when spec.agent.type is set to IPFIX . type string type selects the flows tracing agent. Possible values are: - EBPF (default) to use NetObserv eBPF agent. - IPFIX - deprecated (*) - to use the legacy IPFIX collector. EBPF is recommended as it offers better performances and should work regardless of the CNI installed on the cluster. IPFIX works with OVN-Kubernetes CNI (other CNIs could work if they support exporting IPFIX, but they would require manual configuration). 28.9.1.4. .spec.agent.ebpf Description ebpf describes the settings related to the eBPF-based flow reporter when spec.agent.type is set to EBPF . Type object Property Type Description cacheActiveTimeout string cacheActiveTimeout is the max period during which the reporter will aggregate flows before sending. Increasing cacheMaxFlows and cacheActiveTimeout can decrease the network traffic overhead and the CPU load, however you can expect higher memory consumption and an increased latency in the flow collection. cacheMaxFlows integer cacheMaxFlows is the max number of flows in an aggregate; when reached, the reporter sends the flows. Increasing cacheMaxFlows and cacheActiveTimeout can decrease the network traffic overhead and the CPU load, however you can expect higher memory consumption and an increased latency in the flow collection. debug object debug allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed exclusively for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Users setting its values do it at their own risk. excludeInterfaces array (string) excludeInterfaces contains the interface names that will be excluded from flow tracing. An entry is enclosed by slashes, such as /br-/ , is matched as a regular expression. Otherwise it is matched as a case-sensitive string. imagePullPolicy string imagePullPolicy is the Kubernetes pull policy for the image defined above interfaces array (string) interfaces contains the interface names from where flows will be collected. If empty, the agent will fetch all the interfaces in the system, excepting the ones listed in ExcludeInterfaces. An entry is enclosed by slashes, such as /br-/ , is matched as a regular expression. Otherwise it is matched as a case-sensitive string. 
kafkaBatchSize integer kafkaBatchSize limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 10MB. logLevel string logLevel defines the log level for the NetObserv eBPF Agent privileged boolean Privileged mode for the eBPF Agent container. In general this setting can be ignored or set to false: in that case, the operator will set granular capabilities (BPF, PERFMON, NET_ADMIN, SYS_RESOURCE) to the container, to enable its correct operation. If for some reason these capabilities cannot be set, such as if an old kernel version not knowing CAP_BPF is in use, then you can turn on this mode for more global privileges. resources object resources are the compute resources required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ sampling integer Sampling rate of the flow reporter. 100 means one flow on 100 is sent. 0 or 1 means all flows are sampled. 28.9.1.5. .spec.agent.ebpf.debug Description debug allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed exclusively for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Users setting its values do it at their own risk. Type object Property Type Description env object (string) env allows passing custom environment variables to underlying components. Useful for passing some very concrete performance-tuning options, such as GOGC and GOMAXPROCS, that should not be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debug or support scenarios. 28.9.1.6. .spec.agent.ebpf.resources Description resources are the compute resources required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 28.9.1.7. .spec.agent.ipfix Description ipfix - deprecated (*) - describes the settings related to the IPFIX-based flow reporter when spec.agent.type is set to IPFIX . Type object Property Type Description cacheActiveTimeout string cacheActiveTimeout is the max period during which the reporter will aggregate flows before sending cacheMaxFlows integer cacheMaxFlows is the max number of flows in an aggregate; when reached, the reporter sends the flows clusterNetworkOperator object clusterNetworkOperator defines the settings related to the OpenShift Container Platform Cluster Network Operator, when available. forceSampleAll boolean forceSampleAll allows disabling sampling in the IPFIX-based flow reporter. It is not recommended to sample all the traffic with IPFIX, as it might generate cluster instability. If you REALLY want to do that, set this flag to true. Use at your own risk. When it is set to true, the value of sampling is ignored. ovnKubernetes object ovnKubernetes defines the settings of the OVN-Kubernetes CNI, when available. This configuration is used when using OVN's IPFIX exports, without OpenShift Container Platform. 
When using OpenShift Container Platform, refer to the clusterNetworkOperator property instead. sampling integer sampling is the sampling rate on the reporter. 100 means one flow on 100 is sent. To ensure cluster stability, it is not possible to set a value below 2. If you really want to sample every packet, which might impact the cluster stability, refer to forceSampleAll . Alternatively, you can use the eBPF Agent instead of IPFIX. 28.9.1.8. .spec.agent.ipfix.clusterNetworkOperator Description clusterNetworkOperator defines the settings related to the OpenShift Container Platform Cluster Network Operator, when available. Type object Property Type Description namespace string Namespace where the config map is going to be deployed. 28.9.1.9. .spec.agent.ipfix.ovnKubernetes Description ovnKubernetes defines the settings of the OVN-Kubernetes CNI, when available. This configuration is used when using OVN's IPFIX exports, without OpenShift Container Platform. When using OpenShift Container Platform, refer to the clusterNetworkOperator property instead. Type object Property Type Description containerName string containerName defines the name of the container to configure for IPFIX. daemonSetName string daemonSetName defines the name of the DaemonSet controlling the OVN-Kubernetes pods. namespace string Namespace where OVN-Kubernetes pods are deployed. 28.9.1.10. .spec.consolePlugin Description consolePlugin defines the settings related to the OpenShift Container Platform Console plugin, when available. Type object Property Type Description autoscaler object autoscaler spec of a horizontal pod autoscaler to set up for the plugin Deployment. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2). imagePullPolicy string imagePullPolicy is the Kubernetes pull policy for the image defined above logLevel string logLevel for the console plugin backend port integer port is the plugin service port. Do not use 9002, which is reserved for metrics. portNaming object portNaming defines the configuration of the port-to-service name translation quickFilters array quickFilters configures quick filter presets for the Console plugin register boolean register allows, when set to true, to automatically register the provided console plugin with the OpenShift Container Platform Console operator. When set to false, you can still register it manually by editing console.operator.openshift.io/cluster with the following command: oc patch console.operator.openshift.io cluster --type='json' -p '[{"op": "add", "path": "/spec/plugins/-", "value": "netobserv-plugin"}]' replicas integer replicas defines the number of replicas (pods) to start. resources object resources , in terms of compute resources, required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 28.9.1.11. .spec.consolePlugin.autoscaler Description autoscaler spec of a horizontal pod autoscaler to set up for the plugin Deployment. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2). Type object 28.9.1.12. .spec.consolePlugin.portNaming Description portNaming defines the configuration of the port-to-service name translation Type object Property Type Description enable boolean Enable the console plugin port-to-service name translation portNames object (string) portNames defines additional port names to use in the console, for example, portNames: {"3100": "loki"} . 28.9.1.13. 
.spec.consolePlugin.quickFilters Description quickFilters configures quick filter presets for the Console plugin Type array 28.9.1.14. .spec.consolePlugin.quickFilters[] Description QuickFilter defines preset configuration for Console's quick filters Type object Required filter name Property Type Description default boolean default defines whether this filter should be active by default or not filter object (string) filter is a set of keys and values to be set when this filter is selected. Each key can relate to a list of values using a coma-separated string, for example, filter: {"src_namespace": "namespace1,namespace2"} . name string Name of the filter, that will be displayed in Console 28.9.1.15. .spec.consolePlugin.resources Description resources , in terms of compute resources, required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 28.9.1.16. .spec.exporters Description exporters define additional optional exporters for custom consumption or storage. Type array 28.9.1.17. .spec.exporters[] Description FlowCollectorExporter defines an additional exporter to send enriched flows to. Type object Required type Property Type Description ipfix object IPFIX configuration, such as the IP address and port to send enriched IPFIX flows to. Unsupported (*) . kafka object Kafka configuration, such as the address and topic, to send enriched flows to. type string type selects the type of exporters. The available options are KAFKA and IPFIX . IPFIX is unsupported (*) . 28.9.1.18. .spec.exporters[].ipfix Description IPFIX configuration, such as the IP address and port to send enriched IPFIX flows to. Unsupported (*) . Type object Required targetHost targetPort Property Type Description targetHost string Address of the IPFIX external receiver targetPort integer Port for the IPFIX external receiver transport string Transport protocol ( TCP or UDP ) to be used for the IPFIX connection, defaults to TCP . 28.9.1.19. .spec.exporters[].kafka Description Kafka configuration, such as the address and topic, to send enriched flows to. Type object Required address topic Property Type Description address string Address of the Kafka server tls object TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093. Note that, when eBPF agents are used, the Kafka certificate needs to be copied in the agent namespace (by default it is netobserv-privileged ). topic string Kafka topic to use. It must exist, NetObserv will not create it. 28.9.1.20. .spec.exporters[].kafka.tls Description TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093. Note that, when eBPF agents are used, the Kafka certificate needs to be copied in the agent namespace (by default it is netobserv-privileged ). 
Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true, the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS) 28.9.1.21. .spec.exporters[].kafka.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates namespace string Namespace of the config map or secret containing certificates. If omitted, assumes the same namespace as where NetObserv is deployed. If the namespace is different, the config map or the secret will be copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret 28.9.1.22. .spec.exporters[].kafka.tls.userCert Description userCert defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS) Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates namespace string Namespace of the config map or secret containing certificates. If omitted, assumes the same namespace as where NetObserv is deployed. If the namespace is different, the config map or the secret will be copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret 28.9.1.23. .spec.kafka Description Kafka configuration, allowing to use Kafka as a broker as part of the flow collection pipeline. Available when the spec.deploymentModel is KAFKA . Type object Required address topic Property Type Description address string Address of the Kafka server tls object TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093. Note that, when eBPF agents are used, the Kafka certificate needs to be copied in the agent namespace (by default it is netobserv-privileged ). topic string Kafka topic to use. It must exist, NetObserv will not create it. 28.9.1.24. .spec.kafka.tls Description TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093. Note that, when eBPF agents are used, the Kafka certificate needs to be copied in the agent namespace (by default it is netobserv-privileged ). Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true, the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS) 28.9.1.25. 
.spec.kafka.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates namespace string Namespace of the config map or secret containing certificates. If omitted, assumes the same namespace as where NetObserv is deployed. If the namespace is different, the config map or the secret will be copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret 28.9.1.26. .spec.kafka.tls.userCert Description userCert defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS) Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates namespace string Namespace of the config map or secret containing certificates. If omitted, assumes the same namespace as where NetObserv is deployed. If the namespace is different, the config map or the secret will be copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret 28.9.1.27. .spec.loki Description Loki, the flow store, client settings. Type object Property Type Description authToken string authToken describes the way to get a token to authenticate to Loki. - DISABLED will not send any token with the request. - FORWARD will forward the user token for authorization. - HOST - deprecated (*) - will use the local pod service account to authenticate to Loki. When using the Loki Operator, this must be set to FORWARD . batchSize integer batchSize is the maximum batch size (in bytes) of logs to accumulate before sending. batchWait string batchWait is the maximum time to wait before sending a batch. maxBackoff string maxBackoff is the maximum backoff time for client connection between retries. maxRetries integer maxRetries is the maximum number of retries for client connections. minBackoff string minBackoff is the initial backoff time for client connection between retries. querierUrl string querierURL specifies the address of the Loki querier service, in case it is different from the Loki ingester URL. If empty, the URL value will be used (assuming that the Loki ingester and querier are in the same server). When using the Loki Operator, do not set it, since ingestion and queries use the Loki gateway. staticLabels object (string) staticLabels is a map of common labels to set on each flow. statusTls object TLS client configuration for Loki status URL. statusUrl string statusURL specifies the address of the Loki /ready , /metrics and /config endpoints, in case it is different from the Loki querier URL. If empty, the querierURL value will be used. This is useful to show error messages and some context in the frontend. When using the Loki Operator, set it to the Loki HTTP query frontend service, for example https://loki-query-frontend-http.netobserv.svc:3100/ . statusTLS configuration will be used when statusUrl is set. 
tenantID string tenantID is the Loki X-Scope-OrgID that identifies the tenant for each request. When using the Loki Operator, set it to network , which corresponds to a special tenant mode. timeout string timeout is the maximum time connection / request limit. A timeout of zero means no timeout. tls object TLS client configuration for Loki URL. url string url is the address of an existing Loki service to push the flows to. When using the Loki Operator, set it to the Loki gateway service with the network tenant set in path, for example https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network . 28.9.1.28. .spec.loki.statusTls Description TLS client configuration for Loki status URL. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true, the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS) 28.9.1.29. .spec.loki.statusTls.caCert Description caCert defines the reference of the certificate for the Certificate Authority Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates namespace string Namespace of the config map or secret containing certificates. If omitted, assumes the same namespace as where NetObserv is deployed. If the namespace is different, the config map or the secret will be copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret 28.9.1.30. .spec.loki.statusTls.userCert Description userCert defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS) Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates namespace string Namespace of the config map or secret containing certificates. If omitted, assumes the same namespace as where NetObserv is deployed. If the namespace is different, the config map or the secret will be copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret 28.9.1.31. .spec.loki.tls Description TLS client configuration for Loki URL. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true, the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS) 28.9.1.32. 
.spec.loki.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates namespace string Namespace of the config map or secret containing certificates. If omitted, assumes the same namespace as where NetObserv is deployed. If the namespace is different, the config map or the secret will be copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret 28.9.1.33. .spec.loki.tls.userCert Description userCert defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS) Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates namespace string Namespace of the config map or secret containing certificates. If omitted, assumes the same namespace as where NetObserv is deployed. If the namespace is different, the config map or the secret will be copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret 28.9.1.34. .spec.processor Description processor defines the settings of the component that receives the flows from the agent, enriches them, generates metrics, and forwards them to the Loki persistence layer and/or any available exporter. Type object Property Type Description conversationEndTimeout string conversationEndTimeout is the time to wait after a network flow is received, to consider the conversation ended. This delay is ignored when a FIN packet is collected for TCP flows (see conversationTerminatingTimeout instead). conversationHeartbeatInterval string conversationHeartbeatInterval is the time to wait between "tick" events of a conversation conversationTerminatingTimeout string conversationTerminatingTimeout is the time to wait from detected FIN flag to end a conversation. Only relevant for TCP flows. debug object debug allows setting some aspects of the internal configuration of the flow processor. This section is aimed exclusively for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Users setting its values do it at their own risk. dropUnusedFields boolean dropUnusedFields allows, when set to true, to drop fields that are known to be unused by OVS, to save storage space. enableKubeProbes boolean enableKubeProbes is a flag to enable or disable Kubernetes liveness and readiness probes healthPort integer healthPort is a collector HTTP port in the Pod that exposes the health check API imagePullPolicy string imagePullPolicy is the Kubernetes pull policy for the image defined above kafkaConsumerAutoscaler object kafkaConsumerAutoscaler is the spec of a horizontal pod autoscaler to set up for flowlogs-pipeline-transformer , which consumes Kafka messages. This setting is ignored when Kafka is disabled. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2). 
kafkaConsumerBatchSize integer kafkaConsumerBatchSize indicates to the broker the maximum batch size, in bytes, that the consumer will accept. Ignored when not using Kafka. Default: 10MB. kafkaConsumerQueueCapacity integer kafkaConsumerQueueCapacity defines the capacity of the internal message queue used in the Kafka consumer client. Ignored when not using Kafka. kafkaConsumerReplicas integer kafkaConsumerReplicas defines the number of replicas (pods) to start for flowlogs-pipeline-transformer , which consumes Kafka messages. This setting is ignored when Kafka is disabled. logLevel string logLevel of the processor runtime logTypes string logTypes defines the desired record types to generate. Possible values are: - FLOWS (default) to export regular network flows - CONVERSATIONS to generate events for started conversations, ended conversations as well as periodic "tick" updates - ENDED_CONVERSATIONS to generate only ended conversations events - ALL to generate both network flows and all conversations events metrics object Metrics define the processor configuration regarding metrics port integer Port of the flow collector (host port). By convention, some values are forbidden. It must be greater than 1024 and different from 4500, 4789 and 6081. profilePort integer profilePort allows setting up a Go pprof profiler listening to this port resources object resources are the compute resources required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 28.9.1.35. .spec.processor.debug Description debug allows setting some aspects of the internal configuration of the flow processor. This section is aimed exclusively for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Users setting its values do it at their own risk. Type object Property Type Description env object (string) env allows passing custom environment variables to underlying components. Useful for passing some very concrete performance-tuning options, such as GOGC and GOMAXPROCS, that should not be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debug or support scenarios. 28.9.1.36. .spec.processor.kafkaConsumerAutoscaler Description kafkaConsumerAutoscaler is the spec of a horizontal pod autoscaler to set up for flowlogs-pipeline-transformer , which consumes Kafka messages. This setting is ignored when Kafka is disabled. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2). Type object 28.9.1.37. .spec.processor.metrics Description Metrics define the processor configuration regarding metrics Type object Property Type Description disableAlerts array (string) disableAlerts is a list of alerts that should be disabled. Possible values are: NetObservNoFlows , which is triggered when no flows are being observed for a certain period. NetObservLokiError , which is triggered when flows are being dropped due to Loki errors. ignoreTags array (string) ignoreTags is a list of tags to specify which metrics to ignore. Each metric is associated with a list of tags. More details in https://github.com/netobserv/network-observability-operator/tree/main/controllers/flowlogspipeline/metrics_definitions . Available tags are: egress , ingress , flows , bytes , packets , namespaces , nodes , workloads . server object Metrics server endpoint configuration for Prometheus scraper 28.9.1.38. 
.spec.processor.metrics.server Description Metrics server endpoint configuration for Prometheus scraper Type object Property Type Description port integer The Prometheus HTTP port tls object TLS configuration. 28.9.1.39. .spec.processor.metrics.server.tls Description TLS configuration. Type object Property Type Description provided object TLS configuration when type is set to PROVIDED . type string Select the type of TLS configuration: - DISABLED (default) to not configure TLS for the endpoint. - PROVIDED to manually provide cert file and a key file. - AUTO to use OpenShift Container Platform auto generated certificate using annotations. 28.9.1.40. .spec.processor.metrics.server.tls.provided Description TLS configuration when type is set to PROVIDED . Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates namespace string Namespace of the config map or secret containing certificates. If omitted, assumes the same namespace as where NetObserv is deployed. If the namespace is different, the config map or the secret will be copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret 28.9.1.41. .spec.processor.resources Description resources are the compute resources required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 28.10. Network flows format reference These are the specifications for the network flows format, used both internally and when exporting flows to Kafka. 28.10.1. Network Flows format reference The document is organized in two main categories: Labels and regular Fields . This distinction only matters when querying Loki. This is because Labels , unlike Fields , must be used in stream selectors . If you are reading this specification as a reference for the Kafka export feature, you must treat all Labels and Fields as regular fields and ignore any distinctions between them that are specific to Loki. 28.10.1.1. Labels SrcK8S_Namespace Optional SrcK8S_Namespace : string Source namespace DstK8S_Namespace Optional DstK8S_Namespace : string Destination namespace SrcK8S_OwnerName Optional SrcK8S_OwnerName : string Source owner, such as Deployment, StatefulSet, etc. DstK8S_OwnerName Optional DstK8S_OwnerName : string Destination owner, such as Deployment, StatefulSet, etc. FlowDirection FlowDirection : see the following section, Enumeration: FlowDirection for more details. Flow direction from the node observation point _RecordType Optional _RecordType : RecordType Type of record: 'flowLog' for regular flow logs, or 'allConnections', 'newConnection', 'heartbeat', 'endConnection' for conversation tracking 28.10.1.2.
Fields SrcAddr SrcAddr : string Source IP address (ipv4 or ipv6) DstAddr DstAddr : string Destination IP address (ipv4 or ipv6) SrcMac SrcMac : string Source MAC address DstMac DstMac : string Destination MAC address SrcK8S_Name Optional SrcK8S_Name : string Name of the source matched Kubernetes object, such as Pod name, Service name, etc. DstK8S_Name Optional DstK8S_Name : string Name of the destination matched Kubernetes object, such as Pod name, Service name, etc. SrcK8S_Type Optional SrcK8S_Type : string Kind of the source matched Kubernetes object, such as Pod, Service, etc. DstK8S_Type Optional DstK8S_Type : string Kind of the destination matched Kubernetes object, such as Pod name, Service name, etc. SrcPort SrcPort : number Source port DstPort DstPort : number Destination port SrcK8S_OwnerType Optional SrcK8S_OwnerType : string Kind of the source Kubernetes owner, such as Deployment, StatefulSet, etc. DstK8S_OwnerType Optional DstK8S_OwnerType : string Kind of the destination Kubernetes owner, such as Deployment, StatefulSet, etc. SrcK8S_HostIP Optional SrcK8S_HostIP : string Source node IP DstK8S_HostIP Optional DstK8S_HostIP : string Destination node IP SrcK8S_HostName Optional SrcK8S_HostName : string Source node name DstK8S_HostName Optional DstK8S_HostName : string Destination node name Proto Proto : number L4 protocol Interface Optional Interface : string Network interface Packets Packets : number Number of packets in this flow Packets_AB Optional Packets_AB : number In conversation tracking, A to B packets counter per conversation Packets_BA Optional Packets_BA : number In conversation tracking, B to A packets counter per conversation Bytes Bytes : number Number of bytes in this flow Bytes_AB Optional Bytes_AB : number In conversation tracking, A to B bytes counter per conversation Bytes_BA Optional Bytes_BA : number In conversation tracking, B to A bytes counter per conversation TimeFlowStartMs TimeFlowStartMs : number Start timestamp of this flow, in milliseconds TimeFlowEndMs TimeFlowEndMs : number End timestamp of this flow, in milliseconds TimeReceived TimeReceived : number Timestamp when this flow was received and processed by the flow collector, in seconds _HashId Optional _HashId : string In conversation tracking, the conversation identifier _IsFirst Optional _IsFirst : string In conversation tracking, a flag identifying the first flow numFlowLogs Optional numFlowLogs : number In conversation tracking, a counter of flow logs per conversation 28.10.1.3. Enumeration: FlowDirection Ingress Ingress = "0" Incoming traffic, from node observation point Egress Egress = "1" Outgoing traffic, from node observation point 28.11. Troubleshooting Network Observability To assist in troubleshooting Network Observability issues, you can perform some troubleshooting actions. 28.11.1. Using the must-gather tool You can use the must-gather tool to collect information about the Network Observability Operator resources and cluster-wide resources, such as pod logs, FlowCollector , and webhook configurations. Procedure Navigate to the directory where you want to store the must-gather data. Run the following command to collect cluster-wide must-gather resources: USD oc adm must-gather --image-stream=openshift/must-gather \ --image=quay.io/netobserv/must-gather 28.11.2. 
Configuring network traffic menu entry in the OpenShift Container Platform console Manually configure the network traffic menu entry in the OpenShift Container Platform console when the network traffic menu entry is not listed in the Observe menu in the OpenShift Container Platform console. Prerequisites You have installed OpenShift Container Platform version 4.10 or newer. Procedure Check if the spec.consolePlugin.register field is set to true by running the following command: USD oc -n netobserv get flowcollector cluster -o yaml Example output Optional: Add the netobserv-plugin plugin by manually editing the Console Operator config: USD oc edit console.operator.openshift.io cluster Example output Optional: Set the spec.consolePlugin.register field to true by running the following command: USD oc -n netobserv edit flowcollector cluster -o yaml Example output Ensure the status of console pods is running by running the following command: USD oc get pods -n openshift-console -l app=console Restart the console pods by running the following command: USD oc delete pods -n openshift-console -l app=console Clear your browser cache and history. Check the status of Network Observability plugin pods by running the following command: USD oc get pods -n netobserv -l app=netobserv-plugin Example output Check the logs of the Network Observability plugin pods by running the following command: USD oc logs -n netobserv -l app=netobserv-plugin Example output time="2022-12-13T12:06:49Z" level=info msg="Starting netobserv-console-plugin [build version: , build date: 2022-10-21 15:15] at log level info" module=main time="2022-12-13T12:06:49Z" level=info msg="listening on https://:9001" module=server 28.11.3. Flowlogs-Pipeline does not consume network flows after installing Kafka If you deployed the flow collector first with deploymentModel: KAFKA and then deployed Kafka, the flow collector might not connect correctly to Kafka. Manually restart the flow-pipeline pods where Flowlogs-pipeline does not consume network flows from Kafka. Procedure Delete the flow-pipeline pods to restart them by running the following command: USD oc delete pods -n netobserv -l app=flowlogs-pipeline-transformer 28.11.4. Failing to see network flows from both br-int and br-ex interfaces br-ex and br-int are virtual bridge devices operated at OSI layer 2. The eBPF agent works at the IP and TCP levels, layers 3 and 4 respectively. You can expect that the eBPF agent captures the network traffic passing through br-ex and br-int , as long as the network traffic is processed by other interfaces such as physical host or virtual pod interfaces. If you restrict the eBPF agent network interfaces to attach only to br-ex and br-int , you do not see any network flow. Manually remove the part in the interfaces or excludeInterfaces that restricts the network interfaces to br-int and br-ex . Procedure Remove the interfaces: [ 'br-int', 'br-ex' ] field. This allows the agent to fetch information from all the interfaces. Alternatively, you can specify the Layer-3 interface, for example eth0 . Run the following command: USD oc edit -n netobserv flowcollector.yaml -o yaml Example output 1 Specifies the network interfaces. 28.11.5. Network Observability controller manager pod runs out of memory You can increase memory limits for the Network Observability operator by patching the Cluster Service Version (CSV) if the Network Observability controller manager pod runs out of memory.
Procedure Run the following command to patch the CSV: USD oc -n netobserv patch csv network-observability-operator.v1.0.0 --type='json' -p='[{"op": "replace", "path":"/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory", value: "1Gi"}]' Example output Run the following command to view the updated CSV: USD oc -n netobserv get csv network-observability-operator.v1.0.0 -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].resources.limits.memory}' 1Gi | [
"apiVersion: v1 kind: Secret metadata: name: loki-s3 namespace: netobserv stringData: access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: size: 1x.small storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: loki-s3 type: s3 storageClassName: gp3 1 tenants: mode: openshift-network",
"spec: limits: global: ingestion: ingestionBurstSize: 40 ingestionRate: 20 maxGlobalStreamsPerTenant: 25000 queries: maxChunksPerQuery: 2000000 maxEntriesLimitPerQuery: 10000 maxQuerySeries: 3000",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: netobserv-reader 1 rules: - apiGroups: - 'loki.grafana.com' resources: - network resourceNames: - logs verbs: - 'get'",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: netobserv-writer rules: - apiGroups: - 'loki.grafana.com' resources: - network resourceNames: - logs verbs: - 'create'",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: netobserv-writer-flp roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: netobserv-writer subjects: - kind: ServiceAccount name: flowlogs-pipeline 1 namespace: netobserv - kind: ServiceAccount name: flowlogs-pipeline-transformer namespace: netobserv",
"oc adm policy add-cluster-role-to-user netobserv-reader user1",
"oc get flowcollector/cluster",
"NAME AGENT SAMPLING (EBPF) DEPLOYMENT MODEL STATUS cluster EBPF 50 DIRECT Ready",
"oc get pods -n netobserv",
"NAME READY STATUS RESTARTS AGE flowlogs-pipeline-56hbp 1/1 Running 0 147m flowlogs-pipeline-9plvv 1/1 Running 0 147m flowlogs-pipeline-h5gkb 1/1 Running 0 147m flowlogs-pipeline-hh6kf 1/1 Running 0 147m flowlogs-pipeline-w7vv5 1/1 Running 0 147m netobserv-plugin-cdd7dc6c-j8ggp 1/1 Running 0 147m",
"oc get pods -n netobserv-privileged",
"NAME READY STATUS RESTARTS AGE netobserv-ebpf-agent-4lpp6 1/1 Running 0 151m netobserv-ebpf-agent-6gbrk 1/1 Running 0 151m netobserv-ebpf-agent-klpl9 1/1 Running 0 151m netobserv-ebpf-agent-vrcnf 1/1 Running 0 151m netobserv-ebpf-agent-xf5jh 1/1 Running 0 151m",
"oc get pods -n openshift-operators-redhat",
"NAME READY STATUS RESTARTS AGE loki-operator-controller-manager-5f6cff4f9d-jq25h 2/2 Running 0 18h lokistack-compactor-0 1/1 Running 0 18h lokistack-distributor-654f87c5bc-qhkhv 1/1 Running 0 18h lokistack-distributor-654f87c5bc-skxgm 1/1 Running 0 18h lokistack-gateway-796dc6ff7-c54gz 2/2 Running 0 18h lokistack-index-gateway-0 1/1 Running 0 18h lokistack-index-gateway-1 1/1 Running 0 18h lokistack-ingester-0 1/1 Running 0 18h lokistack-ingester-1 1/1 Running 0 18h lokistack-ingester-2 1/1 Running 0 18h lokistack-querier-66747dc666-6vh5x 1/1 Running 0 18h lokistack-querier-66747dc666-cjr45 1/1 Running 0 18h lokistack-querier-66747dc666-xh8rq 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-b2xfb 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-jm94f 1/1 Running 0 18h",
"oc describe flowcollector/cluster",
"apiVersion: flows.netobserv.io/v1beta1 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: DIRECT agent: type: EBPF 1 ebpf: sampling: 50 2 logLevel: info privileged: false resources: requests: memory: 50Mi cpu: 100m limits: memory: 800Mi processor: logLevel: info resources: requests: memory: 100Mi cpu: 100m limits: memory: 800Mi conversationEndTimeout: 10s logTypes: FLOWS 3 conversationHeartbeatInterval: 30s loki: 4 url: 'https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network' statusUrl: 'https://loki-query-frontend-http.netobserv.svc:3100/' authToken: FORWARD tls: enable: true caCert: type: configmap name: loki-gateway-ca-bundle certFile: service-ca.crt consolePlugin: register: true logLevel: info portNaming: enable: true portNames: \"3100\": loki quickFilters: 5 - name: Applications filter: src_namespace!: 'openshift-,netobserv' dst_namespace!: 'openshift-,netobserv' default: true - name: Infrastructure filter: src_namespace: 'openshift-,netobserv' dst_namespace: 'openshift-,netobserv' - name: Pods network filter: src_kind: 'Pod' dst_kind: 'Pod' default: true - name: Services network filter: dst_kind: 'Service'",
"deploymentModel: KAFKA 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" 2 topic: network-flows 3 tls: enable: false 4",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: exporters: - type: KAFKA kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" topic: netobserv-flows-export 1 tls: enable: false 2",
"oc patch flowcollector cluster --type=json -p \"[{\"op\": \"replace\", \"path\": \"/spec/agent/ebpf/sampling\", \"value\": <new value>}] -n netobserv\"",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-ingress namespace: netobserv spec: podSelector: {} 1 ingress: - from: - podSelector: {} 2 namespaceSelector: 3 matchLabels: kubernetes.io/metadata.name: openshift-console - podSelector: {} namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring policyTypes: - Ingress status: {}",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: processor: conversationEndTimeout: 10s 1 logTypes: FLOWS 2 conversationHeartbeatInterval: 30s 3",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: processor: metrics: disableAlerts: [NetObservLokiError, NetObservNoFlows] 1",
"oc adm must-gather --image-stream=openshift/must-gather --image=quay.io/netobserv/must-gather",
"oc -n netobserv get flowcollector cluster -o yaml",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: false",
"oc edit console.operator.openshift.io cluster",
"spec: plugins: - netobserv-plugin",
"oc -n netobserv edit flowcollector cluster -o yaml",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: true",
"oc get pods -n openshift-console -l app=console",
"oc delete pods -n openshift-console -l app=console",
"oc get pods -n netobserv -l app=netobserv-plugin",
"NAME READY STATUS RESTARTS AGE netobserv-plugin-68c7bbb9bb-b69q6 1/1 Running 0 21s",
"oc logs -n netobserv -l app=netobserv-plugin",
"time=\"2022-12-13T12:06:49Z\" level=info msg=\"Starting netobserv-console-plugin [build version: , build date: 2022-10-21 15:15] at log level info\" module=main time=\"2022-12-13T12:06:49Z\" level=info msg=\"listening on https://:9001\" module=server",
"oc delete pods -n netobserv -l app=flowlogs-pipeline-transformer",
"oc edit -n netobserv flowcollector.yaml -o yaml",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: agent: type: EBPF ebpf: interfaces: [ 'br-int', 'br-ex' ] 1",
"oc -n netobserv patch csv network-observability-operator.v1.0.0 --type='json' -p='[{\"op\": \"replace\", \"path\":\"/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory\", value: \"1Gi\"}]'",
"clusterserviceversion.operators.coreos.com/network-observability-operator.v1.0.0 patched",
"oc -n netobserv get csv network-observability-operator.v1.0.0 -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].resources.limits.memory}' 1Gi"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/network-observability |
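As a rough illustration of the Network Flows format described in section 28.10, the following is a sketch of a single flow record written as JSON. Only a subset of the documented Labels and Fields is shown, all values are made up, and whether the Kafka export uses exactly this serialization depends on the version and configuration of your deployment, so treat it as illustrative only.

{
  "SrcK8S_Namespace": "app-ns",
  "DstK8S_Namespace": "openshift-ingress",
  "SrcK8S_OwnerName": "frontend",
  "DstK8S_OwnerName": "router-default",
  "FlowDirection": "0",
  "SrcAddr": "10.128.2.15",
  "DstAddr": "10.129.0.8",
  "SrcPort": 44321,
  "DstPort": 443,
  "Proto": 6,
  "Bytes": 12840,
  "Packets": 18,
  "TimeFlowStartMs": 1670932009000,
  "TimeFlowEndMs": 1670932012000,
  "TimeReceived": 1670932013
}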
Chapter 14. Adding and removing Kafka brokers and ZooKeeper nodes | Chapter 14. Adding and removing Kafka brokers and ZooKeeper nodes In a Kafka cluster, managing the addition and removal of brokers and ZooKeeper nodes is critical to maintaining a stable and scalable system. When you increase the number of available brokers, you can configure the default replication factor and minimum in-sync replicas for topics across the brokers. You can use dynamic reconfiguration to add and remove ZooKeeper nodes from an ensemble without disruption. 14.1. Scaling clusters by adding or removing brokers Scaling Kafka clusters by adding brokers can increase the performance and reliability of the cluster. Adding more brokers increases available resources, allowing the cluster to handle larger workloads and process more messages. It can also improve fault tolerance by providing more replicas and backups. Conversely, removing underutilized brokers can reduce resource consumption and improve efficiency. Scaling must be done carefully to avoid disruption or data loss. By redistributing partitions across all brokers in the cluster, the resource utilization of each broker is reduced, which can increase the overall throughput of the cluster. Note To increase the throughput of a Kafka topic, you can increase the number of partitions for that topic. This allows the load of the topic to be shared between different brokers in the cluster. However, if every broker is constrained by a specific resource (such as I/O), adding more partitions will not increase the throughput. In this case, you need to add more brokers to the cluster. Adding brokers when running a multi-node Kafka cluster affects the number of brokers in the cluster that act as replicas. The actual replication factor for topics is determined by settings for the default.replication.factor and min.insync.replicas , and the number of available brokers. For example, a replication factor of 3 means that each partition of a topic is replicated across three brokers, ensuring fault tolerance in the event of a broker failure. Example replica configuration default.replication.factor = 3 min.insync.replicas = 2 When you add or remove brokers, Kafka does not automatically reassign partitions. The best way to do this is to use Cruise Control. You can use Cruise Control's add-brokers and remove-brokers modes when scaling a cluster up or down. Use the add-brokers mode after scaling up a Kafka cluster to move partition replicas from existing brokers to the newly added brokers. Use the remove-brokers mode before scaling down a Kafka cluster to move partition replicas off the brokers that are going to be removed. A sketch of invoking these modes through the Cruise Control REST API is shown after the command examples below. 14.2. Adding nodes to a ZooKeeper cluster Use dynamic reconfiguration to add nodes to a ZooKeeper cluster without stopping the entire cluster. Dynamic Reconfiguration allows ZooKeeper to change the membership of a set of nodes that make up the ZooKeeper cluster without interruption. Prerequisites Dynamic reconfiguration is enabled in the ZooKeeper configuration file ( reconfigEnabled=true ). ZooKeeper authentication is enabled and you can access the new server using the authentication mechanism. Procedure Perform the following steps for each ZooKeeper server you are adding, one at a time: Add a server to the ZooKeeper cluster as described in Section 4.1, "Running a multi-node ZooKeeper cluster" and then start ZooKeeper. Note the IP address and configured access ports of the new server. Start a zookeeper-shell session for the server.
Run the following command from a machine that has access to the cluster (this might be one of the ZooKeeper nodes or your local machine, if it has access). ./bin/zookeeper-shell.sh <ip-address>:<zk-port> In the shell session, with the ZooKeeper node running, enter the following line to add the new server to the quorum as a voting member: reconfig -add server.<positive-id> = <address1>:<port1>:<port2>[:role];[<client-port-address>:]<client-port> For example: reconfig -add server.4=172.17.0.4:2888:3888:participant;172.17.0.4:2181 Where <positive-id> is the new server ID 4 . For the two ports, <port1> 2888 is for communication between ZooKeeper servers, and <port2> 3888 is for leader election. The new configuration propagates to the other servers in the ZooKeeper cluster; the new server is now a full member of the quorum. 14.3. Removing nodes from a ZooKeeper cluster Use dynamic reconfiguration to remove nodes from a ZooKeeper cluster without stopping the entire cluster. Dynamic Reconfiguration allows ZooKeeper to change the membership of a set of nodes that make up the ZooKeeper cluster without interruption. Prerequisites Dynamic reconfiguration is enabled in the ZooKeeper configuration file ( reconfigEnabled=true ). ZooKeeper authentication is enabled and you can access the new server using the authentication mechanism. Procedure Perform the following steps, one at a time, for each ZooKeeper server you remove: Log in to the zookeeper-shell on one of the servers that will be retained after the scale down (for example, server 1). Note Access the server using the authentication mechanism configured for the ZooKeeper cluster. Remove a server, for example server 5. Deactivate the server that you removed. | [
"default.replication.factor = 3 min.insync.replicas = 2",
"./bin/zookeeper-shell.sh <ip-address>:<zk-port>",
"reconfig -add server.<positive-id> = <address1>:<port1>:<port2>[:role];[<client-port-address>:]<client-port>",
"reconfig -add server.4=172.17.0.4:2888:3888:participant;172.17.0.4:2181",
"reconfig -remove 5"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-scaling-clusters-str |
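Section 14.1 recommends Cruise Control's add-brokers and remove-brokers modes for reassigning partitions after scaling. As a minimal sketch, these modes correspond to the add_broker and remove_broker endpoints of the Cruise Control REST API; the host, port (9090 is a common default), security settings, and additional parameters are assumptions to adjust for your own Cruise Control deployment:

# Dry run first: ask Cruise Control for a proposal that moves replicas onto new broker 4
curl -X POST 'http://localhost:9090/kafkacruisecontrol/add_broker?brokerid=4&dryrun=true'

# Before scaling down, move replicas off broker 3 and execute the plan
curl -X POST 'http://localhost:9090/kafkacruisecontrol/remove_broker?brokerid=3&dryrun=false'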
Chapter 55. Browse Component | Chapter 55. Browse Component Available as of Camel version 1.3 The Browse component provides a simple BrowsableEndpoint which can be useful for testing, visualisation tools or debugging. The exchanges sent to the endpoint are all available to be browsed. 55.1. URI format Where someName can be any string to uniquely identify the endpoint. 55.2. Options The Browse component has no options. The Browse endpoint is configured using URI syntax: with the following path and query parameters: 55.2.1. Path Parameters (1 parameters): Name Description Default Type name Required A name which can be any string to uniquely identify the endpoint String 55.2.2. Query Parameters (4 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN/ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this options is not in use. By default the consumer will deal with exceptions, that will be logged at WARN/ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the default exchange pattern when creating an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 55.3. Sample In the route below, we insert a browse: component to be able to browse the Exchanges that are passing through: from("activemq:order.in").to("browse:orderReceived").to("bean:processOrder"); We can now inspect the received exchanges from within the Java code: private CamelContext context; public void inspectRecievedOrders() { BrowsableEndpoint browse = context.getEndpoint("browse:orderReceived", BrowsableEndpoint.class); List<Exchange> exchanges = browse.getExchanges(); // then we can inspect the list of received exchanges from Java for (Exchange exchange : exchanges) { String payload = exchange.getIn().getBody(); // do something with payload } } 55.4. See Also Configuring Camel Component Endpoint Getting Started | [
"browse:someName[?options]",
"browse:name",
"from(\"activemq:order.in\").to(\"browse:orderReceived\").to(\"bean:processOrder\");",
"private CamelContext context; public void inspectRecievedOrders() { BrowsableEndpoint browse = context.getEndpoint(\"browse:orderReceived\", BrowsableEndpoint.class); List<Exchange> exchanges = browse.getExchanges(); // then we can inspect the list of received exchanges from Java for (Exchange exchange : exchanges) { String payload = exchange.getIn().getBody(); // do something with payload } }"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/browse-component |
3.4. Removing the Cluster Configuration | 3.4. Removing the Cluster Configuration To remove all cluster configuration files and stop all cluster services, thus permanently destroying a cluster, use the following command. Warning This command permanently removes any cluster configuration that has been created. It is recommended that you run pcs cluster stop before destroying the cluster. | [
"pcs cluster destroy"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-clusterremove-haar |
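As noted in section 3.4 above, it is recommended to stop the cluster before destroying its configuration. A minimal sketch of that sequence is shown below; the --all flag stops cluster services on every node, and its availability depends on your pcs version, so adjust or run the commands per node if needed:

# Stop cluster services on all nodes before removing the configuration
pcs cluster stop --all

# Permanently remove the cluster configuration (run on each node if --all is unavailable)
pcs cluster destroy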
Chapter 5. Disaster recovery with stretch cluster for OpenShift Data Foundation | Chapter 5. Disaster recovery with stretch cluster for OpenShift Data Foundation Red Hat OpenShift Data Foundation deployment can be stretched between two different geographical locations to provide the storage infrastructure with disaster recovery capabilities. When faced with a disaster, such as when one of the two locations is partially or totally unavailable, OpenShift Data Foundation deployed on the OpenShift Container Platform deployment must be able to survive. This solution is available only for metropolitan spanned data centers with specific latency requirements between the servers of the infrastructure. Note The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms maximum round-trip time (RTT) between the zones containing data volumes. For Arbiter nodes, follow the latency requirements specified for etcd; see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites (Data Centers/Regions) . Contact Red Hat Customer Support if you are planning to deploy with higher latencies. The following diagram shows the simplest deployment for a stretched cluster: OpenShift nodes and OpenShift Data Foundation daemons In the diagram the OpenShift Data Foundation monitor pod deployed in the Arbiter zone has a built-in tolerance for the master nodes. The diagram shows the master nodes in each Data Zone which are required for a highly available OpenShift Container Platform control plane. Also, it is important that the OpenShift Container Platform nodes in one of the zones have network connectivity with the OpenShift Container Platform nodes in the other two zones. 5.1. Requirements for enabling stretch cluster Ensure you have addressed OpenShift Container Platform requirements for deployments spanning multiple sites. For more information, see the knowledgebase article on cluster deployments spanning multiple sites . Ensure that you have at least three OpenShift Container Platform master nodes in three different zones. One master node in each of the three zones. Ensure that you have at least four OpenShift Container Platform worker nodes evenly distributed across the two Data Zones. For stretch clusters on bare metal, use the SSD drive as the root drive for OpenShift Container Platform master nodes. Ensure that each node is pre-labeled with its zone label. For more information, see the Applying topology zone labels to OpenShift Container Platform nodes section. The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms between zones. Contact Red Hat Customer Support if you are planning to deploy with higher latencies. Note Flexible scaling and Arbiter cannot both be enabled at the same time because they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas in an Arbiter cluster, you need to add at least one node in each of the two data zones. 5.2. Applying topology zone labels to OpenShift Container Platform nodes During a site outage, the zone that has the arbiter function makes use of the arbiter label. These labels are arbitrary and must be unique for the three locations.
For example, you can label the nodes as follows: To apply the labels to the node: <NODENAME> Is the name of the node <LABEL> Is the topology zone label To validate the labels using the example labels for the three zones: <LABEL> Is the topology zone label Alternatively, you can run a single command to see all the nodes with its zone. The stretch cluster topology zone labels are now applied to the appropriate OpenShift Container Platform nodes to define the three locations. step Install the local storage operator from the OpenShift Container Platform web console . 5.3. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.4. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least four worker nodes evenly distributed across two data centers in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see Planning your deployment . Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in command-line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to search for the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you selected Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . 
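If you also want to confirm the operator installation from the command line before walking through the console verification steps that follow, a quick check along these lines works; this is an optional sketch that assumes the recommended openshift-storage namespace was used:
# List the ClusterServiceVersions in the operator namespace; the
# OpenShift Data Foundation entry should eventually report Succeeded.
oc get csv -n openshift-storage
# Check that the operator pods are running.
oc get pods -n openshift-storage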
Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. steps Create an OpenShift Data Foundation cluster . 5.5. Creating OpenShift Data Foundation cluster Prerequisites Ensure that you have met all the requirements in Requirements for enabling stretch cluster section. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the Create a new StorageClass using the local storage devices option. Click . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on selected nodes. Important If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Select SSD or NVMe to build a supported configuration. You can select HDDs for unsupported test installations. Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Select Enable arbiter checkbox if you want to use the stretch clusters. This option is available only when all the prerequisites for arbiter are fulfilled and the selected nodes are populated. For more information, see Arbiter stretch cluster requirements in Requirements for enabling stretch cluster . Select the arbiter zone from the dropdown list. Choose a performance profile for Configure performance . You can also configure the performance profile after the deployment using the Configure performance option from the options menu of the StorageSystems tab. 
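To gauge how much CPU and memory headroom the selected nodes actually have before you pick a profile, a quick look from the command line can help. The following is an optional sketch; the node name is a placeholder and the second command assumes the cluster metrics API is available:
# Compare requested versus allocatable resources on a node.
oc describe node <node-name> | grep -A 8 "Allocated resources"
# Show a current usage snapshot for all nodes (requires cluster metrics).
oc adm top nodes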
Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one of the following Encryption level : Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Network is set to Default (OVN) if you are using a single network. You can switch to Custom (Multus) if you are using multiple network interfaces and then choose any one of the following: Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. 
Note If you are using only one additional network interface, select the single NetworkAttachementDefinition , that is, ocs-public-cluster for the Public Network Interface, and leave the Cluster Network Interface blank. Click . In the Data Protection page, click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark to it. For arbiter mode of deployment: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the arbiter key in the spec section and ensure enable is set to true . To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . 5.6. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 5.6.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information about the expected number of pods for each component and how it varies depending on the number of nodes, see Table 5.1, "Pods corresponding to OpenShift Data Foundation cluster" . Click the Running and Completed tabs to verify that the following pods are in Running and Completed state: Table 5.1. 
Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (5 pods distributed across 3 zones: 2 per data-center zone and 1 in the arbiter zone) MGR rook-ceph-mgr-* (2 pods on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across 2 data-center zones) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (2 pods distributed across 2 data-center zones) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node and 1 pod in the arbiter zone) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 5.6.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 5.6.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway has only a single copy of its database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, the result can be total loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 5.6.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw 5.7. 
Install Zone Aware Sample Application Deploy a zone aware sample application to validate whether an OpenShift Data Foundation, stretch cluster setup is configured correctly. Important With latency between the data zones, you can expect to see performance degradation compared to an OpenShift cluster with low latency between nodes and zones (for example, all nodes in the same location). The rate of or amount of performance degradation depends on the latency between the zones and on the application behavior using the storage (such as heavy write traffic). Ensure that you test the critical applications with stretch cluster configuration to ensure sufficient application performance for the required service levels. A ReadWriteMany (RWX) Persistent Volume Claim (PVC) is created using the ocs-storagecluster-cephfs storage class. Multiple pods use the newly created RWX PVC at the same time. The application used is called File Uploader. Demonstration on how an application is spread across topology zones so that it is still available in the event of a site outage: Note This demonstration is possible since this application shares the same RWX volume for storing files. It works for persistent data access as well because Red Hat OpenShift Data Foundation is configured as a stretched cluster with zone awareness and high availability. Create a new project. Deploy the example PHP application called file-uploader. Example Output: View the build log and wait until the application is deployed. Example Output: The command prompt returns out of the tail mode after you see Push successful . Note The new-app command deploys the application directly from the git repository and does not use the OpenShift template, hence the OpenShift route resource is not created by default. You need to create the route manually. Scaling the application Scale the application to four replicas and expose its services to make the application zone aware and available. You should have four file-uploader pods in a few minutes. Repeat the above command until there are 4 file-uploader pods in the Running status. Create a PVC and attach it into an application. This command: Creates a PVC. Updates the application deployment to include a volume definition. Updates the application deployment to attach a volume mount into the specified mount-path. Creates a new deployment with the four application pods. Check the result of adding the volume. Example Output: Notice the ACCESS MODE is set to RWX. All the four file-uploader pods are using the same RWX volume. Without this access mode, OpenShift does not attempt to attach multiple pods to the same Persistent Volume (PV) reliably. If you attempt to scale up the deployments that are using ReadWriteOnce (RWO) PV, the pods may get colocated on the same node. 5.7.1. Scaling the application after installation Procedure Scale the application to four replicas and expose its services to make the application zone aware and available. You should have four file-uploader pods in a few minutes. Repeat the above command until there are 4 file-uploader pods in the Running status. Create a PVC and attach it into an application. This command: Creates a PVC. Updates the application deployment to include a volume definition. Updates the application deployment to attach a volume mount into the specified mount-path. Creates a new deployment with the four application pods. Check the result of adding the volume. Example Output: Notice the ACCESS MODE is set to RWX. 
All four file-uploader pods are using the same RWX volume. Without this access mode, OpenShift does not attempt to attach multiple pods to the same Persistent Volume (PV) reliably. If you attempt to scale up deployments that are using a ReadWriteOnce (RWO) PV, the pods may get colocated on the same node. 5.7.2. Modify Deployment to be Zone Aware Currently, the file-uploader Deployment is not zone aware and can schedule all the pods in the same zone. In this case, if there is a site outage, the application is unavailable. For more information, see Controlling pod placement by using pod topology spread constraints . Add the pod placement rule in the application deployment configuration to make the application zone aware. Run the following command, and review the output: Example Output: Edit the deployment to use the topology zone labels. Add the following new lines between the Start and End (shown in the output in the previous step): Example output: Scale down the deployment to zero pods and then back to four pods. This is needed because the deployment changed in terms of pod placement. Scaling down to zero pods Example output: Scaling up to four pods Example output: Verify that the four pods are spread across the four nodes in the datacenter1 and datacenter2 zones. Example output: Search for the zone labels used. Example output: Use your browser to open the file-uploader web application and upload new files. Find the route that is created. Example Output: Point your browser to the web application using the route from the previous step. The web application lists all the uploaded files and offers the ability to upload new ones as well as download the existing data. Right now, there is nothing. Select an arbitrary file from your local machine and upload it to the application. Click Choose file to select an arbitrary file. Click Upload . Figure 5.1. A simple PHP-based file upload tool Click List uploaded files to see the list of all currently uploaded files. Note The OpenShift Container Platform image registry, ingress routing, and monitoring services are not zone aware. 5.8. Recovering OpenShift Data Foundation stretch cluster Given that the stretch cluster disaster recovery solution is designed to provide resiliency in the face of a complete or partial site outage, it is important to understand the different methods of recovery for applications and their storage. How the application is architected determines how soon it becomes available again on the active zone. There are different methods of recovery for applications and their storage depending on the site outage. The recovery time depends on the application architecture. The different methods of recovery are as follows: Recovering zone-aware HA applications with RWX storage . Recovering HA applications with RWX storage . Recovering applications with RWO storage . Recovering StatefulSet pods . 5.8.1. Understanding zone failure For the purpose of this section, zone failure is considered as a failure where all OpenShift Container Platform master and worker nodes in a zone are no longer communicating with the resources in the second data zone (for example, powered down nodes). If communication between the data zones is still partially working (intermittently up or down), the cluster, storage, and network admins should disconnect the communication path between the data zones for recovery to succeed. 
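When simulating or recovering from a zone failure, it helps to know exactly which nodes belong to the affected data zone. Using the example zone labels applied earlier, a selector query along these lines lists them; the zone name here is illustrative:
# List all nodes labeled for one data zone.
oc get nodes -l topology.kubernetes.io/zone=datacenter1
# Show which of those nodes report NotReady during the outage.
oc get nodes -l topology.kubernetes.io/zone=datacenter1 | grep NotReady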
Important When you install the sample application, power off the OpenShift Container Platform nodes (at least the nodes with OpenShift Data Foundation devices) to test the failure of a data zone in order to validate that your file-uploader application is available, and you can upload new files. 5.8.2. Recovering zone-aware HA applications with RWX storage Applications that are deployed with topologyKey: topology.kubernetes.io/zone have one or more replicas scheduled in each data zone, and are using shared storage, that is, ReadWriteMany (RWX) CephFS volume, terminate themselves in the failed zone after few minutes and new pods are rolled in and stuck in pending state until the zones are recovered. An example of this type of application is detailed in the Install Zone Aware Sample Application section. Important During zone recovery if application pods go into CrashLoopBackOff (CLBO) state with permission denied error while mounting the CephFS volume, then restart the nodes where the pods are scheduled. Wait for some time and then check if the pods are running again. 5.8.3. Recovering HA applications with RWX storage Applications that are using topologyKey: kubernetes.io/hostname or no topology configuration have no protection against all of the application replicas being in the same zone. Note This can happen even with podAntiAffinity and topologyKey: kubernetes.io/hostname in the Pod spec because this anti-affinity rule is host-based and not zone-based. If this happens and all replicas are located in the zone that fails, the application using ReadWriteMany (RWX) storage takes 6-8 minutes to recover on the active zone. This pause is for the OpenShift Container Platform nodes in the failed zone to become NotReady (60 seconds) and then for the default pod eviction timeout to expire (300 seconds). 5.8.4. Recovering applications with RWO storage Applications that use ReadWriteOnce (RWO) storage have a known behavior described in this Kubernetes issue . Because of this issue, if there is a data zone failure, any application pods in that zone mounting RWO volumes (for example, cephrbd based volumes) are stuck with Terminating status after 6-8 minutes and are not re-created on the active zone without manual intervention. Check the OpenShift Container Platform nodes with a status of NotReady . There may be an issue that prevents the nodes from communicating with the OpenShift control plane. However, the nodes may still be performing I/O operations against Persistent Volumes (PVs). If two pods are concurrently writing to the same RWO volume, there is a risk of data corruption. Ensure that processes on the NotReady node are either terminated or blocked until they are terminated. Example solutions: Use an out of band management system to power off a node, with confirmation, to ensure process termination. Withdraw a network route that is used by nodes at a failed site to communicate with storage. Note Before restoring service to the failed zone or nodes, confirm that all the pods with PVs have terminated successfully. To get the Terminating pods to recreate on the active zone, you can either force delete the pod or delete the finalizer on the associated PV. Once one of these two actions are completed, the application pod should recreate on the active zone and successfully mount its RWO storage. Force deleting the pod Force deletions do not wait for confirmation from the kubelet that the pod has been terminated. 
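For reference, the force-delete command takes the following shape (it is also included in the command listing for this chapter):
# Force delete a Terminating pod so that it can be recreated in the active zone.
oc delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>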
<PODNAME> Is the name of the pod <NAMESPACE> Is the project namespace Deleting the finalizer on the associated PV Find the associated PV for the Persistent Volume Claim (PVC) that is mounted by the Terminating pod and delete the finalizer using the oc patch command. <PV_NAME> Is the name of the PV An easy way to find the associated PV is to describe the Terminating pod. If you see a multi-attach warning, it should have the PV names in the warning (for example, pvc-0595a8d2-683f-443b-aee0-6e547f5f5a7c ). <PODNAME> Is the name of the pod <NAMESPACE> Is the project namespace Example output: 5.8.5. Recovering StatefulSet pods Pods that are part of a StatefulSet have a similar issue as pods mounting ReadWriteOnce (RWO) volumes. More information is referenced in the Kubernetes resource StatefulSet considerations . To get the pods part of a StatefulSet to re-create on the active zone after 6-8 minutes you need to force delete the pod with the same requirements (that is, OpenShift Container Platform node powered off or communication disconnected) as pods with RWO volumes. | [
"topology.kubernetes.io/zone=arbiter for Master0 topology.kubernetes.io/zone=datacenter1 for Master1, Worker1, Worker2 topology.kubernetes.io/zone=datacenter2 for Master2, Worker3, Worker4",
"oc label node <NODENAME> topology.kubernetes.io/zone= <LABEL>",
"oc get nodes -l topology.kubernetes.io/zone= <LABEL> -o name",
"oc get nodes -L topology.kubernetes.io/zone",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"spec: arbiter: enable: true [..] nodeTopologies: arbiterLocation: arbiter #arbiter zone storageDeviceSets: - config: {} count: 1 [..] replica: 4 status: conditions: [..] failureDomain: zone",
"oc new-project my-shared-storage",
"oc new-app openshift/php:latest~https://github.com/mashetty330/openshift-php-upload-demo --name=file-uploader",
"Found image 4f2dcc0 (9 days old) in image stream \"openshift/php\" under tag \"7.2-ubi8\" for \"openshift/php:7.2- ubi8\" Apache 2.4 with PHP 7.2 ----------------------- PHP 7.2 available as container is a base platform for building and running various PHP 7.2 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common use of PHP coding is probably as a replacement for CGI scripts. Tags: builder, php, php72, php-72 * A source build using source code from https://github.com/christianh814/openshift-php-upload-demo will be cr eated * The resulting image will be pushed to image stream tag \"file-uploader:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources imagestream.image.openshift.io \"file-uploader\" created buildconfig.build.openshift.io \"file-uploader\" created deployment.apps \"file-uploader\" created service \"file-uploader\" created --> Success Build scheduled, use 'oc logs -f buildconfig/file-uploader' to track its progress. Application is not exposed. You can expose services to the outside world by executing one or more of the commands below: 'oc expose service/file-uploader' Run 'oc status' to view your app.",
"oc logs -f bc/file-uploader -n my-shared-storage",
"Cloning \"https://github.com/christianh814/openshift-php-upload-demo\" [...] Generating dockerfile with builder image image-registry.openshift-image-regis try.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610c 0e05b593844b41d5494ea STEP 1: FROM image-registry.openshift-image-registry.svc:5000/openshift/php@s ha256:d97466f33999951739a76bce922ab17088885db610c0e05b593844b41d5494ea STEP 2: LABEL \"io.openshift.build.commit.author\"=\"Christian Hernandez <christ [email protected]>\" \"io.openshift.build.commit.date\"=\"Sun Oct 1 1 7:15:09 2017 -0700\" \"io.openshift.build.commit.id\"=\"288eda3dff43b02f7f7 b6b6b6f93396ffdf34cb2\" \"io.openshift.build.commit.ref\"=\"master\" \" io.openshift.build.commit.message\"=\"trying to modularize\" \"io.openshift .build.source-location\"=\"https://github.com/christianh814/openshift-php-uploa d-demo\" \"io.openshift.build.image\"=\"image-registry.openshift-image-regi stry.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610 c0e05b593844b41d5494ea\" STEP 3: ENV OPENSHIFT_BUILD_NAME=\"file-uploader-1\" OPENSHIFT_BUILD_NAMESP ACE=\"my-shared-storage\" OPENSHIFT_BUILD_SOURCE=\"https://github.com/christ ianh814/openshift-php-upload-demo\" OPENSHIFT_BUILD_COMMIT=\"288eda3dff43b0 2f7f7b6b6b6f93396ffdf34cb2\" STEP 4: USER root STEP 5: COPY upload/src /tmp/src STEP 6: RUN chown -R 1001:0 /tmp/src STEP 7: USER 1001 STEP 8: RUN /usr/libexec/s2i/assemble ---> Installing application source => sourcing 20-copy-config.sh ---> 17:24:39 Processing additional arbitrary httpd configuration provide d by s2i => sourcing 00-documentroot.conf => sourcing 50-mpm-tuning.conf => sourcing 40-ssl-certs.sh STEP 9: CMD /usr/libexec/s2i/run STEP 10: COMMIT temp.builder.openshift.io/my-shared-storage/file-uploader-1:3 b83e447 Getting image source signatures [...]",
"oc expose svc/file-uploader -n my-shared-storage",
"oc scale --replicas=4 deploy/file-uploader -n my-shared-storage",
"oc get pods -o wide -n my-shared-storage",
"oc set volume deploy/file-uploader --add --name=my-shared-storage -t pvc --claim-mode=ReadWriteMany --claim-size=10Gi --claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs --mount-path=/opt/app-root/src/uploaded -n my-shared-storage",
"oc get pvc -n my-shared-storage",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE my-shared-storage Bound pvc-5402cc8a-e874-4d7e-af76-1eb05bd2e7c7 10Gi RWX ocs-storagecluster-cephfs 52s",
"oc expose svc/file-uploader -n my-shared-storage",
"oc scale --replicas=4 deploy/file-uploader -n my-shared-storage",
"oc get pods -o wide -n my-shared-storage",
"oc set volume deploy/file-uploader --add --name=my-shared-storage -t pvc --claim-mode=ReadWriteMany --claim-size=10Gi --claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs --mount-path=/opt/app-root/src/uploaded -n my-shared-storage",
"oc get pvc -n my-shared-storage",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE my-shared-storage Bound pvc-5402cc8a-e874-4d7e-af76-1eb05bd2e7c7 10Gi RWX ocs-storagecluster-cephfs 52s",
"oc get deployment file-uploader -o yaml -n my-shared-storage | less",
"[...] spec: progressDeadlineSeconds: 600 replicas: 4 revisionHistoryLimit: 10 selector: matchLabels: deployment: file-uploader strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: annotations: openshift.io/generated-by: OpenShiftNewApp creationTimestamp: null labels: deployment: file-uploader spec: # <-- Start inserted lines after here containers: # <-- End inserted lines before here - image: image-registry.openshift-image-registry.svc:5000/my-shared-storage/file-uploader@sha256:a458ea62f990e431ad7d5f84c89e2fa27bdebdd5e29c5418c70c56eb81f0a26b imagePullPolicy: IfNotPresent name: file-uploader [...]",
"oc edit deployment file-uploader -n my-shared-storage",
"[...] spec: topologySpreadConstraints: - labelSelector: matchLabels: deployment: file-uploader maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: deployment: file-uploader maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: ScheduleAnyway nodeSelector: node-role.kubernetes.io/worker: \"\" containers: [...]",
"deployment.apps/file-uploader edited",
"oc scale deployment file-uploader --replicas=0 -n my-shared-storage",
"deployment.apps/file-uploader scaled",
"oc scale deployment file-uploader --replicas=4 -n my-shared-storage",
"deployment.apps/file-uploader scaled",
"oc get pods -o wide -n my-shared-storage | egrep '^file-uploader'| grep -v build | awk '{print USD7}' | sort | uniq -c",
"1 perf1-mz8bt-worker-d2hdm 1 perf1-mz8bt-worker-k68rv 1 perf1-mz8bt-worker-ntkp8 1 perf1-mz8bt-worker-qpwsr",
"oc get nodes -L topology.kubernetes.io/zone | grep datacenter | grep -v master",
"perf1-mz8bt-worker-d2hdm Ready worker 35d v1.20.0+5fbfd19 datacenter1 perf1-mz8bt-worker-k68rv Ready worker 35d v1.20.0+5fbfd19 datacenter1 perf1-mz8bt-worker-ntkp8 Ready worker 35d v1.20.0+5fbfd19 datacenter2 perf1-mz8bt-worker-qpwsr Ready worker 35d v1.20.0+5fbfd19 datacenter2",
"oc get route file-uploader -n my-shared-storage -o jsonpath --template=\"http://{.spec.host}{'\\n'}\"",
"http://file-uploader-my-shared-storage.apps.cluster-ocs4-abdf.ocs4-abdf.sandbox744.opentlc.com",
"oc delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>",
"oc patch -n openshift-storage pv/ <PV_NAME> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge",
"oc describe pod <PODNAME> --namespace <NAMESPACE>",
"[...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m5s default-scheduler Successfully assigned openshift-storage/noobaa-db-pg-0 to perf1-mz8bt-worker-d2hdm Warning FailedAttachVolume 4m5s attachdetach-controller Multi-Attach error for volume \"pvc-0595a8d2-683f-443b-aee0-6e547f5f5a7c\" Volume is already exclusively attached to one node and can't be attached to another"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/introduction-to-stretch-cluster-disaster-recovery_stretch-cluster |
Chapter 54. File Systems | Chapter 54. File Systems Mounting a non-existent NFS export outputs a different error than in RHEL 6 The mount utility prints the operation not permitted error message when an NFS client is trying to mount a server export that does not exist. In Red Hat Enterprise Linux 6, the access denied message was printed in the same situation. (BZ#1428549) XFS disables per-inode DAX functionality Per-inode direct access (DAX) options are now disabled in the XFS file system due to unresolved issues with this feature. XFS now ignores existing per-inode DAX flags on the disk. You can still set file system DAX behavior using the dax mount option: (BZ#1623150) | [
"mount -o dax device mount-point"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/known_issues_file_systems |
Chapter 4. Configuring Camel K integrations | Chapter 4. Configuring Camel K integrations There are two configuration phases in a Camel K integration life cycle: Build time - When Camel Quarkus builds a Camel K integration, it consumes build-time properties. Runtime - When a Camel K integration is running, the integration uses runtime properties or configuration information from local files, OpenShift ConfigMaps, or Secrets. You provide configuration information by using the following options with the kamel run command: For build-time configuration, use the --build-property option as described in Specifying build-time configuration properties For runtime configuration, use the --property , --config , or --resource options as described in Specifying runtime configuration options For example, you can use build-time and runtime options to quickly configure a datasource in Camel K as shown in the Connect Camel K with databases sample configuration. Section 4.1, "Specifying build-time configuration properties" Section 4.2, "Specifying runtime configuration options" Section 4.3, "Configuring Camel integration components" Section 4.4, "Configuring Camel K integration dependencies" 4.1. Specifying build-time configuration properties You might need to provide property values to the Camel Quarkus runtime so that it can build a Camel K integration. For more information about Quarkus configurations that take effect during build time, see the Quarkus Build Time configuration documentation . You can specify build-time properties directly at the command line or by referencing a property file. If a property is defined in both places, the value specified directly at the command line takes precedence over the value in the property file. Prerequisites You must have access to an OpenShift cluster on which the Camel K Operator and OpenShift Serverless Operator are installed: Installing Camel K Installing OpenShift Serverless from the OperatorHub You know the Camel Quarkus configuration options that you want to apply to your Camel K integration. Procedure Specify the --build-property option with the Camel K kamel run command: For example, the following Camel K integration (named my-simple-timer.yaml ) uses the quarkus.application.name configuration option: To override the default application name, specify a value for the quarkus.application.name property when you run the integration. For example, to change the name from my-simple-timer to my-favorite-app : To provide more than one build-time property, add additional --build-property options to the kamel run command: Alternately, if you need to specify multiple properties, you can create a property file and specify the property file with the --build-property file option: For example, the following property file (named quarkus.properties ) defines two Quarkus properties: The quarkus.banner.enabled property specifies that the Quarkus banner is displayed when the integration starts up. To specify the quarkus.properties file with the Camel K kamel run command: Quarkus parses the property file and uses the property values to configure the Camel K integration. Additional resources For information about Camel Quarkus as the runtime for Camel K integrations, see Quarkus Trait . 4.2. Specifying runtime configuration options You can specify the following runtime configuration information for a Camel K integration to use when it is running: Runtime properties that you provide at the command line or in a .properties file. 
Configuration values that you want the Camel K operator to process and parse as runtime properties when the integration starts. You can provide the configuration values in a local text file, an OpenShift ConfigMap, or an OpenShift secret. Resource information that is not parsed as a property file when the integration starts. You can provide resource information in a local text file, a binary file, an OpenShift ConfigMap, or an OpenShift secret. Use the following kamel run options: --property Use the --property option to specify runtime properties directly at the command line or by referencing a Java *.properties file. The Camel K operator appends the contents of the properties file to the running integration's user.properties file. --config Use the --config option to provide configuration values that you want the Camel K operator to process and parse as runtime properties when the integration starts. You can provide a local text file (1 MiB maximum file size), a ConfigMap (3MB) or a Secret (3MB). The file must be a UTF-8 resource. The materialized file (that is generated at integration startup from the file that you provide) is made available at the classpath level so that you can reference it in your integration code without having to provide an exact location. Note: If you need to provide a non-UTF-8 resource (for example, a binary file), use the --resource option. --resource Use the --resource option to provide a resource for the integration to access when it is running. You can provide a local text or a binary file (1 MiB maximum file size), a ConfigMap (3MB maximum), or a Secret (3MB maximum). Optionally, you can specify the destination of the file that is materialized for the resource. For example, if you want to set an HTTPS connection, use the --resource option to provide an SSL certificate (a binary file) that is expected in a specified location. The Camel K operator does not parse the resource for properties and does not add the resource to the classpath. (If you want to add the resource to the classpath, you can use the JVM trait in your integration). 4.2.1. Providing runtime properties You can specify runtime properties directly at the command line or by referencing a Java *.properties file by using the kamel run command's --property option. When you run an integration with the --property option, the Camel K operator appends the properties to the running integration's user.properties file. 4.2.1.1. Providing runtime properties at the command line You can configure properties for Camel K integrations on the command line at runtime. When you define a property in an integration by using a property placeholder, for example, {{my.message}} , you can specify the property value on the command line, for example --property my.message=Hello . You can specify multiple properties in a single command. Prerequisites Setting up your Camel K development environment Procedure Develop a Camel integration that uses a property. The following simple example includes a {{my.message}} property placeholder: Run the integration by using the following syntax to set the property value at runtime. Alternately, you can use the --p shorthand notation (in place of --property ): For example: or Here is the example result: See also Providing runtime properties in a property file 4.2.1.2. Providing runtime properties in a property file You can configure multiple properties for Camel K integrations by specifying a property file ( *.properties ) on the command line at runtime. 
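As a rough sketch of what that looks like (the file and integration names here are illustrative), the run command points the --property option at the file:
# Pass all key=value pairs from the properties file to the integration at runtime.
kamel run --property file:my-integration.properties MyRoute.java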
When you define properties in an integration using property placeholders, for example, {{my.items}} , you can specify the property values on the command line by using a properties file, for example --p file my-integration.properties . Prerequisite Setting up your Camel K development environment Procedure Create an integration properties file. The following example is from a file named my.properties : my.key.1=hello my.key.2=world Develop a Camel integration that uses properties that are defined in the properties file. The following example Routing.java integration uses the {{my.key.1}} and {{my.key.2=world}} property placeholders: import org.apache.camel.builder.RouteBuilder; public class Routing extends RouteBuilder { @Override public void configure() throws Exception { from("timer:property-file") .routeId("property-file") .log("property file content is: {{my.key.1}} {{my.key.2}}"); } } Run the integration by using the following syntax to reference the property file: Alternately, you can use the --p shorthand notation (in place of --property ): For example: Additional resources Deploying a basic Camel K Java integration Providing runtime properties at the command line 4.2.2. Providing configuration values You can provide configuration values that you want the Camel K operator to process and parse as runtime properties by using the kamel run command's --config option. You can provide the configuration values in a local text (UTF-8) file, an OpenShift ConfigMap, or an OpenShift secret. When you run the integration, the Camel K operator materializes the provided file and adds it to the classpath so that you can reference the configuration values in your integration code without having to provide an exact location. 4.2.2.1. Specifying a text file If you have a UTF-8 text file that contains configuration values, you can use the --config file:/path/to/file option to make the file available (with the same file name) on the running integration's classpath. Prerequisites Setting up your Camel K development environment You have one or more (non-binary) text files that contain configuration values. For example, create a file named resources-data.txt that contains the following line of text: Procedure Create a Camel K integration that references the text file that contains configuration values. For example, the following integration ( ConfigFileRoute.java ) expects the resources-data.txt file to be available on the classpath at runtime: import org.apache.camel.builder.RouteBuilder; public class ConfigFileRoute extends RouteBuilder { @Override public void configure() throws Exception { from("timer:config-file") .setBody() .simple("resource:classpath:resources-data.txt") .log("resource file content is: USD{body}"); } } Run the integration and use the --config option to specify the text file so that it is available to the running integration. For example: Optionally, you can provide more than one file by adding the --config option repeatedly, for example: 4.2.2.2. Specifying a ConfigMap If you have an OpenShift ConfigMap that contains configuration values, and you need to materialize a ConfigMap so that it is available to your Camel K integration, use the --config configmap:<configmap-name> syntax. Prerequisites Setting up your Camel K development environment You have one or more ConfigMap files stored on your OpenShift cluster. For example, you can create a ConfigMap by using the following command: Procedure Create a Camel K integration that references the ConfigMap. 
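If the ConfigMap from the prerequisite is not in place yet, a command along these lines creates one that matches the example below; the value string is illustrative:
# Create a ConfigMap named my-cm with a single key that the integration reads from the classpath.
oc create configmap my-cm --from-literal=my-configmap-key="some configmap content"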
For example, the following integration (named ConfigConfigmapRoute.java ) references a configuration value named my-configmap-key in a ConfigMap named my-cm . import org.apache.camel.builder.RouteBuilder; public class ConfigConfigmapRoute extends RouteBuilder { @Override public void configure() throws Exception { from("timer:configmap") .setBody() .simple("resource:classpath:my-configmap-key") .log("configmap content is: USD{body}"); } } Run the integration and use the --config option to materialize the ConfigMap file so that it is available to the running integration. For example: When the integration starts, the Camel K operator mounts an OpenShift volume with the ConfigMap's content. Note: If you specify a ConfigMap that is not yet available on the cluster, the Integration waits and starts only after the ConfigMap becomes available. 4.2.2.3. Specifying a Secret You can use an OpenShift Secret to securely contain configuration information. To materialize a secret so that it is available to your Camel K integration, you can use the --config secret syntax. Prerequisites Setting up your Camel K development environment You have one or more Secrets stored on your OpenShift cluster. For example, you can create a Secret by using the following command: Procedure Create a Camel K integration that references the Secret. For example, the following integration (named ConfigSecretRoute.java ) references the my-secret property that is in a Secret named my-sec : import org.apache.camel.builder.RouteBuilder; public class ConfigSecretRoute extends RouteBuilder { @Override public void configure() throws Exception { from("timer:secret") .setBody() .simple("resource:classpath:my-secret") .log("secret content is: USD{body}"); } } Run the integration and use the --config option to materialize the Secret so that it is available to the running integration. For example: When the integration starts, the Camel K operator mounts an OpenShift volume with the Secret's content. 4.2.2.4. Referencing properties that are contained in ConfigMaps or Secrets When you run an integration and you specify a ConfigMap or Secret with the --config option, the Camel K operator parses the ConfigMap or Secret as a runtime property file. Within your integration, you can reference the properties as you would reference any other runtime property. Prerequisite Setting up your Camel K development environment Procedure Create a text file that contains properties. For example, create a file named my.properties that contains the following properties: Create a ConfigMap or a Secret based on the properties file. For example, use the following command to create a secret from the my.properties file: In the integration, refer to the properties defined in the Secret. For example, the following integration (named ConfigSecretPropertyRoute.java ) references the my.key.1 and my.key.2 properties: import org.apache.camel.builder.RouteBuilder; public class ConfigSecretPropertyRoute extends RouteBuilder { @Override public void configure() throws Exception { from("timer:secret") .routeId("secret") .log("{{my.key.1}} {{my.key.2}}"); } } Run the integration and use the --config option to specify the Secret that contains the my.key.1 and my.key.2 properties. For example: 4.2.2.5. Filtering configuration values obtained from a ConfigMap or Secret ConfigMaps and Secrets can hold more than one source. 
For example, the following command creates a secret ( my-sec-multi ) from two sources: You can limit the quantity of information that your integration retrieves to just one source by using the /key notation after with the --config configmap or --config secret options. Prerequisites Setting up your Camel K development environment You have a ConfigMap or a Secret that holds more than one source. Procedure Create an integration that uses configuration values from only one of the sources in the ConfigMap or Secret. For example, the following integration ( ConfigSecretKeyRoute.java ) uses the property from only one of the sources in the my-sec-multi secret. import org.apache.camel.builder.RouteBuilder; public class ConfigSecretKeyRoute extends RouteBuilder { @Override public void configure() throws Exception { from("timer:secret") .setBody() .simple("resource:classpath:my-secret-key-2") .log("secret content is: USD{body}"); } } Run the integration by using the --config secret option and the /key notation. For example: Check the integration pod to verify that only the specified source (for example, my-secret-key-2 ) is mounted. For example, run the following command to list all volumes for a pod: 4.2.3. Providing resources to a running integration You can provide a resource for the integration to use when it is running by specifying the kamel run command's --resource option. You can specify a local text file (1 MiB maximum file size), a ConfigMap (3MB) or a Secret (3MB). You can optionally specify the destination of the file that is materialized for the resource. For example, if you want to set an HTTPS connection, you use the --resource option because you must provide an SSL certificate which is a binary file that is expected in a known location. When you use the --resource option, the Camel K operator does not parse the resource looking for runtime properties and it does not add the resource to the classpath. (If you want to add the resource to the classpath, you can use the JVM trait . 4.2.3.1. Specifying a text or binary file as a resource If you have a text or binary file that contains configuration values, you can use the --resource file:/path/to/file option to materialize the file. By default, the Camel K operator copies the materialized file to the /etc/camel/resources/ directory. Optionally, you can specify a different destination directory as described in Specifying a destination path for a resource . Prerequisites Setting up your Camel K development environment You have one or more text or binary files that contain configuration properties. Procedure Create a Camel K integration that reads the contents of a file that you provide. For example, the following integration ( ResourceFileBinaryRoute.java ) unzips and reads the resources-data.zip file: import org.apache.camel.builder.RouteBuilder; public class ResourceFileBinaryRoute extends RouteBuilder { @Override public void configure() throws Exception { from("file:/etc/camel/resources/?fileName=resources-data.zip&noop=true&idempotent=false") .unmarshal().zipFile() .log("resource file unzipped content is: USD{body}"); } } Run the integration and use the --resource option to copy the file to the default destination directory ( /etc/camel/resources/ ). For example: Note: If you specify a binary file, a binary representation of the contents of the file is created and decoded transparently in the integration. Optionally, you can provide more than one resource by adding the --resource option repeatedly, for example: 4.2.3.2. 
Specifying a ConfigMap as a resource If you have an OpenShift ConfigMap that contains configuration values, and you need to materialize the ConfigMap as a resource for an integration, use the --resource <configmap-file> option. Prerequisites Setting up your Camel K development environment You have one or more ConfigMap files stored on your OpenShift cluster. For example, you can create a ConfigMap by using the following command: Procedure Create a Camel K integration that references a ConfigMap stored on your OpenShift cluster. For example, the following integration (named ResourceConfigmapRoute.java ) references a ConfigMap named my-cm that contains my-configmap-key . import org.apache.camel.builder.RouteBuilder; public class ResourceConfigmapRoute extends RouteBuilder { @Override public void configure() throws Exception { from("file:/etc/camel/resources/my-cm/?fileName=my-configmap-key&noop=true&idempotent=false") .log("resource file content is: USD{body}"); } } Run the integration and use the --resource option to materialize the ConfigMap file in the default /etc/camel/resources/ directory so that it is available to the running integration. For example: When the integration starts, the Camel K operator mounts a volume with the ConfigMap's content (for example, my-configmap-key ). Note: If you specify a ConfigMap that is not yet available on the cluster, the Integration waits and starts only after the ConfigMap becomes available. 4.2.3.3. Specifying a Secret as a resource If you have an OpenShift Secret that contains configuration information, and you need to materialize it as a resource that is available to one or more integrations, use the --resource <secret> syntax. Prerequisites Setting up your Camel K development environment You have one or more Secrets files stored on your OpenShift cluster. For example, you can create a Secret by using the following command: Procedure Create a Camel K integration that references a Secret stored on your OpenShift cluster. For example, the following integration (named ResourceSecretRoute.java ) references the my-sec Secret: import org.apache.camel.builder.RouteBuilder; public class ResourceSecretRoute extends RouteBuilder { @Override public void configure() throws Exception { from("file:/etc/camel/resources/my-sec/?fileName=my-secret-key&noop=true&idempotent=false") .log("resource file content is: USD{body}"); } } Run the integration and use the --resource option to materialize the Secret in the default /etc/camel/resources/ directory so that it is available to the running integration. For example: When the integration starts, the Camel K operator mounts a volume with the Secret's content (for example, my-sec ). Note: If you specify a Secret that is not yet available on the cluster, the Integration waits and starts only after the Secret becomes available. 4.2.3.4. Specifying a destination path for a resource The /etc/camel/resources/ directory is the default location for mounting a resource that you specify with the --resource option. If you need to specify a different directory on which to mount a resource, use the --resource @path syntax. Prerequisites Setting up your Camel K development environment You have a file, ConfigMap, or Secret that contains one or more configuration properties. Procedure Create a Camel K integration that references the file, ConfigMap or Secret that contains configuration properties. 
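If you want to follow along with a local file, a small properties file like the one the next example reads can be created first, and the run command later in this procedure mounts it at a custom path with the @ syntax. Both lines below are an illustrative sketch rather than the guide's verbatim commands:
# Create a small local properties file for the example integration.
echo "key=value" > myprops
# Mount it inside the integration pod as /tmp/input.txt using the @ destination syntax.
kamel run --resource file:myprops@/tmp/input.txt ResourceFileLocationRoute.java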
For example, the following integration (named ResourceFileLocationRoute.java ) reads the input.txt file mounted in the /tmp directory: import org.apache.camel.builder.RouteBuilder; public class ResourceFileLocationRoute extends RouteBuilder { @Override public void configure() throws Exception { from("file:/tmp/?fileName=input.txt&noop=true&idempotent=false") .log("resource file content is: USD{body}"); } } Run the integration and use the --resource option with the @path syntax and specify where to mount the resource content (either a file, ConfigMap or Secret): For example, the following command specifies to use the /tmp directory to mount the input.txt file: Check the integration's pod to verify that the file (for example, input.txt ) was mounted in the correct location (for example, in the /tmp directory). For example, run the following command: 4.2.3.5. Filtering ConfigMap or Secret data When you create a ConfigMap or a Secret, you can specify more than one source of information. For example, the following command creates a ConfigMap (named my-cm-multi ) from two sources: When you run an integration with the --resource option and specify a ConfigMap or Secret that was created with more than one source, by default, both sources are materialized. If you want to limit the quantity of information to retrieve from a ConfigMap or Secret, you can specify the --resource option's /key notation after the ConfigMap or Secret name. For example, --resource configmap:my-cm/my-key or --resource secret:my-secret/my-key . In other words, you can limit the quantity of information that your integration retrieves to just one source by appending the /key notation to the ConfigMap or Secret name in the --resource configmap or --resource secret options. Prerequisites Setting up your Camel K development environment You have a ConfigMap or a Secret that holds values from more than one source. Procedure Create an integration that uses configuration values from only one of the resources in the ConfigMap or Secret. For example, the following integration (named ResourceConfigmapKeyLocationRoute.java ) references the my-cm-multi ConfigMap: import org.apache.camel.builder.RouteBuilder; public class ResourceConfigmapKeyLocationRoute extends RouteBuilder { @Override public void configure() throws Exception { from("file:/tmp/app/data/?fileName=my-configmap-key-2&noop=true&idempotent=false") .log("resource file content is: USD{body} consumed from USD{header.CamelFileName}"); } } Run the integration and use the --resource option with the @path syntax and specify where to mount the source content (either a file, ConfigMap or Secret): For example, the following command specifies to use only one of the sources ( my-configmap-key-2@ ) contained within the ConfigMap and to use the /tmp/app/data directory to mount it: Check the integration's pod to verify that only one file (for example, my-configmap-key-2 ) was mounted in the correct location (for example, in the /tmp/app/data directory). For example, run the following command: 4.3. Configuring Camel integration components You can configure Camel components programmatically in your integration code or by using configuration properties on the command line at runtime.
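As a sketch of the command-line approach, the following minimal example groups two component options in a property file; the property names follow the camel.component naming pattern that is described next, the file name application.properties is an arbitrary example, and the concurrentConsumers option is an assumption about what the SEDA component exposes rather than a value taken from this guide:

# application.properties (hypothetical file name)
camel.component.seda.queueSize=10
# assumption: the SEDA component also exposes a concurrentConsumers option
camel.component.seda.concurrentConsumers=2

# pass the whole file to the integration at run time
kamel run --property file:application.properties Integration.java --dev

Grouping several component options in one property file keeps the kamel run command short when you tune more than one component.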
You can configure Camel components using the following syntax: camel.component.USD{scheme}.USD{property}=USD{value} For example, to change the queue size of the Camel seda component for staged event-driven architecture, you can configure the following property on the command line: camel.component.seda.queueSize=10 Prerequisites Setting up your Camel K development environment Procedure Enter the kamel run command and specify the Camel component configuration using the --property option. For example: kamel run --property camel.component.seda.queueSize=10 examples/Integration.java Additional resources Providing runtime properties at the command line Apache Camel SEDA component 4.4. Configuring Camel K integration dependencies Camel K automatically resolves a wide range of dependencies that are required to run your integration code. However, you can explicitly add dependencies on the command line at runtime using the kamel run --dependency option. The following example integration uses Camel K automatic dependency resolution: ... from("imap://[email protected]") .to("seda:output") ... Because this integration has an endpoint starting with the imap: prefix, Camel K can automatically add the camel-mail component to the list of required dependencies. The seda: endpoint belongs to camel-core , which is automatically added to all integrations, so Camel K does not add additional dependencies for this component. Camel K automatic dependency resolution is transparent to the user at runtime. This is very useful in development mode because you can quickly add all the components that you need without exiting the development loop. You can explicitly add a dependency using the kamel run --dependency or -d option. You might need to use this to specify dependencies that are not included in the Camel catalog. You can specify multiple dependencies on the command line. Prerequisites Setting up your Camel K development environment Procedure Enter the kamel run command and specify dependencies using the -d option. For example: kamel run -d mvn:com.google.guava:guava:26.0-jre -d camel-mina2 Integration.java Note You can disable automatic dependency resolution by disabling the dependencies trait: --trait dependencies.enabled=false . However, this is not recommended in most cases. Types of Dependencies The -d flag of the kamel run command is flexible and supports multiple kinds of dependencies. Camel dependencies can be added directly using the -d flag like this: In this case, the dependency will be added with the correct version. Note that the standard notation for specifying a Camel dependency is camel:xxx , while kamel also accepts camel-xxx for usability. You can add external dependencies using the -d flag, the mvn prefix, and the Maven coordinates: Note that if your dependencies belong to a private repository, this repository must be defined. See Configure maven . You can add local dependencies using the -d flag and the file:// prefix. The content of integration-dep.jar will then be accessible in your integration for you to use. You can also specify data files to be mounted in the running container: Specifying a directory will work recursively. Note that this feature relies on the Image Registry being set up correctly. Jitpack Dependencies If your dependency is not published in a Maven repository, you will find Jitpack as a way to provide any custom dependency to your runtime Integration environment.
On certain occasions, you will find it useful to include not only your route definition, but also some helper class or any other class that is needed when defining the Integration behavior. With Jitpack, you can compile a Java project hosted in a remote repository on the fly and use the produced package as a dependency of your Integration. The usage is the same as defined above for any Maven dependency. It can be added using the -d flag, but, this time, you need to define the prefix as expected for the project repository you are using (that is, github ). It has to be provided in the form repository-kind:user/repo/version. As an example, you can provide the Apache Commons CSV dependency by executing: We support the most important public code repositories: You can omit the version when you want to use the main branch. Otherwise, the version represents the branch or tag used in the project repository. Dynamic URIs Camel K does not always discover all of your dependencies. When you create a URI dynamically, you must instruct Camel K which component to load (using the -d parameter). The following code snippet illustrates this. DynamicURI.java Here, the from URI is created dynamically from variables that are resolved at runtime. In cases like this, you must specify the component and the related dependency to load into the Integration. Additional resources Running Camel K integrations in development mode Camel K trait and profile configuration Apache Camel Mail component Apache Camel SEDA component | [
"kamel run --build-property <quarkus-property>=<property-value> <camel-k-integration>",
"- from: uri: \"timer:tick\" steps: - set-body: constant: \"{{quarkus.application.name}}\" - to: \"log:info\"",
"kamel run --build-property quarkus.application.name=my-favorite-app my-simple-timer.yaml",
"kamel run --build-property <quarkus-property1>=<property-value1> -build-property=<quarkus-property2>=<property-value12> <camel-k-integration>",
"kamel run --build-property file:<property-filename> <camel-k-integration>",
"quarkus.application.name = my-favorite-app quarkus.banner.enabled = true",
"kamel run --build-property file:quarkus.properties my-simple-timer.yaml",
"- from: uri: \"timer:tick\" steps: - set-body: constant: \"{{my.message}}\" - to: \"log:info\"",
"kamel run --property <property>=<value> <integration>",
"kamel run --property <property>=<value> <integration>",
"kamel run --property my.message=\"Hola Mundo\" HelloCamelK.java --dev",
"kamel run --p my.message=\"Hola Mundo\" HelloCamelK.java --dev",
"[1] 2020-04-13 15:39:59.213 INFO [main] ApplicationRuntime - Listener org.apache.camel.k.listener.RoutesDumper@6e0dec4a executed in phase Started [1] 2020-04-13 15:40:00.237 INFO [Camel (camel-k) thread #1 - timer://java] info - Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hola Mundo from java]",
"my.key.1=hello my.key.2=world",
"import org.apache.camel.builder.RouteBuilder; public class Routing extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:property-file\") .routeId(\"property-file\") .log(\"property file content is: {{my.key.1}} {{my.key.2}}\"); } }",
"kamel run --property file:<my-file.properties> <integration>",
"kamel run --p file:<my-file.properties> <integration>",
"kamel run Routing.java --property:file=my.properties --dev",
"the file body",
"import org.apache.camel.builder.RouteBuilder; public class ConfigFileRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:config-file\") .setBody() .simple(\"resource:classpath:resources-data.txt\") .log(\"resource file content is: USD{body}\"); } }",
"kamel run --config file:resources-data.txt ConfigFileRoute.java --dev",
"kamel run --config file:resources-data1.txt --config file:resources-data2.txt ConfigFileRoute.java --dev",
"create configmap my-cm --from-literal=my-configmap-key=\"configmap content\"",
"import org.apache.camel.builder.RouteBuilder; public class ConfigConfigmapRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:configmap\") .setBody() .simple(\"resource:classpath:my-configmap-key\") .log(\"configmap content is: USD{body}\"); } }",
"kamel run --config configmap:my-cm ConfigConfigmapRoute.java --dev",
"create secret generic my-sec --from-literal=my-secret-key=\"very top secret\"",
"import org.apache.camel.builder.RouteBuilder; public class ConfigSecretRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:secret\") .setBody() .simple(\"resource:classpath:my-secret\") .log(\"secret content is: USD{body}\"); } }",
"kamel run --config secret:my-sec ConfigSecretRoute.java --dev",
"my.key.1=hello my.key.2=world",
"create secret generic my-sec --from-file my.properties",
"import org.apache.camel.builder.RouteBuilder; public class ConfigSecretPropertyRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:secret\") .routeId(\"secret\") .log(\"{{my.key.1}} {{my.key.2}}\"); } }",
"kamel run --config secret:my-sec ConfigSecretPropertyRoute.java --dev",
"create secret generic my-sec-multi --from-literal=my-secret-key=\"very top secret\" --from-literal=my-secret-key-2=\"even more secret\"",
"import org.apache.camel.builder.RouteBuilder; public class ConfigSecretKeyRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:secret\") .setBody() .simple(\"resource:classpath:my-secret-key-2\") .log(\"secret content is: USD{body}\"); } }",
"kamel run --config secret:my-sec-multi/my-secret-key-2 ConfigSecretKeyRoute.java --dev",
"set volume pod/<pod-name> --all",
"import org.apache.camel.builder.RouteBuilder; public class ResourceFileBinaryRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"file:/etc/camel/resources/?fileName=resources-data.zip&noop=true&idempotent=false\") .unmarshal().zipFile() .log(\"resource file unzipped content is: USD{body}\"); } }",
"kamel run --resource file:resources-data.zip ResourceFileBinaryRoute.java -d camel-zipfile --dev",
"kamel run --resource file:resources-data1.txt --resource file:resources-data2.txt ResourceFileBinaryRoute.java -d camel-zipfile --dev",
"create configmap my-cm --from-literal=my-configmap-key=\"configmap content\"",
"import org.apache.camel.builder.RouteBuilder; public class ResourceConfigmapRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"file:/etc/camel/resources/my-cm/?fileName=my-configmap-key&noop=true&idempotent=false\") .log(\"resource file content is: USD{body}\"); } }",
"kamel run --resource configmap:my-cm ResourceConfigmapRoute.java --dev",
"create secret generic my-sec --from-literal=my-secret-key=\"very top secret\"",
"import org.apache.camel.builder.RouteBuilder; public class ResourceSecretRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"file:/etc/camel/resources/my-sec/?fileName=my-secret-key&noop=true&idempotent=false\") .log(\"resource file content is: USD{body}\"); } }",
"kamel run --resource secret:my-sec ResourceSecretRoute.java --dev",
"import org.apache.camel.builder.RouteBuilder; public class ResourceFileLocationRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"file:/tmp/?fileName=input.txt&noop=true&idempotent=false\") .log(\"resource file content is: USD{body}\"); } }",
"kamel run --resource file:resources-data.txt@/tmp/input.txt ResourceFileLocationRoute.java --dev",
"exec <pod-name> -- cat /tmp/input.txt",
"create configmap my-cm-multi --from-literal=my-configmap-key=\"configmap content\" --from-literal=my-configmap-key-2=\"another content\"",
"import org.apache.camel.builder.RouteBuilder; public class ResourceConfigmapKeyLocationRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"file:/tmp/app/data/?fileName=my-configmap-key-2&noop=true&idempotent=false\") .log(\"resource file content is: USD{body} consumed from USD{header.CamelFileName}\"); } }",
"kamel run --resource configmap:my-cm-multi/my-configmap-key-2@/tmp/app/data ResourceConfigmapKeyLocationRoute.java --dev",
"exec <pod-name> -- cat /tmp/app/data/my-configmap-key-2",
"camel.component.USD{scheme}.USD{property}=USD{value}",
"camel.component.seda.queueSize=10",
"kamel run --property camel.component.seda.queueSize=10 examples/Integration.java",
"from(\"imap://[email protected]\") .to(\"seda:output\")",
"kamel run -d mvn:com.google.guava:guava:26.0-jre -d camel-mina2 Integration.java",
"kamel run -d camel:http Integration.java",
"kamel run -d mvn:com.google.guava:guava:26.0-jre Integration.java",
"kamel run -d file://path/to/integration-dep.jar Integration.java",
"kamel run -d file://path/to/data.csv:path/in/container/data.csv Integration.java",
"kamel run -d github:apache/commons-csv/1.1 Integration.java",
"github:user/repo/version gitlab:user/repo/version bitbucket:user/repo/version gitee:user/repo/version azure:user/repo/version",
"String myTopic = \"purchases\" from(\"kafka:\" + myTopic + \"? ... \") .to(...)"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/developing_and_managing_integrations_using_camel_k/configuring-camel-k |
Chapter 1. Upgrade a Red Hat Ceph Storage cluster using cephadm | Chapter 1. Upgrade a Red Hat Ceph Storage cluster using cephadm As a storage administrator, you can use the cephadm Orchestrator to upgrade to Red Hat Ceph Storage 7 with the ceph orch upgrade command. Note Upgrading directly from Red Hat Ceph Storage 5 to Red Hat Ceph Storage 7 is supported. The automated upgrade process follows Ceph best practices. For example: The upgrade order starts with Ceph Managers, Ceph Monitors, then other daemons. Each daemon is restarted only after Ceph indicates that the cluster will remain available. The storage cluster health status is likely to switch to HEALTH_WARNING during the upgrade. When the upgrade is complete, the health status should switch back to HEALTH_OK. Note You do not get a message once the upgrade is successful. Run the ceph versions and ceph orch ps commands to verify the new image ID and the version of the storage cluster. 1.1. Compatibility considerations between RHCS and podman versions podman and Red Hat Ceph Storage have different end-of-life strategies that might make it challenging to find compatible versions. Red Hat recommends using the podman version shipped with the corresponding Red Hat Enterprise Linux version for Red Hat Ceph Storage. See the Red Hat Ceph Storage: Supported configurations knowledge base article for more details. See the Contacting Red Hat support for service section in the Red Hat Ceph Storage Troubleshooting Guide for additional assistance. The following table shows version compatibility between Red Hat Ceph Storage 8 and versions of podman . Ceph Podman 1.9 2.0 2.1 2.2 3.0 >3.0 Red Hat Ceph Storage 8 false true true false true true Warning You must use a version of Podman that is 2.0.0 or higher. 1.2. Upgrading the Red Hat Ceph Storage cluster You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage cluster. Prerequisites Latest version of running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster. At least two Ceph Manager nodes in the storage cluster: one active and one standby. Procedure Register the node, and when prompted, enter your Red Hat Customer Portal credentials: Syntax Pull the latest subscription data from the CDN: Syntax List all available subscriptions for Red Hat Ceph Storage: Syntax Identify the appropriate subscription and retrieve its Pool ID. Attach a pool ID to gain access to the software entitlements. Use the Pool ID you identified in the previous step. Syntax Disable the default software repositories, and then enable the server and the extras repositories on the respective version of Red Hat Enterprise Linux: Red Hat Enterprise Linux 9 Update the system to receive the latest packages for Red Hat Enterprise Linux: Syntax Enable the Ceph Ansible repositories on the Ansible administration node: Red Hat Enterprise Linux 9 Update the cephadm and cephadm-ansible packages: Example Navigate to the /usr/share/cephadm-ansible/ directory: Example Run the preflight playbook with the upgrade_ceph_packages parameter set to true on the bootstrapped host in the storage cluster: Syntax Example This package upgrades cephadm on all the nodes.
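Taken together, the preparation steps above amount to a sequence similar to the following sketch; the pool ID and the inventory file path are placeholders that you replace with values from your environment, and the repository names are the ones listed in the commands for this chapter:

subscription-manager register
subscription-manager refresh
subscription-manager attach --pool=<POOL_ID>
subscription-manager repos --disable=*
subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms
subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms
dnf update
# on the Ansible administration node only
subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms
dnf update cephadm cephadm-ansible
cd /usr/share/cephadm-ansible
ansible-playbook -i /etc/ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"

This recap is only a sketch of the order of operations; follow the Syntax and Example entries for each step for the authoritative commands.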
Log into the cephadm shell: Example Ensure all the hosts are online and that the storage cluster is healthy: Example Set the OSD noout , noscrub , and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster: Example Check service versions and the available target containers: Syntax Example Note The image name is applicable for both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9. Upgrade the storage cluster: Syntax Example Note To perform a staggered upgrade, see Performing a staggered upgrade . While the upgrade is underway, a progress bar appears in the ceph status output. Example Verify the new IMAGE_ID and VERSION of the Ceph cluster: Example Note If you are not using the cephadm-ansible playbooks, after upgrading your Ceph cluster, you must upgrade the ceph-common package and client libraries on your client nodes. Example Verify you have the latest version: Example When the upgrade is complete, unset the noout , noscrub , and nodeep-scrub flags: Example 1.3. Upgrading the Red Hat Ceph Storage cluster in a disconnected environment You can upgrade the storage cluster in a disconnected environment by using the --image tag. You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage cluster. Prerequisites Latest version of running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster. At least two Ceph Manager nodes in the storage cluster: one active and one standby. Register the nodes to CDN and attach subscriptions. Check for the custom container images in a disconnected environment and change the configuration, if required. See the Changing configurations of custom container images for disconnected installations section in the Red Hat Ceph Storage Installation Guide for more details. By default, the monitoring stack components are deployed based on the primary Ceph image. For a disconnected environment of the storage cluster, you have to use the latest available monitoring stack component images. Table 1.1. Custom image details for monitoring stack Monitoring stack component Image details Prometheus registry.redhat.io/openshift4/ose-prometheus:v4.12 Grafana registry.redhat.io/rhceph/grafana-rhel9:latest Node-exporter registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.12 AlertManager registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.12 HAProxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest Keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest SNMP Gateway registry.redhat.io/rhceph/snmp-notifier-rhel9:latest Procedure Enable the Ceph Ansible repositories on the Ansible administration node: Red Hat Enterprise Linux 9 Update the cephadm and cephadm-ansible packages. Example Navigate to the /usr/share/cephadm-ansible/ directory: Example Run the preflight playbook with the upgrade_ceph_packages parameter set to true and the ceph_origin parameter set to custom on the bootstrapped host in the storage cluster: Syntax Example This package upgrades cephadm on all the nodes.
Log into the cephadm shell: Example Ensure all the hosts are online and that the storage cluster is healthy: Example Set the OSD noout , noscrub , and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster: Example Check service versions and the available target containers: Syntax Example Upgrade the storage cluster: Syntax Example While the upgrade is underway, a progress bar appears in the ceph status output. Example Verify the new IMAGE_ID and VERSION of the Ceph cluster: Example When the upgrade is complete, unset the noout , noscrub , and nodeep-scrub flags: Example Additional Resources See the Registering Red Hat Ceph Storage nodes to the CDN and attaching subscriptions section in the Red Hat Ceph Storage Installation Guide . See the Configuring a private registry for a disconnected installation section in the Red Hat Ceph Storage Installation Guide . | [
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches 'Red Hat Ceph Storage'",
"subscription-manager attach --pool=POOL_ID",
"subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms",
"dnf update",
"subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms",
"dnf update cephadm dnf update cephadm-ansible",
"cd /usr/share/cephadm-ansible",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs upgrade_ceph_packages=true\"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i /etc/ansible/hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs upgrade_ceph_packages=true\"",
"cephadm shell",
"ceph -s",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph orch upgrade check IMAGE_NAME",
"ceph orch upgrade check registry.redhat.io/rhceph/rhceph-8-rhel9:latest",
"ceph orch upgrade start IMAGE_NAME",
"ceph orch upgrade start registry.redhat.io/rhceph/rhceph-8-rhel9:latest",
"ceph status [...] progress: Upgrade to 18.2.0-128.el9cp (1s) [............................]",
"ceph versions ceph orch ps",
"[root@client01 ~] dnf update ceph-common",
"[root@client01 ~] ceph --version",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub",
"subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms",
"dnf update cephadm dnf update cephadm-ansible",
"cd /usr/share/cephadm-ansible",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom upgrade_ceph_packages=true\"",
"[ceph-admin@admin ~]USD ansible-playbook -i /etc/ansible/hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom upgrade_ceph_packages=true\"",
"cephadm shell",
"ceph -s",
"ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph orch upgrade check IMAGE_NAME",
"ceph orch upgrade check LOCAL_NODE_FQDN :5000/rhceph/rhceph-8-rhel9",
"ceph orch upgrade start IMAGE_NAME",
"ceph orch upgrade start LOCAL_NODE_FQDN :5000/rhceph/rhceph-8-rhel9",
"ceph status [...] progress: Upgrade to 18.2.0-128.el9cp (1s) [............................]",
"ceph version ceph versions ceph orch ps",
"ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/upgrade_guide/upgrade-a-red-hat-ceph-storage-cluster-using-cephadm |
Chapter 2. Changing SELinux states and modes | Chapter 2. Changing SELinux states and modes When enabled, SELinux can run in one of two modes: enforcing or permissive. The following sections show how to permanently change into these modes. 2.1. Permanent changes in SELinux states and modes As discussed in SELinux states and modes , SELinux can be enabled or disabled. When enabled, SELinux has two modes: enforcing and permissive. Use the getenforce or sestatus commands to check in which mode SELinux is running. The getenforce command returns Enforcing , Permissive , or Disabled . The sestatus command returns the SELinux status and the SELinux policy being used: Warning When systems run SELinux in permissive mode, users and processes might label various file-system objects incorrectly. File-system objects created while SELinux is disabled are not labeled at all. This behavior causes problems when changing to enforcing mode because SELinux relies on correct labels of file-system objects. To prevent incorrectly labeled and unlabeled files from causing problems, SELinux automatically relabels file systems when changing from the disabled state to permissive or enforcing mode. Use the fixfiles -F onboot command as root to create the /.autorelabel file containing the -F option to ensure that files are relabeled upon reboot. Before rebooting the system for relabeling, make sure the system will boot in permissive mode, for example by using the enforcing=0 kernel option. This prevents the system from failing to boot in case the system contains unlabeled files required by systemd before launching the selinux-autorelabel service. For more information, see RHBZ#2021835 . 2.2. Changing SELinux to permissive mode When SELinux is running in permissive mode, SELinux policy is not enforced. The system remains operational and SELinux does not deny any operations but only logs AVC messages, which can be then used for troubleshooting, debugging, and SELinux policy improvements. Each AVC is logged only once in this case. Prerequisites The selinux-policy-targeted , libselinux-utils , and policycoreutils packages are installed on your system. The selinux=0 or enforcing=0 kernel parameters are not used. Procedure Open the /etc/selinux/config file in a text editor of your choice, for example: Configure the SELINUX=permissive option: Restart the system: Verification After the system restarts, confirm that the getenforce command returns Permissive : 2.3. Changing SELinux to enforcing mode When SELinux is running in enforcing mode, it enforces the SELinux policy and denies access based on SELinux policy rules. In RHEL, enforcing mode is enabled by default when the system was initially installed with SELinux. Prerequisites The selinux-policy-targeted , libselinux-utils , and policycoreutils packages are installed on your system. The selinux=0 or enforcing=0 kernel parameters are not used. Procedure Open the /etc/selinux/config file in a text editor of your choice, for example: Configure the SELINUX=enforcing option: Save the change, and restart the system: On the boot, SELinux relabels all the files and directories within the system and adds SELinux context for files and directories that were created when SELinux was disabled. Verification After the system restarts, confirm that the getenforce command returns Enforcing : Troubleshooting After changing to enforcing mode, SELinux may deny some actions because of incorrect or missing SELinux policy rules. 
To view what actions SELinux denies, enter the following command as root: Alternatively, with the setroubleshoot-server package installed, enter: If SELinux is active and the Audit daemon ( auditd ) is not running on your system, then search for certain SELinux messages in the output of the dmesg command: See Troubleshooting problems related to SELinux for more information. 2.4. Enabling SELinux on systems that previously had it disabled To avoid problems, such as systems unable to boot or process failures, when enabling SELinux on systems that previously had it disabled, resolve Access Vector Cache (AVC) messages in permissive mode first. When systems run SELinux in permissive mode, users and processes might label various file-system objects incorrectly. File-system objects created while SELinux is disabled are not labeled at all. This behavior causes problems when changing to enforcing mode because SELinux relies on correct labels of file-system objects. To prevent incorrectly labeled and unlabeled files from causing problems, SELinux automatically relabels file systems when changing from the disabled state to permissive or enforcing mode. Warning Before rebooting the system for relabeling, make sure the system will boot in permissive mode, for example by using the enforcing=0 kernel option. This prevents the system from failing to boot in case the system contains unlabeled files required by systemd before launching the selinux-autorelabel service. For more information, see RHBZ#2021835 . Procedure Enable SELinux in permissive mode. For more information, see Changing to permissive mode . Restart your system: Check for SELinux denial messages. For more information, see Identifying SELinux denials . Ensure that files are relabeled upon the reboot: This creates the /.autorelabel file containing the -F option. Warning Always switch to permissive mode before entering the fixfiles -F onboot command. By default, autorelabel uses as many threads in parallel as the system has available CPU cores. To use only a single thread during automatic relabeling, use the fixfiles -T 1 onboot command. If there are no denials, switch to enforcing mode. For more information, see Changing SELinux modes at boot time . Verification After the system restarts, confirm that the getenforce command returns Enforcing : steps To run custom applications with SELinux in enforcing mode, choose one of the following scenarios: Run your application in the unconfined_service_t domain. Write a new policy for your application. See the Writing a custom SELinux policy section for more information. Additional resources SELinux states and modes section covers temporary changes in modes. 2.5. Disabling SELinux When you disable SELinux, your system does not load your SELinux policy. As a result, the system does not enforce the SELinux policy and does not log Access Vector Cache (AVC) messages. Therefore, all benefits of running SELinux are lost. Do not disable SELinux except in specific scenarios, such as performance-sensitive systems where the weakened security does not impose significant risks. Important If your scenario requires to perform debugging in a production environment, temporarily use permissive mode instead of permanently disabling SELinux. See Changing to permissive mode for more information about permissive mode. 
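As a quick, non-persistent alternative for such debugging sessions, you can switch the running system to permissive mode and back without editing any configuration file. The following is a minimal sketch that uses the setenforce utility provided by the libselinux-utils package mentioned earlier; run the commands as root, and note that the change does not survive a reboot:

# switch to permissive mode until the next reboot (no change to /etc/selinux/config)
setenforce 0
getenforce
# reproduce the problem, collect the AVC messages, then switch back
setenforce 1
getenforce

Because the change is not persistent, the mode configured in /etc/selinux/config still applies after the next reboot.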
Prerequisites The grubby package is installed: Procedure Configure your boot loader to add selinux=0 to the kernel command line: Restart your system: Verification After the reboot, confirm that the getenforce command returns Disabled : Alternative method In RHEL 8, you can still use the deprecated method for disabling SELinux by using the SELINUX=disabled option in the /etc/selinux/config file. This results in the kernel booting with SELinux enabled and switching to disabled mode later in the boot process. Consequently, memory leaks and race conditions might occur that cause kernel panics. To use this method: Open the /etc/selinux/config file in a text editor of your choice, for example: Configure the SELINUX=disabled option: Save the change, and restart your system: 2.6. Changing SELinux modes at boot time On boot, you can set the following kernel parameters to change the way SELinux runs: enforcing=0 Setting this parameter causes the system to start in permissive mode, which is useful when troubleshooting issues. Using permissive mode might be the only option to detect a problem if your file system is too corrupted. Moreover, in permissive mode, the system continues to create the labels correctly. The AVC messages that are created in this mode can be different from those in enforcing mode. In permissive mode, only the first denial from a series of the same denials is reported. However, in enforcing mode, you might get a denial related to reading a directory, and an application stops. In permissive mode, you get the same AVC message, but the application continues reading files in the directory and you get an AVC for each denial in addition. selinux=0 This parameter causes the kernel to not load any part of the SELinux infrastructure. The init scripts notice that the system booted with the selinux=0 parameter and touch the /.autorelabel file. This causes the system to automatically relabel the next time you boot with SELinux enabled. Important Do not use the selinux=0 parameter in a production environment. To debug your system, temporarily use permissive mode instead of disabling SELinux. autorelabel=1 This parameter forces the system to relabel similarly to the following commands: If a file system contains a large number of mislabeled objects, start the system in permissive mode to make the autorelabel process successful. Additional resources For additional SELinux-related kernel boot parameters, such as checkreqprot , see the /usr/share/doc/kernel-doc- <KERNEL_VER> /Documentation/admin-guide/kernel-parameters.txt file installed with the kernel-doc package. Replace the <KERNEL_VER> string with the version number of the installed kernel, for example: | [
"USD sestatus SELinux status: enabled SELinuxfs mount: /sys/fs/selinux SELinux root directory: /etc/selinux Loaded policy name: targeted Current mode: enforcing Mode from config file: enforcing Policy MLS status: enabled Policy deny_unknown status: allowed Memory protection checking: actual (secure) Max kernel policy version: 31",
"vi /etc/selinux/config",
"This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= permissive SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted",
"reboot",
"getenforce Permissive",
"vi /etc/selinux/config",
"This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= enforcing SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted",
"reboot",
"getenforce Enforcing",
"ausearch -m AVC,USER_AVC,SELINUX_ERR,USER_SELINUX_ERR -ts today",
"grep \"SELinux is preventing\" /var/log/messages",
"dmesg | grep -i -e type=1300 -e type=1400",
"reboot",
"fixfiles -F onboot",
"getenforce Enforcing",
"rpm -q grubby grubby- <version>",
"sudo grubby --update-kernel ALL --args selinux=0",
"reboot",
"getenforce Disabled",
"vi /etc/selinux/config",
"This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= disabled SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted",
"reboot",
"touch /.autorelabel reboot",
"yum install kernel-doc less /usr/share/doc/kernel-doc- 4.18.0 /Documentation/admin-guide/kernel-parameters.txt"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_selinux/changing-selinux-states-and-modes_using-selinux |
Chapter 3. About OpenShift Kubernetes Engine | Chapter 3. About OpenShift Kubernetes Engine As of 27 April 2020, Red Hat has decided to rename Red Hat OpenShift Container Engine to Red Hat OpenShift Kubernetes Engine to better communicate what value the product offering delivers. Red Hat OpenShift Kubernetes Engine is a product offering from Red Hat that lets you use an enterprise class Kubernetes platform as a production platform for launching containers. You download and install OpenShift Kubernetes Engine the same way as OpenShift Container Platform as they are the same binary distribution, but OpenShift Kubernetes Engine offers a subset of the features that OpenShift Container Platform offers. 3.1. Similarities and differences You can see the similarities and differences between OpenShift Kubernetes Engine and OpenShift Container Platform in the following table: Table 3.1. Product comparison for OpenShift Kubernetes Engine and OpenShift Container Platform OpenShift Kubernetes Engine OpenShift Container Platform Fully Automated Installers Yes Yes Over the Air Smart Upgrades Yes Yes Enterprise Secured Kubernetes Yes Yes Kubectl and oc automated command line Yes Yes Operator Lifecycle Manager (OLM) Yes Yes Administrator Web console Yes Yes OpenShift Virtualization Yes Yes User Workload Monitoring Yes Metering and Cost Management SaaS Service Yes Platform Logging Yes Developer Web Console Yes Developer Application Catalog Yes Source to Image and Builder Automation (Tekton) Yes OpenShift Service Mesh (Maistra, Kiali, and Jaeger) Yes OpenShift distributed tracing (Jaeger) Yes OpenShift Serverless (Knative) Yes OpenShift Pipelines (Jenkins and Tekton) Yes Embedded Component of IBM Cloud Pak and RHT MW Bundles Yes 3.1.1. Core Kubernetes and container orchestration OpenShift Kubernetes Engine offers full access to an enterprise-ready Kubernetes environment that is easy to install and offers an extensive compatibility test matrix with many of the software elements that you might use in your data center. OpenShift Kubernetes Engine offers the same service level agreements, bug fixes, and common vulnerabilities and errors protection as OpenShift Container Platform. OpenShift Kubernetes Engine includes a Red Hat Enterprise Linux (RHEL) Virtual Datacenter and Red Hat Enterprise Linux CoreOS (RHCOS) entitlement that allows you to use an integrated Linux operating system with container runtime from the same technology provider. The OpenShift Kubernetes Engine subscription is compatible with the Red Hat OpenShift support for Windows Containers subscription. 3.1.2. Enterprise-ready configurations OpenShift Kubernetes Engine uses the same security options and default settings as the OpenShift Container Platform. Default security context constraints, pod security policies, best practice network and storage settings, service account configuration, SELinux integration, HAproxy edge routing configuration, and all other standard protections that OpenShift Container Platform offers are available in OpenShift Kubernetes Engine. OpenShift Kubernetes Engine offers full access to the integrated monitoring solution that OpenShift Container Platform uses, which is based on Prometheus and offers deep coverage and alerting for common Kubernetes issues. OpenShift Kubernetes Engine uses the same installation and upgrade automation as OpenShift Container Platform. 3.1.3. 
Standard infrastructure services With an OpenShift Kubernetes Engine subscription, you receive support for all storage plugins that OpenShift Container Platform supports. In terms of networking, OpenShift Kubernetes Engine offers full and supported access to the Kubernetes Container Network Interface (CNI) and therefore allows you to use any third-party SDN that supports OpenShift Container Platform. It also allows you to use the included Open vSwitch software defined network to its fullest extent. OpenShift Kubernetes Engine allows you to take full advantage of the OVN Kubernetes overlay, Multus, and Multus plugins that are supported on OpenShift Container Platform. OpenShift Kubernetes Engine allows customers to use a Kubernetes Network Policy to create microsegmentation between deployed application services on the cluster. You can also use the Route API objects that are found in OpenShift Container Platform, including its sophisticated integration with the HAproxy edge routing layer as an out of the box Kubernetes ingress controller. 3.1.4. Core user experience OpenShift Kubernetes Engine users have full access to Kubernetes Operators, pod deployment strategies, Helm, and OpenShift Container Platform templates. OpenShift Kubernetes Engine users can use both the oc and kubectl command line interfaces. OpenShift Kubernetes Engine also offers an administrator web-based console that shows all aspects of the deployed container services and offers a container-as-a service experience. OpenShift Kubernetes Engine grants access to the Operator Life Cycle Manager that helps you control access to content on the cluster and life cycle operator-enabled services that you use. With an OpenShift Kubernetes Engine subscription, you receive access to the Kubernetes namespace, the OpenShift Project API object, and cluster-level Prometheus monitoring metrics and events. 3.1.5. Maintained and curated content With an OpenShift Kubernetes Engine subscription, you receive access to the OpenShift Container Platform content from the Red Hat Ecosystem Catalog and Red Hat Connect ISV marketplace. You can access all maintained and curated content that the OpenShift Container Platform eco-system offers. 3.1.6. OpenShift Container Storage compatible OpenShift Kubernetes Engine is compatible and supported with your purchase of OpenShift Container Storage. 3.1.7. Red Hat Middleware compatible OpenShift Kubernetes Engine is compatible and supported with individual Red Hat Middleware product solutions. Red Hat Middleware Bundles that include OpenShift embedded in them only contain OpenShift Container Platform. 3.1.8. OpenShift Serverless OpenShift Kubernetes Engine does not include OpenShift Serverless support. Use OpenShift Container Platform for this support. 3.1.9. Quay Integration compatible OpenShift Kubernetes Engine is compatible and supported with a Red Hat Quay purchase. 3.1.10. OpenShift Virtualization OpenShift Kubernetes Engine includes support for the Red Hat product offerings derived from the kubevirt.io open source project. 3.1.11. Advanced cluster management OpenShift Kubernetes Engine is compatible with your additional purchase of Red Hat Advanced Cluster Management (RHACM) for Kubernetes. An OpenShift Kubernetes Engine subscription does not offer a cluster-wide log aggregation solution or support Elasticsearch, Fluentd, or Kibana based logging solutions. 
Similarly, the chargeback features found in OpenShift Container Platform or the console.redhat.com Cost Management SaaS service are not supported with OpenShift Kubernetes Engine. Red Hat Service Mesh capabilities derived from the open source istio.io and kiali.io projects that offer OpenTracing observability for containerized services on OpenShift Container Platform are not supported in OpenShift Kubernetes Engine. 3.1.12. Advanced networking The standard networking solutions in OpenShift Container Platform are supported with an OpenShift Kubernetes Engine subscription. OpenShift Container Platform's Kubernetes CNI plugin for automation of multi-tenant network segmentation between OpenShift Container Platform projects is entitled for use with OpenShift Kubernetes Engine. OpenShift Kubernetes Engine offers all the granular control of the source IP addresses that are used by application services on the cluster. Those egress IP address controls are entitled for use with OpenShift Kubernetes Engine. OpenShift Container Platform offers ingress routing to on cluster services that use non-standard ports when no public cloud provider is in use via the VIP pods found in OpenShift Container Platform. That ingress solution is supported in OpenShift Kubernetes Engine. OpenShift Kubernetes Engine users are supported for the Kubernetes ingress control object, which offers integrations with public cloud providers. Red Hat Service Mesh, which is derived from the istio.io open source project, is not supported in OpenShift Kubernetes Engine. Also, the Kourier ingress controller found in OpenShift Serverless is not supported on OpenShift Kubernetes Engine. 3.1.13. Developer experience With OpenShift Kubernetes Engine, the following capabilities are not supported: The CodeReady developer experience utilities and tools, such as CodeReady Workspaces. OpenShift Container Platform's pipeline feature that integrates a streamlined, Kubernetes-enabled Jenkins and Tekton experience in the user's project space. The OpenShift Container Platform's source-to-image feature, which allows you to easily deploy source code, dockerfiles, or container images across the cluster. Build strategies, builder pods, or Tekton for end user container deployments. The odo developer command line. The developer persona in the OpenShift Container Platform web console. 3.1.14. Feature summary The following table is a summary of the feature availability in OpenShift Kubernetes Engine and OpenShift Container Platform. Where applicable, it includes the name of the Operator that enables a feature. Table 3.2. 
Features in OpenShift Kubernetes Engine and OpenShift Container Platform Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Fully Automated Installers (IPI) Included Included N/A Customizable Installers (UPI) Included Included N/A Disconnected Installation Included Included N/A Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) entitlement Included Included N/A Existing RHEL manual attach to cluster (BYO) Included Included N/A CRIO Runtime Included Included N/A Over the Air Smart Upgrades and Operating System (RHCOS) Management Included Included N/A Enterprise Secured Kubernetes Included Included N/A Kubectl and oc automated command line Included Included N/A Auth Integrations, RBAC, SCC, Multi-Tenancy Admission Controller Included Included N/A Operator Lifecycle Manager (OLM) Included Included N/A Administrator web console Included Included N/A OpenShift Virtualization Included Included OpenShift Virtualization Operator Compliance Operator provided by Red Hat Included Included Compliance Operator File Integrity Operator Included Included File Integrity Operator Gatekeeper Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Gatekeeper Operator Klusterlet Not Included - Requires separate subscription Not Included - Requires separate subscription N/A Kube Descheduler Operator provided by Red Hat Included Included Kube Descheduler Operator Local Storage provided by Red Hat Included Included Local Storage Operator Node Feature Discovery provided by Red Hat Included Included Node Feature Discovery Operator Performance Add-on Operator Included Included Performance Add-on Operator PTP Operator provided by Red Hat Included Included PTP Operator Service Telemetry Operator provided by Red Hat Included Included Service Telemetry Operator SR-IOV Network Operator Included Included SR-IOV Network Operator Vertical Pod Autoscaler Included Included Vertical Pod Autoscaler Cluster Monitoring (Prometheus) Included Included Cluster Monitoring Device Manager (for example, GPU) Included Included N/A Log Forwarding (with fluentd) Included Included Red Hat OpenShift Logging Operator (for log forwarding with fluentd) Telemeter and Insights Connected Experience Included Included N/A Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name OpenShift Cloud Manager SaaS Service Included Included N/A OVS and OVN SDN Included Included N/A MetalLB Included Included MetalLB Operator HAProxy Ingress Controller Included Included N/A Red Hat OpenStack Platform (RHOSP) Kuryr Integration Included Included N/A Ingress Cluster-wide Firewall Included Included N/A Egress Pod and Namespace Granular Control Included Included N/A Ingress Non-Standard Ports Included Included N/A Multus and Available Multus Plugins Included Included N/A Network Policies Included Included N/A IPv6 Single and Dual Stack Included Included N/A CNI Plugin ISV Compatibility Included Included N/A CSI Plugin ISV Compatibility Included Included N/A RHT and IBM middleware a la carte purchases (not included in OpenShift Container Platform or OpenShift Kubernetes Engine) Included Included N/A ISV or Partner Operator and Container Compatibility (not included in OpenShift Container Platform or OpenShift Kubernetes Engine) Included Included N/A Embedded OperatorHub Included Included N/A Embedded Marketplace Included Included N/A Quay Compatibility (not included) Included Included N/A RHEL Software Collections and RHT SSO Common Service 
(included) Included Included N/A Embedded Registry Included Included N/A Helm Included Included N/A User Workload Monitoring Not Included Included N/A Metering and Cost Management SaaS Service Not Included Included N/A Platform Logging Not Included Included Red Hat OpenShift Logging Operator OpenShift Elasticsearch Operator provided by Red Hat Not Included Cannot be run standalone N/A Developer Web Console Not Included Included N/A Developer Application Catalog Not Included Included N/A Source to Image and Builder Automation (Tekton) Not Included Included N/A OpenShift Service Mesh Not Included Included OpenShift Service Mesh Operator Service Binding Operator Not Included Included Service Binding Operator Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Red Hat OpenShift Serverless Not Included Included OpenShift Serverless Operator Web Terminal provided by Red Hat Not Included Included Web Terminal Operator Jenkins Operator provided by Red Hat Not Included Included Jenkins Operator Red Hat OpenShift Pipelines Operator Not Included Included OpenShift Pipelines Operator Embedded Component of IBM Cloud Pak and RHT MW Bundles Not Included Included N/A Red Hat OpenShift GitOps Not Included Included OpenShift GitOps Red Hat CodeReady Workspaces Not Included Included CodeReady Workspaces Red Hat CodeReady Containers Not Included Included N/A Quay Bridge Operator provided by Red Hat Not Included Included Quay Bridge Operator Quay Container Security provided by Red Hat Not Included Included Quay Operator Red Hat OpenShift distributed tracing platform Not Included Included Red Hat OpenShift distributed tracing platform Operator Red Hat OpenShift Kiali Not Included Included Kiali Operator Metering provided by Red Hat (deprecated) Not Included Included N/A Migration Toolkit for Containers Operator Not Included Included Migration Toolkit for Containers Operator Cost management for OpenShift Not included Included N/A Red Hat JBoss Web Server Not included Included JWS Operator Red Hat Build of Quarkus Not included Included N/A Kourier Ingress Controller Not included Included N/A RHT Middleware Bundles Sub Compatibility (not included in OpenShift Container Platform) Not included Included N/A IBM Cloud Pak Sub Compatibility (not included in OpenShift Container Platform) Not included Included N/A OpenShift Do ( odo ) Not included Included N/A Source to Image and Tekton Builders Not included Included N/A OpenShift Serverless FaaS Not included Included N/A IDE Integrations Not included Included N/A Windows Machine Config Operator Community Windows Machine Config Operator included - no subscription required Red Hat Windows Machine Config Operator included - Requires separate subscription Windows Machine Config Operator Red Hat Quay Not Included - Requires separate subscription Not Included - Requires separate subscription Quay Operator Red Hat Advanced Cluster Management Not Included - Requires separate subscription Not Included - Requires separate subscription Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security Not Included - Requires separate subscription Not Included - Requires separate subscription N/A OpenShift Container Storage Not Included - Requires separate subscription Not Included - Requires separate subscription OpenShift Container Storage Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Ansible Automation Platform Resource Operator Not Included - Requires separate subscription Not Included - Requires separate 
subscription Ansible Automation Platform Resource Operator Business Automation provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Business Automation Operator Data Grid provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Data Grid Operator Red Hat Integration provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration Operator Red Hat Integration - 3Scale provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription 3scale Red Hat Integration - 3Scale APICast gateway provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription 3scale APIcast Red Hat Integration - AMQ Broker Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Broker Red Hat Integration - AMQ Broker LTS Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration - AMQ Interconnect Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Interconnect Red Hat Integration - AMQ Online Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration - AMQ Streams Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Streams Red Hat Integration - Camel K Not Included - Requires separate subscription Not Included - Requires separate subscription Camel K Red Hat Integration - Fuse Console Not Included - Requires separate subscription Not Included - Requires separate subscription Fuse Console Red Hat Integration - Fuse Online Not Included - Requires separate subscription Not Included - Requires separate subscription Fuse Online Red Hat Integration - Service Registry Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Service Registry API Designer provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription API Designer JBoss EAP provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription JBoss EAP JBoss Web Server provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription JBoss Web Server Smart Gateway Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Smart Gateway Operator Kubernetes NMState Operator Included Included N/A 3.2. Subscription limitations OpenShift Kubernetes Engine is a subscription offering that provides OpenShift Container Platform with a limited set of supported features at a lower list price. OpenShift Kubernetes Engine and OpenShift Container Platform are the same product and, therefore, all software and features are delivered in both. There is only one download, OpenShift Container Platform. OpenShift Kubernetes Engine uses the OpenShift Container Platform documentation and support services and bug errata for this reason. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/about/oke-about |
Chapter 5. Migration | Chapter 5. Migration This chapter provides information on migrating to versions of components included in Red Hat Software Collections 3.0. 5.1. Migrating to MariaDB 10.2 Red Hat Enterprise Linux 6 contains MySQL 5.1 as the default MySQL implementation. Red Hat Enterprise Linux 7 includes MariaDB 5.5 as the default MySQL implementation. MariaDB is a community-developed drop-in replacement for MySQL . MariaDB 10.1 has been available as a Software Collection since Red Hat Software Collections 2.2; Red Hat Software Collections 3.0 is distributed with MariaDB 10.2 . The rh-mariadb102 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, does not conflict with the mysql or mariadb packages from the core systems, so it is possible to install the rh-mariadb102 Software Collection together with the mysql or mariadb packages. It is also possible to run both versions at the same time, however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Additionally, it is possible to install the rh-mariadb102 Software Collection while the rh-mariadb101 Collection is still installed and even running. Note that if you are using MariaDB 5.5 or MariaDB 10.0 , it is necessary to upgrade to the rh-mariadb101 Software Collection first, which is described in the Red Hat Software Collections 2.4 Release Notes . For more information about MariaDB 10.2 , see the upstream documentation about changes in version 10.2 and about upgrading . Note The rh-mariadb102 Software Collection supports neither mounting over NFS nor dynamical registering using the scl register command. 5.1.1. Notable Differences Between the rh-mariadb101 and rh-mariadb102 Software Collections Major changes in MariaDB 10.2 are described in Section 1.3.4, "Changes in MariaDB" . Since MariaDB 10.2 , behavior of the SQL_MODE variable has been changed; see the upstream documentation for details. Multiple options have changed their default values or have been deprecated or removed. For details, see the Knowledgebase article Migrating from MariaDB 10.1 to the MariaDB 10.2 Software Collection . The rh-mariadb102 Software Collection includes the rh-mariadb102-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mariadb102*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb102* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. 5.1.2. Upgrading from the rh-mariadb101 to the rh-mariadb102 Software Collection Important Prior to upgrading, back up all your data, including any MariaDB databases. Stop the rh-mariadb101 database server if it is still running. Before stopping the server, set the innodb_fast_shutdown option to 0 , so that InnoDB performs a slow shutdown, including a full purge and insert buffer merge. Read more about this option in the upstream documentation . This operation can take a longer time than in case of a normal shutdown. mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0" Stop the rh-mariadb101 server. service rh-mariadb101-mariadb stop Install the rh-mariadb102 Software Collection. 
yum install rh-mariadb102-mariadb-server Note that it is possible to install the rh-mariadb102 Software Collection while the rh-mariadb101 Software Collection is still installed because these Collections do not conflict. Inspect the configuration of rh-mariadb102 , which is stored in the /etc/opt/rh/rh-mariadb102/my.cnf file and the /etc/opt/rh/rh-mariadb102/my.cnf.d/ directory. Compare it with the configuration of rh-mariadb101 stored in /etc/opt/rh/rh-mariadb101/my.cnf and /etc/opt/rh/rh-mariadb101/my.cnf.d/ and adjust it if necessary. All data of the rh-mariadb101 Software Collection is stored in the /var/opt/rh/rh-mariadb101/lib/mysql/ directory unless configured differently. Copy the whole content of this directory to /var/opt/rh/rh-mariadb102/lib/mysql/ . You can also move the content, but remember to back up your data before you continue the upgrade. Make sure the data is owned by the mysql user and that the SELinux context is correct. Start the rh-mariadb102 database server. service rh-mariadb102-mariadb start Perform the data migration. scl enable rh-mariadb102 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mariadb102 -- mysql_upgrade -p 5.2. Migrating to MongoDB 3.4 Red Hat Software Collections 3.0 is released with MongoDB 3.4 , provided by the rh-mongodb34 Software Collection. 5.2.1. Notable Differences Between MongoDB 3.2 and MongoDB 3.4 General Changes The rh-mongodb34 Software Collection introduces various general changes. Major changes are listed in Section 1.3.6, "Changes in MongoDB" and in the Knowledgebase article Migrating from MongoDB 3.2 to MongoDB 3.4 . For detailed changes, see the upstream release notes . In addition, this Software Collection includes the rh-mongodb34-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other content. After installing the rh-mongodb34*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mongodb34* packages. Compatibility Changes MongoDB 3.4 includes various minor changes that can affect compatibility with earlier versions of MongoDB . For details, see the Knowledgebase article Migrating from MongoDB 3.2 to MongoDB 3.4 and the upstream documentation . Notably, the following MongoDB 3.4 features are backwards incompatible and require that the version is set to 3.4 using the featureCompatibilityVersion command: Support for creating read-only views from existing collections or other views Index version v: 2 , which adds support for collation, decimal data and case-insensitive indexes Support for the decimal128 format with the new decimal data type For details regarding backward incompatible changes in MongoDB 3.4 , see the upstream release notes . 5.2.2. Upgrading from the rh-mongodb32 to the rh-mongodb34 Software Collection Note that once you have upgraded to MongoDB 3.4 and started using new features, you cannot downgrade to version 3.2.7 or earlier. You can only downgrade to version 3.2.8 or later. Important Before migrating from the rh-mongodb32 to the rh-mongodb34 Software Collection, back up all your data, including any MongoDB databases, which are by default stored in the /var/opt/rh/rh-mongodb32/lib/mongodb/ directory. In addition, see the compatibility changes to ensure that your applications and deployments are compatible with MongoDB 3.4 . One possible way to take such a backup is sketched below.
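The sketch below shows one possible way to archive the MongoDB 3.2 data directory before the upgrade. It is an illustrative example, not part of the official procedure; the archive path and file name are arbitrary, and on Red Hat Enterprise Linux 6 the service commands differ as noted in the comments.

# Stop the old server so the files are consistent (RHEL 7; on RHEL 6 use: service rh-mongodb32-mongodb stop)
systemctl stop rh-mongodb32-mongod.service
# Archive the default data directory to a location of your choice (path and name are examples)
tar -czf /root/mongodb32-data-backup.tar.gz /var/opt/rh/rh-mongodb32/lib/mongodb/
# Restart the old server if you are not continuing with the upgrade immediately
systemctl start rh-mongodb32-mongod.service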
To upgrade to the rh-mongodb34 Software Collection, perform the following steps. Install the MongoDB servers and shells from the rh-mongodb34 Software Collections: ~]# yum install rh-mongodb34 Stop the MongoDB 3.2 server: ~]# systemctl stop rh-mongodb32-mongod.service Use the service rh-mongodb32-mongodb stop command on a Red Hat Enterprise Linux 6 system. Copy your data to the new location: ~]# cp -a /var/opt/rh/rh-mongodb32/lib/mongodb/* /var/opt/rh/rh-mongodb34/lib/mongodb/ Configure the rh-mongodb34-mongod daemon in the /etc/opt/rh/rh-mongodb34/mongod.conf file. Start the MongoDB 3.4 server: ~]# systemctl start rh-mongodb34-mongod.service On Red Hat Enterprise Linux 6, use the service rh-mongodb34-mongodb start command instead. Enable backwards-incompatible features: ~]USD scl enable rh-mongodb34 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )' If the mongod server is configured with enabled access control, add the --username and --password options to mongo command. Note that it is recommended to run the deployment after the upgrade without enabling these features first. For detailed information about upgrading, see the upstream release notes . For information about upgrading a Replica Set, see the upstream MongoDB Manual . For information about upgrading a Sharded Cluster, see the upstream MongoDB Manual . 5.3. Migrating to MySQL 5.7 Red Hat Enterprise Linux 6 contains MySQL 5.1 as the default MySQL implementation. Red Hat Enterprise Linux 7 includes MariaDB 5.5 as the default MySQL implementation. In addition to these basic versions, MySQL 5.6 has been available as a Software Collection for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 since Red Hat Software Collections 2.0. The rh-mysql57 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, conflicts neither with the mysql or mariadb packages from the core systems nor with the rh-mysql56 Software Collection, so it is possible to install the rh-mysql57 Software Collection together with the mysql , mariadb , or rh-mysql56 packages. It is also possible to run multiple versions at the same time; however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Note that it is possible to upgrade to MySQL 5.7 only from MySQL 5.6 . If you need to upgrade from an earlier version, upgrade to MySQL 5.6 first. Instructions how to upgrade to MySQL 5.6 are available in the Red Hat Software Collections 2.2 Release Notes . 5.3.1. Notable Differences Between MySQL 5.6 and MySQL 5.7 The mysql-bench subpackage is not included in the rh-mysql57 Software Collection. Since MySQL 5.7.7 , the default SQL mode includes NO_AUTO_CREATE_USER . Therefore it is necessary to create MySQL accounts using the CREATE USER statement because the GRANT statement no longer creates a user by default. See the upstream documentation for details. To find out about more detailed changes in MySQL 5.7 compared to earlier versions, see the upstream documentation: What Is New in MySQL 5.7 and Changes Affecting Upgrades to MySQL 5.7 . 5.3.2. Upgrading to the rh-mysql57 Software Collection Important Prior to upgrading, back-up all your data, including any MySQL databases. Install the rh-mysql57 Software Collection. 
yum install rh-mysql57-mysql-server Inspect the configuration of rh-mysql57 , which is stored in the /etc/opt/rh/rh-mysql57/my.cnf file and the /etc/opt/rh/rh-mysql57/my.cnf.d/ directory. Compare it with the configuration of rh-mysql56 stored in /etc/opt/rh/rh-mysql56/my.cnf and /etc/opt/rh/rh-mysql56/my.cnf.d/ and adjust it if necessary. Stop the rh-mysql56 database server, if it is still running. service rh-mysql56-mysqld stop All data of the rh-mysql56 Software Collection is stored in the /var/opt/rh/rh-mysql56/lib/mysql/ directory. Copy the whole content of this directory to /var/opt/rh/rh-mysql57/lib/mysql/ . You can also move the content, but remember to back up your data before you continue the upgrade. Start the rh-mysql57 database server. service rh-mysql57-mysqld start Perform the data migration. scl enable rh-mysql57 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mysql57 -- mysql_upgrade -p 5.4. Migrating to PostgreSQL 9.6 Red Hat Software Collections 3.0 is distributed with PostgreSQL 9.6 , which can be safely installed on the same machine in parallel with PostgreSQL 8.4 from Red Hat Enterprise Linux 6, PostgreSQL 9.2 from Red Hat Enterprise Linux 7, or any version of PostgreSQL released in earlier versions of Red Hat Software Collections. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust the SELinux policy. 5.4.1. Notable Differences Between PostgreSQL 9.5 and PostgreSQL 9.6 The most notable changes between PostgreSQL 9.5 and PostgreSQL 9.6 are described in the upstream release notes . The rh-postgresql96 Software Collection includes the rh-postgresql96-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other content. After installing the rh-postgresql96*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgresql96* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. The following table provides an overview of different paths in a Red Hat Enterprise Linux system version of PostgreSQL ( postgresql ) and in the postgresql92 , rh-postgresql95 , and rh-postgresql96 Software Collections. Note that the paths of PostgreSQL 8.4 distributed with Red Hat Enterprise Linux 6 and the system version of PostgreSQL 9.2 shipped with Red Hat Enterprise Linux 7 are the same; the paths for the rh-postgresql94 Software Collection are analogous to rh-postgresql95 . Table 5.1.
Diferences in the PostgreSQL paths Content postgresql postgresql92 rh-postgresql95 rh-postgresql96 Executables /usr/bin/ /opt/rh/postgresql92/root/usr/bin/ /opt/rh/rh-postgresql95/root/usr/bin/ /opt/rh/rh-postgresql96/root/usr/bin/ Libraries /usr/lib64/ /opt/rh/postgresql92/root/usr/lib64/ /opt/rh/rh-postgresql95/root/usr/lib64/ /opt/rh/rh-postgresql96/root/usr/lib64/ Documentation /usr/share/doc/postgresql/html/ /opt/rh/postgresql92/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql/html/ PDF documentation /usr/share/doc/postgresql-docs/ /opt/rh/postgresql92/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-docs/ Contrib documentation /usr/share/doc/postgresql-contrib/ /opt/rh/postgresql92/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-contrib/ Source not installed not installed not installed not installed Data /var/lib/pgsql/data/ /opt/rh/postgresql92/root/var/lib/pgsql/data/ /var/opt/rh/rh-postgresql95/lib/pgsql/data/ /var/opt/rh/rh-postgresql96/lib/pgsql/data/ Backup area /var/lib/pgsql/backups/ /opt/rh/postgresql92/root/var/lib/pgsql/backups/ /var/opt/rh/rh-postgresql95/lib/pgsql/backups/ /var/opt/rh/rh-postgresql96/lib/pgsql/backups/ Templates /usr/share/pgsql/ /opt/rh/postgresql92/root/usr/share/pgsql/ /opt/rh/rh-postgresql95/root/usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ Procedural Languages /usr/lib64/pgsql/ /opt/rh/postgresql92/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql95/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/ Development Headers /usr/include/pgsql/ /opt/rh/postgresql92/root/usr/include/pgsql/ /opt/rh/rh-postgresql95/root/usr/include/pgsql/ /opt/rh/rh-postgresql96/root/usr/include/pgsql/ Other shared data /usr/share/pgsql/ /opt/rh/postgresql92/root/usr/share/pgsql/ /opt/rh/rh-postgresql95/root/usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ Regression tests /usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/postgresql92/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql95/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/test/regress/ (in the -test package) For changes between PostgreSQL 8.4 and PostgreSQL 9.2 , refer to the Red Hat Software Collections 1.2 Release Notes . Notable changes between PostgreSQL 9.2 and PostgreSQL 9.4 are described in Red Hat Software Collections 2.0 Release Notes . For differences between PostgreSQL 9.4 and PostgreSQL 9.5 , refer to Red Hat Software Collections 2.2 Release Notes . 5.4.2. Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 9.6 Software Collection Red Hat Enterprise Linux 6 includes PostgreSQL 8.4 , Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql96 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. 
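Before choosing between the two methods, it can help to confirm which system PostgreSQL packages are installed and how large the existing cluster is, because the dump-and-restore approach becomes noticeably slower as the data grows. The following optional check is a minimal sketch that assumes the default system packages and data path:

# Report the installed system PostgreSQL server package, if any
rpm -q postgresql-server
# Show the approximate size of the existing data directory
du -sh /var/lib/pgsql/data/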
The following procedures are applicable for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 system versions of PostgreSQL . Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 9.6, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.1. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : service postgresql stop To verify that the server is not running, type: service postgresql status Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 9.6 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql96/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql96 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql96/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrade from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql96-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : service rh-postgresql96-postgresql start It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql96 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.2. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : service postgresql start Dump all data in the PostgreSQL database into a script file. 
As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : service postgresql stop Initialize the data directory for the new server as root : scl enable rh-postgresql96-postgresql -- postgresql-setup --initdb Start the new server as root : service rh-postgresql96-postgresql start Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql96 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.4.3. Migrating from the PostgreSQL 9.5 Software Collection to the PostgreSQL 9.6 Software Collection To migrate your data from the rh-postgresql95 Software Collection to the rh-postgresql96 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from PostgreSQL 9.5 to PostgreSQL 9.6 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql95/lib/pgsql/data/ directory. Procedure 5.3. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : service rh-postgresql95-postgresql stop To verify that the server is not running, type: service rh-postgresql95-postgresql status Verify that the old directory /var/opt/rh/rh-postgresql95/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql95/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 9.6 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql96/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql96 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql95-postgresql Alternatively, you can use the /opt/rh/rh-postgresql96/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql95-postgresql command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql96-postgresql.log log file to find out if any problems occurred during the upgrade. 
Start the new server as root : service rh-postgresql96-postgresql start It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql96 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old PostgreSQL 9.5 server, type the following command as root : chkconfig rh-postgresql95-postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.4. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : service rh-postgresql95-postgresql start Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'scl enable rh-postgresql95 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : service rh-postgresql95-postgresql stop Initialize the data directory for the new server as root : scl enable rh-postgresql96-postgresql -- postgresql-setup --initdb Start the new server as root : service rh-postgresql96-postgresql start Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql96 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old PostgreSQL 9.5 server, type the following command as root : chkconfig rh-postgresql95-postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. If you need to migrate from the postgresql92 Software Collection, refer to Red Hat Software Collections 2.0 Release Notes ; the procedure is the same, you just need to adjust the version of the new Collection. The same applies to migration from the rh-postgresql94 Software Collection, which is described in Red Hat Software Collections 2.2 Release Notes . 5.5. Migrating to nginx 1.12 The rh-nginx112 Software Collection is available only for Red Hat Enterprise Linux 7.4 and later versions. The root directory for the rh-nginx112 Software Collection is located in /opt/rh/rh-nginx112/root/ . The error log is stored in /var/opt/rh/rh-nginx112/log/nginx by default. Configuration files are stored in the /etc/opt/rh/rh-nginx112/nginx/ directory. Configuration files in nginx 1.12 have the same syntax and largely the same format as in the earlier nginx Software Collections. Configuration files (with a .conf extension) in the /etc/opt/rh/rh-nginx112/nginx/default.d/ directory are included in the default server block configuration for port 80 . Important Before upgrading from nginx 1.10 to nginx 1.12 , back up all your data, including web pages located in the /opt/rh/nginx110/root/ tree and configuration files located in the /etc/opt/rh/nginx110/nginx/ tree. One way to take such a backup is sketched below.
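A minimal backup sketch for the two nginx 1.10 trees mentioned above; the archive names and destination are arbitrary examples:

# Archive the web content tree of the nginx 1.10 Collection
tar -czf /root/nginx110-root-backup.tar.gz /opt/rh/nginx110/root/
# Archive the configuration tree of the nginx 1.10 Collection
tar -czf /root/nginx110-conf-backup.tar.gz /etc/opt/rh/nginx110/nginx/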
If you have made any specific changes, such as changing configuration files or setting up web applications, in the /opt/rh/nginx110/root/ tree, replicate those changes in the new /opt/rh/rh-nginx112/root/ and /etc/opt/rh/rh-nginx112/nginx/ directories, too. You can use this procedure to upgrade directly from nginx 1.4 , nginx 1.6 , nginx 1.8 , or nginx 1.10 to nginx 1.12 . Use the appropriate paths in this case. For the official nginx documentation, refer to http://nginx.org/en/docs/ . | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.0_release_notes/chap-migration |
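For the nginx migration described above, the following sketch illustrates one way to replicate a customized configuration file into the new tree and verify it before starting the new server. The file name and source path are placeholders, and the scl invocation and the rh-nginx112-nginx service name are assumptions based on the usual Software Collection naming pattern; adjust them to your system.

# Copy a customized configuration file (example name and path) into the rh-nginx112 configuration tree
cp /etc/opt/rh/nginx110/nginx/conf.d/myapp.conf /etc/opt/rh/rh-nginx112/nginx/conf.d/
# Check the configuration syntax with the new nginx binary
scl enable rh-nginx112 -- nginx -t
# Start the new server once the syntax check passes
systemctl start rh-nginx112-nginx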
Chapter 3. Inviting users and managing rights | Chapter 3. Inviting users and managing rights Note The invite feature is only available for Pro and Enterprise customers. In order to share the workload of administering your application programming interfaces (APIs), you may wish to invite team members from your organization to access the 3scale Admin Portal. The instructions below describe how to give one or more team members access rights to the 3scale Admin Portal. In this chapter, users refers to the members of your team. The 3scale Admin Portal has two types of users: Admins Have full access to all areas and services, and can invite other members if your plan allows it. Members Have limited access to areas of the product, for example, Analytics and, for enterprise customers, the Developer Portal, as well as to selected services. If you create a new 3scale user from a single sign-on (SSO) integration, this user has the member role by default, regardless of the SSO token content. 3scale does not map its roles to SSO roles. 3.1. Navigate to user administration To see the list of users of your 3scale installation, follow these steps in the Admin Portal: In the navigation bar, click the gear icon located in the upper right of the window. Navigate to Users > Listing from the left side menu. 3.2. Send an invitation From the list of users, you can invite a new team member. To send the invitation: Click the Invite user link, located on the upper-right side above the list. Enter the email address of the person you want to invite and click Send . As confirmation, a message appears in the upper-right corner of the window: Invitation was successfully sent. An invitation email will be sent to the address you entered. If the email does not arrive, make sure it was not marked as spam in the recipient's email account. Additionally, you can find the list and status of sent invitations in Users > Invitations . 3.3. Accept the invitation Your new administrator or member must click the link in the invitation email and complete the form to finish the process. Once the form is submitted, their account will be activated. 3.4. Give new users rights There are two main types of rights you can give to members of your team: By area Through analytics, billing, or developer administration. By service Choose which of your services members can access. Note: This feature is only available for enterprise customers. To give a new user rights, edit the new user by selecting them from the user menu and clicking Edit . You have the following user roles: Changing their rights to Admin will give them full access to control the Admin Portal. Changing their rights to Member will give you the option of choosing which areas and services the team member has access to. As Member , select an area to list all the available services related to that area. Giving access to certain areas of the Admin Portal will give members access only to the equivalent API: Developer accounts - Applications Gives access to the Account management API. Analytics Gives access to the Analytics API. Billing Gives access to the Billing API. | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/inviting-users-managing-rights |
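As a rough illustration of the API access described in the preceding chapter, a member who has been granted the Developer accounts - Applications area could query the Account Management API with an access token. The host name, endpoint path, and token below are placeholders and are not defined in this chapter; check the 3scale API documentation for the exact paths that apply to your installation.

# Illustrative request against the Account Management API (all values are placeholders)
curl -s "https://<your-org>-admin.3scale.net/admin/api/accounts.xml?access_token=<ACCESS_TOKEN>"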
Chapter 13. Monitoring RHACS | Chapter 13. Monitoring RHACS You can monitor Red Hat Advanced Cluster Security for Kubernetes (RHACS) by using the built-in monitoring for Red Hat OpenShift or by using custom Prometheus monitoring. If you use RHACS with Red Hat OpenShift, OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. RHACS exposes metrics to Red Hat OpenShift monitoring via an encrypted and authenticated endpoint. 13.1. Monitoring with Red Hat OpenShift Monitoring with Red Hat OpenShift is enabled by default. No configuration is required for this default behavior. Important If you have previously configured monitoring with the Prometheus Operator, consider removing your custom ServiceMonitor resources. RHACS ships with a pre-configured ServiceMonitor for Red Hat OpenShift monitoring. Multiple ServiceMonitors might result in duplicated scraping. Monitoring with Red Hat OpenShift is not supported by Scanner. If you want to monitor Scanner, you must first disable the default Red Hat OpenShift monitoring. Then, configure custom Prometheus monitoring. For more information on disabling Red Hat OpenShift monitoring, see "Disabling Red Hat OpenShift monitoring for Central services by using the RHACS Operator" or "Disabling Red Hat OpenShift monitoring for Central services by using Helm". For more information on configuring Prometheus, see "Monitoring with custom Prometheus". 13.2. Monitoring with custom Prometheus Prometheus is an open-source monitoring and alerting platform. You can use it to monitor health and availability of Central and Sensor components of RHACS. When you enable monitoring, RHACS creates a new monitoring service on port number 9090 and a network policy allowing inbound connections to that port. Note This monitoring service exposes an endpoint that is not encrypted by TLS and has no authorization. Use this only when you do not want to use Red Hat OpenShift monitoring. Before you can use custom Prometheus monitoring, if you have Red Hat OpenShift, you must disable the default monitoring. If you are using Kubernetes, you do not need to perform this step. 13.2.1. Disabling Red Hat OpenShift monitoring for Central services by using the RHACS Operator To disable the default monitoring by using the Operator, change the configuration of the Central custom resource as shown in the following example. For more information on configuration options, see "Central configuration options using the Operator" in the "Additional resources" section. Procedure On the OpenShift Container Platform web console, go to the Operators Installed Operators page. Select the RHACS Operator from the list of installed Operators. Click on the Central tab. From the list of Central instances, click on a Central instance for which you want to enable monitoring. Click on the YAML tab and update the YAML configuration as shown in the following example: monitoring: openshift: enabled: false 13.2.2. Disabling Red Hat OpenShift monitoring for Central services by using Helm To disable the default monitoring by using Helm, change the configuration options in the central-services Helm chart. For more information on configuration options, see the documents in the "Additional resources" section. Procedure Update the configuration file with the following value: monitoring.openshift.enabled: false Run the helm upgrade command and specify the configuration files. 13.2.3. 
Monitoring Central services by using the RHACS Operator You can monitor Central services, Central and Scanner, by changing the configuration of the Central custom resource. For more information on configuration options, see "Central configuration options using the Operator" in the "Additional resources" section. Procedure On the OpenShift Container Platform web console, go to the Operators Installed Operators page. Select the Red Hat Advanced Cluster Security for Kubernetes Operator from the list of installed Operators. Click on the Central tab. From the list of Central instances, click on a Central instance for which you want to enable monitoring for. Click on the YAML tab and update the YAML configuration: For monitoring Central, enable the central.monitoring.exposeEndpoint configuration option for the Central custom resource. For monitoring Scanner, enable the scanner.monitoring.exposeEndpoint configuration option for the Central custom resource. Click Save . 13.3. Monitoring Central services by using Helm You can monitor Central services, Central and Scanner, by changing the configuration options in the central-services Helm chart. For more information, see "Changing configuration options after deploying the central-services Helm chart" in the "Additional resources" section. Procedure Update the values-public.yaml configuration file with the following values: central.exposeMonitoring: true scanner.exposeMonitoring: true Run the helm upgrade command and specify the configuration files. 13.3.1. Monitoring Central by using Prometheus service monitor If you are using the Prometheus Operator, you can use a service monitor to scrape the metrics from Red Hat Advanced Cluster Security for Kubernetes (RHACS). Note If you are not using the Prometheus operator, you must edit the Prometheus configuration files to receive the data from RHACS. Procedure Create a new servicemonitor.yaml file with the following content: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-stackrox namespace: stackrox spec: endpoints: - interval: 30s port: monitoring scheme: http selector: matchLabels: app.kubernetes.io/name: <stackrox-service> 1 1 The labels must match with the Service resource that you want to monitor. For example, central or scanner . Apply the YAML to the cluster: USD oc apply -f servicemonitor.yaml 1 1 If you use Kubernetes, enter kubectl instead of oc . Verification Run the following command to check the status of service monitor: USD oc get servicemonitor --namespace stackrox 1 1 If you use Kubernetes, enter kubectl instead of oc . 13.4. Additional resources Central configuration options using the Operator Changing configuration options after deploying the central-services Helm chart Helm documentation | [
"monitoring: openshift: enabled: false",
"monitoring.openshift.enabled: false",
"central.exposeMonitoring: true scanner.exposeMonitoring: true",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-stackrox namespace: stackrox spec: endpoints: - interval: 30s port: monitoring scheme: http selector: matchLabels: app.kubernetes.io/name: <stackrox-service> 1",
"oc apply -f servicemonitor.yaml 1",
"oc get servicemonitor --namespace stackrox 1"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/configuring/monitor-acs |
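For the custom Prometheus setup described in the preceding chapter, a self-managed Prometheus instance that does not use the Prometheus Operator needs a scrape configuration pointing at the RHACS monitoring port 9090. The snippet below is a minimal sketch; the central.stackrox.svc and scanner.stackrox.svc target names are assumptions based on the default stackrox namespace and service names, and should be adjusted to your cluster.

# Write the job definition to a snippet file; merge it under the scrape_configs section of your prometheus.yml
cat > rhacs-scrape-snippet.yml <<'EOF'
  - job_name: 'rhacs'
    scrape_interval: 30s
    static_configs:
      - targets:
          - 'central.stackrox.svc:9090'
          - 'scanner.stackrox.svc:9090'
EOF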
Chapter 2. Use a Playbook to establish a connection to a managed node | Chapter 2. Use a Playbook to establish a connection to a managed node To confirm your credentials, you can connect to a network device manually and retrieve its configuration. Replace the sample user and device name with your real credentials. For example, for a VyOS router: ssh [email protected] show config exit 2.1. Run a network Ansible command Instead of manually connecting and running a command on the network device, you can retrieve its configuration with a single Ansible command. ansible all -i vyos.example.net, -c ansible.netcommon.network_cli -u \ my_vyos_user -k -m vyos.vyos.vyos_facts -e \ ansible_network_os=vyos.vyos.vyos The flags in this command set seven values: the host group(s) to which the command should apply (in this case, all ) the inventory ( -i , the device or devices to target; without the trailing comma, -i points to an inventory file) the connection method ( -c , the method for connecting and executing ansible) the user ( -u , the username for the SSH connection) the SSH connection method ( -k , prompt for the password) the module ( -m , the Ansible module to run, using the fully qualified collection name (FQCN)) an extra variable ( -e , in this case, setting the network OS value) Note If you use ssh-agent with ssh keys, Ansible loads them automatically. You can omit the -k flag. If you are running Ansible in a virtual environment, you must also add the variable ansible_python_interpreter=/path/to/venv/bin/python . 2.2. Running a network Ansible Playbook If you want to run a particular command every day, you can save it in a playbook and run it with ansible-playbook instead of ansible. The playbook can store many of the parameters you provided with flags at the command line, leaving less to type each time you run it. You need two files for this, a playbook and an inventory file. Prerequisites Download first_playbook.yml from here . The playbook looks like this: --- - name: Network Getting Started First Playbook connection: ansible.netcommon.network_cli gather_facts: false 1 hosts: all tasks: - name: Get config for VyOS devices vyos.vyos.vyos_facts: gather_subset: all - name: Display the config debug: msg: "The hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}" Label Description gather_facts Ansible's native fact gathering ( ansible.builtin.setup ) is disabled here because the playbook relies on the facts provided by a platform-specific module ( vyos.vyos.vyos_facts ) in this networking collection. The playbook sets three of the seven values from the command line above: the group ( hosts: all ), the connection method ( connection: ansible.netcommon.network_cli ), and the module (in each task). With those values set in the playbook, you can omit them on the command line. The playbook also adds a second task to show the configuration output. When facts are gathered from a system, either through a collection-specific fact module such as vyos.vyos.vyos_facts or through ansible.builtin.setup , the gathered data is held in memory for use by future tasks instead of being written to the console. With most other modules you must explicitly register a variable to store and reuse the output of a module or task. For more information about facts, see Ansible facts in the Ansible Playbook Reference Guide .
The following debug task lets you see the results in your shell. Procedure Run the playbook with the following command. ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook.yml The playbook contains one play with two tasks, and generates output like this. USD ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook.yml PLAY [Network Getting Started First Playbook] *************************************************************************************************************************** TASK [Get config for VyOS devices] *************************************************************************************************************************** ok: [vyos.example.net] TASK [Display the config] *************************************************************************************************************************** ok: [vyos.example.net] => { "msg": "The hostname is vyos and the OS is VyOS 1.1.8" } Now that you can retrieve the device configuration, you can try updating it with Ansible. Download first_playbook_ext.yml from here , which is an extended version of the first playbook: The playbook looks like this: --- - name: Network Getting Started First Playbook Extended connection: ansible.netcommon.network_cli gather_facts: false hosts: all tasks: - name: Get config for VyOS devices vyos.vyos.vyos_facts: gather_subset: all - name: Display the config debug: msg: "The hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}" - name: Update the hostname vyos.vyos.vyos_config: backup: yes lines: - set system host-name vyos-changed - name: Get changed config for VyOS devices vyos.vyos.vyos_facts: gather_subset: all - name: Display the changed config debug: msg: "The new hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}" The extended first playbook has five tasks in a single play. Run the playbook with the following command. 
USD ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook_ext.yml The output shows you the change Ansible made to the configuration: USD ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook_ext.yml PLAY [Network Getting Started First Playbook Extended] ************************************************************************************************************************************ TASK [Get config for VyOS devices] ********************************************************************************************************************************** ok: [vyos.example.net] TASK [Display the config] ************************************************************************************************************************************* ok: [vyos.example.net] => { "msg": "The hostname is vyos and the OS is VyOS 1.1.8" } TASK [Update the hostname] ************************************************************************************************************************************* changed: [vyos.example.net] TASK [Get changed config for VyOS devices] ************************************************************************************************************************************* ok: [vyos.example.net] TASK [Display the changed config] ************************************************************************************************************************************* ok: [vyos.example.net] => { "msg": "The new hostname is vyos-changed and the OS is VyOS 1.1.8" } PLAY RECAP ************************************************************************************************************************************ vyos.example.net : ok=5 changed=1 unreachable=0 failed=0 2.3. Gather facts from network devices The gather_facts keyword supports gathering network device facts in standardized key/value pairs. You can feed these network facts into further tasks to manage the network device. You can also use the gather_network_resources parameter with the network *_facts modules (such as arista.eos.eos_facts ) to return a subset of the device configuration, as shown below. - hosts: arista gather_facts: True gather_subset: interfaces module_defaults: arista.eos.eos_facts: gather_network_resources: interfaces The playbook returns the following interface facts: "network_resources": { "interfaces": [ { "description": "test-interface", "enabled": true, "mtu": "512", "name": "Ethernet1" }, { "enabled": true, "mtu": "3000", "name": "Ethernet2" }, { "enabled": true, "name": "Ethernet3" }, { "enabled": true, "name": "Ethernet4" }, { "enabled": true, "name": "Ethernet5" }, { "enabled": true, "name": "Ethernet6" }, ] } Note gather_network_resources renders configuration data as facts for all supported resources ( interfaces/bgp/ospf/etc` ), whereas gather_subset is primarily used to fetch operational data. You can store these facts and use them directly in another task, such as with the eos_interfaces resource module. | [
"ssh [email protected] show config exit",
"ansible all -i vyos.example.net, -c ansible.netcommon.network_cli -u my_vyos_user -k -m vyos.vyos.vyos_facts -e ansible_network_os=vyos.vyos.vyos",
"--- - name: Network Getting Started First Playbook connection: ansible.netcommon.network_cli gather_facts: false 1 hosts: all tasks: - name: Get config for VyOS devices vyos.vyos.vyos_facts: gather_subset: all - name: Display the config debug: msg: \"The hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}\"",
"ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook.yml",
"ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook.yml PLAY [Network Getting Started First Playbook] *************************************************************************************************************************** TASK [Get config for VyOS devices] *************************************************************************************************************************** ok: [vyos.example.net] TASK [Display the config] *************************************************************************************************************************** ok: [vyos.example.net] => { \"msg\": \"The hostname is vyos and the OS is VyOS 1.1.8\" }",
"--- - name: Network Getting Started First Playbook Extended connection: ansible.netcommon.network_cli gather_facts: false hosts: all tasks: - name: Get config for VyOS devices vyos.vyos.vyos_facts: gather_subset: all - name: Display the config debug: msg: \"The hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}\" - name: Update the hostname vyos.vyos.vyos_config: backup: yes lines: - set system host-name vyos-changed - name: Get changed config for VyOS devices vyos.vyos.vyos_facts: gather_subset: all - name: Display the changed config debug: msg: \"The new hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}\"",
"ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook_ext.yml",
"ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook_ext.yml PLAY [Network Getting Started First Playbook Extended] ************************************************************************************************************************************ TASK [Get config for VyOS devices] ********************************************************************************************************************************** ok: [vyos.example.net] TASK [Display the config] ************************************************************************************************************************************* ok: [vyos.example.net] => { \"msg\": \"The hostname is vyos and the OS is VyOS 1.1.8\" } TASK [Update the hostname] ************************************************************************************************************************************* changed: [vyos.example.net] TASK [Get changed config for VyOS devices] ************************************************************************************************************************************* ok: [vyos.example.net] TASK [Display the changed config] ************************************************************************************************************************************* ok: [vyos.example.net] => { \"msg\": \"The new hostname is vyos-changed and the OS is VyOS 1.1.8\" } PLAY RECAP ************************************************************************************************************************************ vyos.example.net : ok=5 changed=1 unreachable=0 failed=0",
"- hosts: arista gather_facts: True gather_subset: interfaces module_defaults: arista.eos.eos_facts: gather_network_resources: interfaces",
"\"network_resources\": { \"interfaces\": [ { \"description\": \"test-interface\", \"enabled\": true, \"mtu\": \"512\", \"name\": \"Ethernet1\" }, { \"enabled\": true, \"mtu\": \"3000\", \"name\": \"Ethernet2\" }, { \"enabled\": true, \"name\": \"Ethernet3\" }, { \"enabled\": true, \"name\": \"Ethernet4\" }, { \"enabled\": true, \"name\": \"Ethernet5\" }, { \"enabled\": true, \"name\": \"Ethernet6\" }, ] }"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_ansible_playbooks/assembly-networking-playbook |
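Building on the fact-gathering example above, the gathered interface resources can be consumed directly by a later task in the same play. The sketch below writes and runs such a playbook; the ansible_network_resources fact key and the arista inventory group are assumptions carried over from the example and should be checked against your own inventory and the collection documentation. Connection details, such as ansible_connection and credentials, are assumed to be set in the inventory.

# Write a small playbook that gathers interface resource facts and reuses them in a debug task
cat > use_interface_facts.yml <<'EOF'
---
- hosts: arista
  gather_facts: true
  module_defaults:
    arista.eos.eos_facts:
      gather_network_resources: interfaces
  tasks:
    - name: Show the MTU of each gathered interface
      debug:
        msg: "{{ item.name }} mtu={{ item.mtu | default('default') }}"
      loop: "{{ ansible_network_resources.interfaces }}"
EOF
# Run it against your inventory (flags are examples)
ansible-playbook -i inventory use_interface_facts.yml -u ansible -k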
Developing Applications with Red Hat build of Apache Camel for Quarkus | Developing Applications with Red Hat build of Apache Camel for Quarkus Red Hat build of Apache Camel 4.8 Developing Applications with Red Hat build of Apache Camel for Quarkus | [
"<project> <properties> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version> <!-- The latest 3.15.x version from https://maven.repository.redhat.com/ga/com/redhat/quarkus/platform/quarkus-bom --> </quarkus.platform.version> </properties> <dependencyManagement> <dependencies> <!-- The BOMs managing the dependency versions --> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-camel-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- The extensions you chose in the project generator tool --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-sql</artifactId> <!-- No explicit version required here and below --> </dependency> </dependencies> </project>",
"import org.apache.camel.builder.RouteBuilder; public class TimerRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:foo?period=1000\") .log(\"Hello World\"); } }",
"import org.apache.camel.builder.RouteBuilder; import static org.apache.camel.builder.endpoint.StaticEndpointBuilders.timer; public class TimerRoute extends RouteBuilder { @Override public void configure() throws Exception { from(timer(\"foo\").period(1000)) .log(\"Hello World\"); } }",
"camel.component.log.exchange-formatter = #class:org.apache.camel.support.processor.DefaultExchangeFormatter camel.component.log.exchange-formatter.show-exchange-pattern = false camel.component.log.exchange-formatter.show-body-type = false",
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import org.apache.camel.quarkus.core.events.ComponentAddEvent; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public static class EventHandler { public void onComponentAdd(@Observes ComponentAddEvent event) { if (event.getComponent() instanceof LogComponent) { /* Perform some custom configuration of the component */ LogComponent logComponent = ((LogComponent) event.getComponent()); DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); logComponent.setExchangeFormatter(formatter); } } }",
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public class Configurations { /** * Produces a {@link LogComponent} instance with a custom exchange formatter set-up. */ @Named(\"log\") 1 LogComponent log() { DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); LogComponent component = new LogComponent(); component.setExchangeFormatter(formatter); return component; } }",
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; import org.apache.camel.builder.RouteBuilder; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped 1 public class TimerRoute extends RouteBuilder { @ConfigProperty(name = \"timer.period\", defaultValue = \"1000\") 2 String period; @Inject Counter counter; @Override public void configure() throws Exception { fromF(\"timer:foo?period=%s\", period) .setBody(exchange -> \"Incremented the counter: \" + counter.increment()) .to(\"log:cdi-example?showExchangePattern=false&showBodyType=false\"); } }",
"import jakarta.inject.Inject; import jakarta.enterprise.context.ApplicationScoped; import java.util.stream.Collectors; import java.util.List; import org.apache.camel.CamelContext; @ApplicationScoped public class MyBean { @Inject CamelContext context; public List<String> listRouteIds() { return context.getRoutes().stream().map(Route::getId).sorted().collect(Collectors.toList()); } }",
"import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.EndpointInject; import org.apache.camel.FluentProducerTemplate; import org.apache.camel.Produce; import org.apache.camel.ProducerTemplate; @ApplicationScoped class MyBean { @EndpointInject(\"direct:myDirect1\") ProducerTemplate producerTemplate; @EndpointInject(\"direct:myDirect2\") FluentProducerTemplate fluentProducerTemplate; @EndpointInject(\"direct:myDirect3\") DirectEndpoint directEndpoint; @Produce(\"direct:myDirect4\") ProducerTemplate produceProducer; @Produce(\"direct:myDirect5\") FluentProducerTemplate produceProducerFluent; }",
"import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.Produce; @ApplicationScoped class MyProduceBean { public interface ProduceInterface { String sayHello(String name); } @Produce(\"direct:myDirect6\") ProduceInterface produceInterface; void doSomething() { produceInterface.sayHello(\"Kermit\") } }",
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import io.quarkus.runtime.annotations.RegisterForReflection; @ApplicationScoped @Named(\"myNamedBean\") @RegisterForReflection public class NamedBean { public String hello(String name) { return \"Hello \" + name + \" from the NamedBean\"; } }",
"import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from(\"direct:named\") .bean(\"myNamedBean\", \"hello\"); /* ... which is an equivalent of the following: */ from(\"direct:named\") .to(\"bean:myNamedBean?method=hello\"); } }",
"import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.runtime.annotations.RegisterForReflection; import io.smallrye.common.annotation.Identifier; @ApplicationScoped @Identifier(\"myBeanIdentifier\") @RegisterForReflection public class MyBean { public String hello(String name) { return \"Hello \" + name + \" from MyBean\"; } }",
"import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from(\"direct:start\") .bean(\"myBeanIdentifier\", \"Camel\"); } }",
"import org.apache.camel.Consume; public class Foo { @Consume(\"activemq:cheese\") public void onCheese(String name) { } }",
"from(\"activemq:cheese\").bean(\"foo1234\", \"onCheese\")",
"curl -s localhost:9000/q/health/live",
"curl -s localhost:9000/q/health/ready",
"mvn clean compile quarkus:dev",
"<dependencies> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-opentelemetry</artifactId> </dependency> <dependency> <groupId>io.quarkiverse.micrometer.registry</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency> </dependencies>",
".to(\"micrometer:counter:org.acme.observability.greeting-provider?tags=type=events,purpose=example\")",
"@Inject MeterRegistry registry;",
"void countGreeting(Exchange exchange) { registry.counter(\"org.acme.observability.greeting\", \"type\", \"events\", \"purpose\", \"example\").increment(); }",
"from(\"platform-http:/greeting\") .removeHeaders(\"*\") .process(this::countGreeting)",
"@ApplicationScoped @Named(\"timerCounter\") public class TimerCounter { @Counted(value = \"org.acme.observability.timer-counter\", extraTags = { \"purpose\", \"example\" }) public void count() { } }",
".bean(\"timerCounter\", \"count\")",
"curl -s localhost:9000/q/metrics",
"curl -s localhost:9000/q/metrics | grep -i 'purpose=\"example\"'",
"<dependencies> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-opentelemetry</artifactId> </dependency> <dependency> <groupId>io.quarkiverse.micrometer.registry</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency> </dependencies>",
"We are using a property placeholder to be able to test this example in convenient way in a cloud environment quarkus.otel.exporter.otlp.traces.endpoint = http://USD{TELEMETRY_COLLECTOR_COLLECTOR_SERVICE_HOST:localhost}:4317",
"docker-compose up -d",
"mvn clean package java -jar target/quarkus-app/quarkus-run.jar [io.quarkus] (main) camel-quarkus-examples-... started in 1.163s. Listening on: http://0.0.0.0:8080",
"mvn clean package -Pnative ./target/*-runner [io.quarkus] (main) camel-quarkus-examples-... started in 0.013s. Listening on: http://0.0.0.0:8080",
"Charset.defaultCharset(), US-ASCII, ISO-8859-1, UTF-8, UTF-16BE, UTF-16LE, UTF-16",
"quarkus.native.add-all-charsets = true",
"quarkus.native.user-country=US quarkus.native.user-language=en",
"quarkus.native.resources.includes = docs/*,images/* quarkus.native.resources.excludes = docs/ignored.adoc,images/ignored.png",
"onException(MyException.class).handled(true); from(\"direct:route-that-could-produce-my-exception\").throw(MyException.class);",
"import io.quarkus.runtime.annotations.RegisterForReflection; @RegisterForReflection class MyClassAccessedReflectively { } @RegisterForReflection( targets = { org.third-party.Class1.class, org.third-party.Class2.class } ) class ReflectionRegistrations { }",
"quarkus.camel.native.reflection.include-patterns = org.apache.commons.lang3.tuple.* quarkus.camel.native.reflection.exclude-patterns = org.apache.commons.lang3.tuple.*Triple",
"quarkus.index-dependency.commons-lang3.group-id = org.apache.commons quarkus.index-dependency.commons-lang3.artifact-id = commons-lang3",
"Client side SSL quarkus.cxf.client.hello.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/hello quarkus.cxf.client.hello.service-interface = io.quarkiverse.cxf.it.security.policy.HelloService 1 quarkus.cxf.client.hello.trust-store-type = pkcs12 2 quarkus.cxf.client.hello.trust-store = client-truststore.pkcs12 quarkus.cxf.client.hello.trust-store-password = client-truststore-password",
"Server side SSL quarkus.tls.key-store.p12.path = localhost-keystore.pkcs12 quarkus.tls.key-store.p12.password = localhost-keystore-password quarkus.tls.key-store.p12.alias = localhost quarkus.tls.key-store.p12.alias-password = localhost-keystore-password",
"Server keystore for Simple TLS quarkus.tls.localhost-pkcs12.key-store.p12.path = localhost-keystore.pkcs12 quarkus.tls.localhost-pkcs12.key-store.p12.password = localhost-keystore-password quarkus.tls.localhost-pkcs12.key-store.p12.alias = localhost quarkus.tls.localhost-pkcs12.key-store.p12.alias-password = localhost-keystore-password Server truststore for Mutual TLS quarkus.tls.localhost-pkcs12.trust-store.p12.path = localhost-truststore.pkcs12 quarkus.tls.localhost-pkcs12.trust-store.p12.password = localhost-truststore-password Select localhost-pkcs12 as the TLS configuration for the HTTP server quarkus.http.tls-configuration-name = localhost-pkcs12 Do not allow any clients which do not prove their indentity through an SSL certificate quarkus.http.ssl.client-auth = required CXF service quarkus.cxf.endpoint.\"/mTls\".implementor = io.quarkiverse.cxf.it.auth.mtls.MTlsHelloServiceImpl CXF client with a properly set certificate for mTLS quarkus.cxf.client.mTls.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/mTls quarkus.cxf.client.mTls.service-interface = io.quarkiverse.cxf.it.security.policy.HelloService quarkus.cxf.client.mTls.key-store = target/classes/client-keystore.pkcs12 quarkus.cxf.client.mTls.key-store-type = pkcs12 quarkus.cxf.client.mTls.key-store-password = client-keystore-password quarkus.cxf.client.mTls.key-password = client-keystore-password quarkus.cxf.client.mTls.trust-store = target/classes/client-truststore.pkcs12 quarkus.cxf.client.mTls.trust-store-type = pkcs12 quarkus.cxf.client.mTls.trust-store-password = client-truststore-password Include the keystores in the native executable quarkus.native.resources.includes = *.pkcs12,*.jks",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <wsp:Policy wsp:Id=\"HttpsSecurityServicePolicy\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <wsp:ExactlyOne> <wsp:All> <sp:TransportBinding> <wsp:Policy> <sp:TransportToken> <wsp:Policy> <sp:HttpsToken RequireClientCertificate=\"false\" /> </wsp:Policy> </sp:TransportToken> <sp:IncludeTimestamp /> <sp:AlgorithmSuite> <wsp:Policy> <sp:Basic128 /> </wsp:Policy> </sp:AlgorithmSuite> </wsp:Policy> </sp:TransportBinding> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>",
"package io.quarkiverse.cxf.it.security.policy; import jakarta.jws.WebMethod; import jakarta.jws.WebService; import org.apache.cxf.annotations.Policy; /** * A service implementation with a transport policy set */ @WebService(serviceName = \"HttpsPolicyHelloService\") @Policy(placement = Policy.Placement.BINDING, uri = \"https-policy.xml\") public interface HttpsPolicyHelloService extends AbstractHelloService { @WebMethod @Override public String hello(String text); }",
"ERROR [org.apa.cxf.ws.pol.PolicyVerificationInInterceptor] Inbound policy verification failed: These policy alternatives can not be satisfied: {http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702}TransportBinding: TLS is not enabled",
"quarkus.cxf.client.basicAuth.wsdl = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth?wsdl quarkus.cxf.client.basicAuth.client-endpoint-url = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth quarkus.cxf.client.basicAuth.username = bob quarkus.cxf.client.basicAuth.password = bob234",
"quarkus.cxf.client.basicAuthSecureWsdl.wsdl = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth?wsdl quarkus.cxf.client.basicAuthSecureWsdl.client-endpoint-url = http://localhost:USD{quarkus.http.test-port}/soap/basicAuthSecureWsdl quarkus.cxf.client.basicAuthSecureWsdl.username = bob quarkus.cxf.client.basicAuthSecureWsdl.password = USD{client-server.bob.password} quarkus.cxf.client.basicAuthSecureWsdl.secure-wsdl-access = true",
"quarkus.http.auth.basic = true quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.alice = alice123 quarkus.security.users.embedded.roles.alice = admin quarkus.security.users.embedded.users.bob = bob234 quarkus.security.users.embedded.roles.bob = app-user",
"package io.quarkiverse.cxf.it.auth.basic; import jakarta.annotation.security.RolesAllowed; import jakarta.jws.WebService; import io.quarkiverse.cxf.it.HelloService; @WebService(serviceName = \"HelloService\", targetNamespace = HelloService.NS) @RolesAllowed(\"app-user\") public class BasicAuthHelloServiceImpl implements HelloService { @Override public String hello(String person) { return \"Hello \" + person + \"!\"; } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <wsp:Policy wsp:Id=\"UsernameTokenSecurityServicePolicy\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\" xmlns:sp13=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200802\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <wsp:ExactlyOne> <wsp:All> <sp:SupportingTokens> <wsp:Policy> <sp:UsernameToken sp:IncludeToken=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient\"> <wsp:Policy> <sp:WssUsernameToken11 /> <sp13:Created /> <sp13:Nonce /> </wsp:Policy> </sp:UsernameToken> </wsp:Policy> </sp:SupportingTokens> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>",
"@WebService(serviceName = \"UsernameTokenPolicyHelloService\") @Policy(placement = Policy.Placement.BINDING, uri = \"username-token-policy.xml\") public interface UsernameTokenPolicyHelloService extends AbstractHelloService { }",
"A service with a UsernameToken policy assertion quarkus.cxf.endpoint.\"/helloUsernameToken\".implementor = io.quarkiverse.cxf.it.security.policy.UsernameTokenPolicyHelloServiceImpl quarkus.cxf.endpoint.\"/helloUsernameToken\".security.callback-handler = #usernameTokenPasswordCallback These properties are used in UsernameTokenPasswordCallback and in the configuration of the helloUsernameToken below wss.user = cxf-user wss.password = secret A client with a UsernameToken policy assertion quarkus.cxf.client.helloUsernameToken.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/helloUsernameToken quarkus.cxf.client.helloUsernameToken.service-interface = io.quarkiverse.cxf.it.security.policy.UsernameTokenPolicyHelloService quarkus.cxf.client.helloUsernameToken.security.username = USD{wss.user} quarkus.cxf.client.helloUsernameToken.security.password = USD{wss.password}",
"package io.quarkiverse.cxf.it.security.policy; import java.io.IOException; import javax.security.auth.callback.Callback; import javax.security.auth.callback.CallbackHandler; import javax.security.auth.callback.UnsupportedCallbackException; import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.wss4j.common.ext.WSPasswordCallback; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped @Named(\"usernameTokenPasswordCallback\") /* We refer to this bean by this name from application.properties */ public class UsernameTokenPasswordCallback implements CallbackHandler { /* These two configuration properties are set in application.properties */ @ConfigProperty(name = \"wss.password\") String password; @ConfigProperty(name = \"wss.user\") String user; @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { if (callbacks.length < 1) { throw new IllegalStateException(\"Expected a \" + WSPasswordCallback.class.getName() + \" at possition 0 of callbacks. Got array of length \" + callbacks.length); } if (!(callbacks[0] instanceof WSPasswordCallback)) { throw new IllegalStateException( \"Expected a \" + WSPasswordCallback.class.getName() + \" at possition 0 of callbacks. Got an instance of \" + callbacks[0].getClass().getName() + \" at possition 0\"); } final WSPasswordCallback pc = (WSPasswordCallback) callbacks[0]; if (user.equals(pc.getIdentifier())) { pc.setPassword(password); } else { throw new IllegalStateException(\"Unexpected user \" + user); } } }",
"package io.quarkiverse.cxf.it.security.policy; import org.assertj.core.api.Assertions; import org.junit.jupiter.api.Test; import io.quarkiverse.cxf.annotation.CXFClient; import io.quarkus.test.junit.QuarkusTest; @QuarkusTest public class UsernameTokenTest { @CXFClient(\"helloUsernameToken\") UsernameTokenPolicyHelloService helloUsernameToken; @Test void helloUsernameToken() { Assertions.assertThat(helloUsernameToken.hello(\"CXF\")).isEqualTo(\"Hello CXF from UsernameToken!\"); } }",
"<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soap:Header> <wsse:Security xmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\" soap:mustUnderstand=\"1\"> <wsse:UsernameToken xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" wsu:Id=\"UsernameToken-bac4f255-147e-42a4-aeec-e0a3f5cd3587\"> <wsse:Username>cxf-user</wsse:Username> <wsse:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">secret</wsse:Password> <wsse:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">3uX15dZT08jRWFWxyWmfhg==</wsse:Nonce> <wsu:Created>2024-10-02T17:32:10.497Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soap:Header> <soap:Body> <ns2:hello xmlns:ns2=\"http://policy.security.it.cxf.quarkiverse.io/\"> <arg0>CXF</arg0> </ns2:hello> </soap:Body> </soap:Envelope>",
"export USDCAMEL_VAULT_AWS_ACCESS_KEY=accessKey export USDCAMEL_VAULT_AWS_SECRET_KEY=secretKey export USDCAMEL_VAULT_AWS_REGION=region",
"camel.vault.aws.accessKey = accessKey camel.vault.aws.secretKey = secretKey camel.vault.aws.region = region",
"export USDCAMEL_VAULT_AWS_USE_DEFAULT_CREDENTIALS_PROVIDER=true export USDCAMEL_VAULT_AWS_REGION=region",
"camel.vault.aws.defaultCredentialsProvider = true camel.vault.aws.region = region",
"export USDCAMEL_VAULT_AWS_USE_PROFILE_CREDENTIALS_PROVIDER=true export USDCAMEL_VAULT_AWS_PROFILE_NAME=test-account export USDCAMEL_VAULT_AWS_REGION=region",
"camel.vault.aws.profileCredentialsProvider = true camel.vault.aws.profileName = test-account camel.vault.aws.region = region",
"<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{aws:route}}\"/> </route> </camelContext>",
"<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{aws:route:default}}\"/> </route> </camelContext>",
"{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }",
"<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{aws:database/username}}\"/> </route> </camelContext>",
"<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{aws:database/username:admin}}\"/> </route> </camelContext>",
"export USDCAMEL_VAULT_GCP_SERVICE_ACCOUNT_KEY=file:////path/to/service.accountkey export USDCAMEL_VAULT_GCP_PROJECT_ID=projectId",
"camel.vault.gcp.serviceAccountKey = accessKey camel.vault.gcp.projectId = secretKey",
"export USDCAMEL_VAULT_GCP_USE_DEFAULT_INSTANCE=true export USDCAMEL_VAULT_GCP_PROJECT_ID=projectId",
"camel.vault.gcp.useDefaultInstance = true camel.vault.aws.projectId = region",
"<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{gcp:route}}\"/> </route> </camelContext>",
"<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{gcp:route:default}}\"/> </route> </camelContext>",
"{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }",
"<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{gcp:database/username}}\"/> </route> </camelContext>",
"<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{gcp:database/username:admin}}\"/> </route> </camelContext>",
"export USDCAMEL_VAULT_AZURE_TENANT_ID=tenantId export USDCAMEL_VAULT_AZURE_CLIENT_ID=clientId export USDCAMEL_VAULT_AZURE_CLIENT_SECRET=clientSecret export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName",
"camel.vault.azure.tenantId = accessKey camel.vault.azure.clientId = clientId camel.vault.azure.clientSecret = clientSecret camel.vault.azure.vaultName = vaultName",
"export USDCAMEL_VAULT_AZURE_IDENTITY_ENABLED=true export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName",
"camel.vault.azure.azureIdentityEnabled = true camel.vault.azure.vaultName = vaultName",
"<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{azure:route}}\"/> </route> </camelContext>",
"<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{azure:route:default}}\"/> </route> </camelContext>",
"{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }",
"<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{azure:database/username}}\"/> </route> </camelContext>",
"<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{azure:database/username:admin}}\"/> </route> </camelContext>",
"export USDCAMEL_VAULT_HASHICORP_TOKEN=token export USDCAMEL_VAULT_HASHICORP_HOST=host export USDCAMEL_VAULT_HASHICORP_PORT=port export USDCAMEL_VAULT_HASHICORP_SCHEME=http/https",
"camel.vault.hashicorp.token = token camel.vault.hashicorp.host = host camel.vault.hashicorp.port = port camel.vault.hashicorp.scheme = scheme",
"<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:secret:route}}\"/> </route> </camelContext>",
"<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:secret:route:default}}\"/> </route> </camelContext>",
"{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }",
"<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{hashicorp:secret:database/username}}\"/> </route> </camelContext>",
"<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{hashicorp:secret:database/username:admin}}\"/> </route> </camelContext>",
"<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:secret:route@2}}\"/> </route> </camelContext>",
"<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:route:default@2}}\"/> </route> </camelContext>",
"<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{hashicorp:secret:database/username:admin@2}}\"/> </route> </camelContext>",
"export USDCAMEL_VAULT_AWS_USE_DEFAULT_CREDENTIALS_PROVIDER=accessKey export USDCAMEL_VAULT_AWS_REGION=region",
"camel.vault.aws.useDefaultCredentialProvider = true camel.vault.aws.region = region",
"camel.vault.aws.refreshEnabled=true camel.vault.aws.refreshPeriod=60000 camel.vault.aws.secrets=Secret camel.main.context-reload-enabled = true",
"{ \"source\": [\"aws.secretsmanager\"], \"detail-type\": [\"AWS API Call via CloudTrail\"], \"detail\": { \"eventSource\": [\"secretsmanager.amazonaws.com\"] } }",
"{ \"Policy\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Id\\\":\\\"<queue_arn>/SQSDefaultPolicy\\\",\\\"Statement\\\":[{\\\"Sid\\\": \\\"EventsToMyQueue\\\", \\\"Effect\\\": \\\"Allow\\\", \\\"Principal\\\": {\\\"Service\\\": \\\"events.amazonaws.com\\\"}, \\\"Action\\\": \\\"sqs:SendMessage\\\", \\\"Resource\\\": \\\"<queue_arn>\\\", \\\"Condition\\\": {\\\"ArnEquals\\\": {\\\"aws:SourceArn\\\": \\\"<eventbridge_rule_arn>\\\"}}}]}\" }",
"aws sqs set-queue-attributes --queue-url <queue_url> --attributes file://policy.json",
"camel.vault.aws.refreshEnabled=true camel.vault.aws.refreshPeriod=60000 camel.vault.aws.secrets=Secret camel.main.context-reload-enabled = true camel.vault.aws.useSqsNotification=true camel.vault.aws.sqsQueueUrl=<queue_url>",
"export USDCAMEL_VAULT_GCP_USE_DEFAULT_INSTANCE=true export USDCAMEL_VAULT_GCP_PROJECT_ID=projectId",
"camel.vault.gcp.useDefaultInstance = true camel.vault.aws.projectId = projectId",
"camel.vault.gcp.projectId= projectId camel.vault.gcp.refreshEnabled=true camel.vault.gcp.refreshPeriod=60000 camel.vault.gcp.secrets=hello* camel.vault.gcp.subscriptionName=subscriptionName camel.main.context-reload-enabled = true",
"export USDCAMEL_VAULT_AZURE_TENANT_ID=tenantId export USDCAMEL_VAULT_AZURE_CLIENT_ID=clientId export USDCAMEL_VAULT_AZURE_CLIENT_SECRET=clientSecret export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName",
"camel.vault.azure.tenantId = accessKey camel.vault.azure.clientId = clientId camel.vault.azure.clientSecret = clientSecret camel.vault.azure.vaultName = vaultName",
"export USDCAMEL_VAULT_AZURE_IDENTITY_ENABLED=true export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName",
"camel.vault.azure.azureIdentityEnabled = true camel.vault.azure.vaultName = vaultName",
"camel.vault.azure.refreshEnabled=true camel.vault.azure.refreshPeriod=60000 camel.vault.azure.secrets=Secret camel.vault.azure.eventhubConnectionString=eventhub_conn_string camel.vault.azure.blobAccountName=blob_account_name camel.vault.azure.blobContainerName=blob_container_name camel.vault.azure.blobAccessKey=blob_access_key camel.main.context-reload-enabled = true"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html-single/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/%7BLinkCEQReference%7Dextensions-vertx-websocket |
Chapter 6. Installing a cluster on OpenStack with Kuryr on your own infrastructure | Chapter 6. Installing a cluster on OpenStack with Kuryr on your own infrastructure Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. In OpenShift Container Platform version 4.12, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.12 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 6.2. About Kuryr SDN Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. 
This is modeled in OpenShift Container Platform as a DaemonSet object. The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where all of the following criteria are true: The RHOSP version is less than 16. The deployment uses UDP services, or a large number of TCP services on few hypervisors. or The ovn-octavia Octavia driver is disabled. The deployment uses a large number of TCP services on few hypervisors. 6.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 6.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr Resource Value Floating IP addresses 3 - plus the expected number of Services of LoadBalancer type Ports 1500 - 1 needed per Pod Routers 1 Subnets 250 - 1 needed per Namespace/Project Networks 250 - 1 needed per Namespace/Project RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 250 - 1 needed per Service and per NetworkPolicy Security group rules 1000 Server groups 2 - plus 1 for each additional availability zone in each machine pool Load balancers 100 - 1 needed per Service Load balancer listeners 500 - 1 needed per Service-exposed port Load balancer pools 500 - 1 needed per Service-exposed port A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses ports pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. 
Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use Neutron Trunk ports extension. Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid . 6.3.1. Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 6.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work. In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 6.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... 
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This may take some time depending on the speed of your network and Undercloud disk. Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. 6.3.3.1. The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. 6.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. 
Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Kuryr SDN does not support automatic unidling by a service. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. 6.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 6.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 6.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 6.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.5. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 6.6. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. 
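Note Before you run the download script in the following procedure, you can optionally confirm that the tooling installed in the "Downloading playbook dependencies" section is present on your machine. This is a minimal check, and it assumes the packages installed with the earlier yum command: $ python --version $ ansible --version $ openstack --version $ curl --version If any of these commands fail, revisit the repository and module installation steps before you continue.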
Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 6.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures. do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.9. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.12 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 6.10. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 6.11. 
Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 6.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 6.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. 
Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 6.12. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 6.13. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. 6.14. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.14.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.2. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. 
The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.14.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.3. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. 
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.14.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.4. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. 
Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 6.14.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 6.5. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 6.14.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 6.6. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installation program selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. 
Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. Additional networks that are attached to a control plane machine are also attached to the bootstrap node. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installation program selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . 
You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 6.14.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. 
Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 6.14.7. Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OVN-Kubernetes network plugin, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType . This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 The cluster network plugin to install. The supported values are Kuryr , OVNKubernetes , and OpenShiftSDN . The default value is OVNKubernetes . 3 4 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services. 6.14.8. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 6.14.8.1. 
RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 6.14.8.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 
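If you script your cluster deployments, these four settings can also be applied with a short Python snippet in the same style as the install-config.yaml one-liners used elsewhere in this installation flow. This is a sketch only; the VIP addresses, subnet UUID, and CIDR are the placeholder values from the preceding example and must be replaced with values from your environment, and the script assumes that it runs from the directory that contains install-config.yaml:
import yaml

path = "install-config.yaml"
with open(path) as f:
    config = yaml.safe_load(f)

# Placeholder values taken from the example above; substitute your own.
openstack = config.setdefault("platform", {}).setdefault("openstack", {})
openstack["apiVIPs"] = ["192.0.2.13"]        # unassigned address in the machine network
openstack["ingressVIPs"] = ["192.0.2.23"]    # unassigned address in the machine network
openstack["machinesSubnet"] = "fa806b2f-ac49-4bce-b9db-124bc64209bf"  # provider network subnet UUID

config.setdefault("networking", {})["machineNetwork"] = [{"cidr": "192.0.2.0/24"}]

with open(path, "w") as f:
    yaml.dump(config, f, default_flow_style=False)
As with any edit to install-config.yaml, back up the file first, because it is consumed during the installation process.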
Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 6.14.9. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 6.14.10. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-network-* Example output cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. 
The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5 1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster. 6.14.11. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, e.g. 192.0.2.0/24 . 
To set the value manually, open the file and set the cidr value under networking.machineNetwork to match your intended Neutron subnet. 6.14.12. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 6.14.13. Modifying the network type By default, the installation program selects the OVNKubernetes network type. To use Kuryr instead, change the value in the installation configuration file that the program generated. Prerequisites You have the file install-config.yaml that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["networkType"] = "Kuryr"; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set networking.networkType to "Kuryr" . 6.15. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file.
Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 6.16. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. 
The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 6.17. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". 
Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 6.18. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform postinstallation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" 6.19. 
Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 6.20. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 6.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.22. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. 
If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 6.23. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml steps Approve the certificate signing requests for the machines. 6.24. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
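If you are adding several machines at once, the per-CSR commands above can be wrapped in a small script. The following Python sketch is one possible approach, not part of the documented procedure: it assumes that oc is installed and that the kubeconfig is exported, lists CSRs as JSON, treats those without a status as pending (the same test that the go-template one-liner uses), and approves only requests whose requestor is the node-bootstrapper service account or a node identity, both taken from the example outputs above:
import json
import subprocess

# List all CSRs in the cluster as JSON.
raw = subprocess.run(
    ["oc", "get", "csr", "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout
csrs = json.loads(raw)["items"]

for csr in csrs:
    name = csr["metadata"]["name"]
    requestor = csr["spec"].get("username", "")
    pending = not csr.get("status")  # same test as the go-template one-liner
    trusted = (
        requestor == "system:serviceaccount:openshift-machine-config-operator:node-bootstrapper"
        or requestor.startswith("system:node:")
    )
    if pending and trusted:
        subprocess.run(["oc", "adm", "certificate", "approve", name], check=True)
        print(f"approved {name} (requested by {requestor})")
Review the output before relying on automation like this; for unattended approval of serving certificates, the stricter identity checks described in the note that follows still apply.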
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 6.25. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 6.26. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . 
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service . 6.27. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . | [
"sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>",
"(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml",
"- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787",
"(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml",
"openstack loadbalancer provider list",
"+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/openstack/down-containers.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-network-03-config.yml 1",
"ls <installation_directory>/manifests/cluster-network-*",
"cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"networkType\"] = \"Kuryr\"; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". βββ auth β βββ kubeadmin-password β βββ kubeconfig βββ bootstrap.ign βββ master.ign βββ metadata.json βββ worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"openshift-install --log-level debug wait-for install-complete"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_openstack/installing-openstack-user-kuryr |
C.5. Selection Criteria Processing Examples | C.5. Selection Criteria Processing Examples This section provides a series of examples showing how to use selection criteria in commands that process LVM logical volumes. This example shows the initial configuration of a group of logical volumes, including thin snapshots. Thin snapshots have the "skip activation" flag set by default. This example also includes the logical volume lvol4 which also has the "skip activation" flag set. The following command removes the skip activation flag from all logical volumes that are thin snapshots. The following command shows the configuration of the logical volumes after executing the lvchange command. Note that the "skip activation" flag has not been unset from the logical volume that is not a thin snapshot. The following command shows the configuration of the logical volumes after an additional thin origin/snapshot volume has been created. The following command activates logical volumes that are both thin snapshot volumes and have an origin volume of lvol2 . If you execute a command on a whole item while specifying selection criteria that match an item from that whole, the entire whole item is processed. For example, if you change a volume group while selecting one or more items from that volume group, the whole volume group is selected. This example selects logical volume lvol1 , which is part of volume group vg . All of the logical volumes in volume group vg are processed. The following example shows a more complex selection criteria statement. In this example, all logical volumes are tagged with "mytag" if they have a role of origin and are also named lvol[456] or the logical volume size is more than 5g. | [
"lvs -o name,skip_activation,layout,role LV SkipAct Layout Role root linear public swap linear public lvol1 thin,sparse public lvol2 thin,sparse public,origin,thinorigin lvol3 skip activation thin,sparse public,snapshot,thinsnapshot lvol4 skip activation linear public pool thin,pool private",
"lvchange --setactivationskip n -S 'role=thinsnapshot' Logical volume \"lvol3\" changed.",
"lvs -o name,active,skip_activation,layout,role LV Active SkipAct Layout Role root active linear public swap active linear public lvol1 active thin,sparse public lvol2 active thin,sparse public,origin,thinorigin lvol3 thin,sparse public,snapshot,thinsnapshot lvol4 active skip activation linear public pool active thin,pool private",
"lvs -o name,active,skip_activation,origin,layout,role LV Active SkipAct Origin Layout Role root active linear public swap active linear public lvol1 active thin,sparse public lvol2 active thin,sparse public,origin,thinorigin lvol3 lvol2 thin,sparse public,snapshot,thinsnapshot lvol4 active skip activation linear public lvol5 active thin,sparse public,origin,thinorigin lvol6 lvol5 thin,sparse public,snapshot,thinsnapshot pool active thin,pool private",
"lvchange -ay -S 'lv_role=thinsnapshot && origin=lvol2' lvs -o name,active,skip_activation,origin,layout,role LV Active SkipAct Origin Layout Role root active linear public swap active linear public lvol1 active thin,sparse public lvol2 active thin,sparse public,origin,thinorigin lvol3 active lvol2 thin,sparse public,snapshot,thinsnapshot lvol4 active skip activation linear public lvol5 active thin,sparse public,origin,thinorigin lvol6 lvol5 thin,sparse public,snapshot,thinsnapshot pool active thin,pool private",
"lvs -o name,vg_name LV VG root fedora swap fedora lvol1 vg lvol2 vg lvol3 vg lvol4 vg lvol5 vg lvol6 vg pool vg vgchange -ay -S 'lv_name=lvol1' 7 logical volume(s) in volume group \"vg\" now active",
"lvchange --addtag mytag -S '(role=origin && lv_name=~lvol[456]) || lv_size > 5g' Logical volume \"root\" changed. Logical volume \"lvol5\" changed."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/selection_processing_examples |
5.275. RDMA | 5.275. RDMA 5.275.1. RHBA-2012:0770 - RDMA stack bug fix and enhancement update Updated RDMA packages that fix various bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. Red Hat Enterprise Linux includes a collection of InfiniBand and iWARP utilities, libraries and development packages for writing applications that use Remote Direct Memory Access (RDMA) technology. Note The RDMA packages have been upgraded to the latest upstream versions, which provide a number of bug fixes and enhancements over the previous versions (BZ# 739138 ). BZ# 814845 The rdma_bw and rdma_lat utilities provided by the perftest package are now deprecated and will be removed from the perftest package in a future update. Users should use the following utilities instead: ib_write_bw , ib_write_lat , ib_read_bw , and ib_read_lat . Bug Fixes BZ# 696019 Previously, the rping utility did not properly join threads on shutdown. Consequently, on iWARP connections in particular, a race condition was triggered that resulted in the rping utility terminating unexpectedly with a segmentation fault. This update modifies rping to properly handle thread teardown. As a result, rping no longer crashes on iWARP connections. BZ# 700289 Previously, the kernel RDMA Connection Manager (rdmacm) did not have an option to reuse a socket port before the timeout had expired on that port after the last close. Consequently, when trying to open and close large numbers of sockets rapidly, it was possible to run out of suitable sockets that were not waiting in the timewait state. This update improves the kernel rdmacm provider to implement the SO_REUSEADDR option available for TCP/UDP sockets, which allows a socket that is closed but still in the timewait state to be reused when needed. As a result, it is now much more difficult to run out of sockets because the rdmacm provider does not need to wait for them to expire from the timewait state before they can be reused. BZ# 735954 The framework of the MVAPICH2 process manager mpirun_rsh in the mvapich2 package was broken. Consequently, all attempts to use mpirun_rsh failed. This update upgrades mpirun_rsh to later MVAPICH2 upstream sources that resolve the problem. As a result, mpirun_rsh works as expected. BZ# 747406 Previously, the permissions on the /dev/ipath* files were not permissive enough for normal users to access. Consequently, when a normal user attempted to run a Message Passing Interface (MPI) application using the Performance Scaled Messaging (PSM) Byte Transfer Layer (BTL), it failed due to the inability to open files starting with /dev/ipath . This update makes sure that the files starting with /dev/ipath have the correct permissions to be opened in read-write mode by normal users. As a result, attempts to run an MPI application using the PSM BTL succeed. BZ# 750609 Previously, mappings from InfiniBand bit values to link speeds only extended to Quad Data Rate (QDR). Consequently, attempts to use newer InfiniBand cards that supported speeds faster than QDR did not work because the stack did not understand the bit values in the link speed field. This update adds FDR (Fourteen Data Rate), FDR10, and EDR (Enhanced Data Rate) link speeds to the kernel and user space libraries. Users can now make use of newer InfiniBand cards at these higher speeds. BZ# 754196 OpenSM did not support the subnet_prefix option on the command line.
Consequently, in order to have two instances of OpenSM running on two different fabrics at the same time and on the same machine, the sysadmin had to edit two different opensm.conf files and specify the subnet_prefix separately in each file in order to have different prefixes on the different subnets. With this update, OpenSM accepts a subnet_prefix option and the OpenSM init script now starts OpenSM using this option when it is being started on multiple fabrics. As a result, a sysadmin is no longer required to hand edit multiple opensm.conf files to create otherwise identical configurations that only vary by which fabric they are managing. BZ# 755459 Previously, ibv_devinfo (a program included in libibverbs-utils) did not catch bad port numbers on the command line and return an error code. Consequently, scripts could not reliably tell whether or not the command had succeeded or failed due to a bad port number. This update fixes ibv_devinfo so that it returns a non-zero error condition when a user attempts to run it on a non-existent InfiniBand device port. As a result, scripts can now tell for certain if the port value they pass to ibv_devinfo was a valid port or was out of range. BZ# 758498 Initialization of RDMA over Converged Ethernet (RoCE) based queue pairs (QPs) was not completed successfully when initialization was done through libibverbs and not through librdmacm. Consequently, attempting to open the connection failed and the following error message was displayed: This updated kernel stack provides a fix for the libibverbs based RoCE QP creation and now users can properly create QPs whether they use libibverbs or librdmacm as the connection initiation method. BZ# 768109 Previously, the openmpi library did not honor the tcp_port_range settings. Consequently, if users wished to limit the TCP ports that openmpi used they could not do so. This update to a later upstream version that does not have this problem allows users to now limit which TCP ports openmpi attempts to use. BZ# 768457 Previously, the shared OpenType font library " libotf.so.0 " was provided by both the openmpi package and the libotf package. Consequently, when an RPM spec file requested libotf.so.0 in order to operate properly, Yum could install either openmpi or libotf to satisfy the dependency, but as these two packages do not provide compatible libotf.so.0 libraries, the program might or might not work depending on whether or not the right provider was selected. The libotf.so.0 in openmpi is not intended for other applications to link against, it is an internal library. With this update, libotf.so.0 in openmpi is excluded from RPM's library identification searches. As a result, applications linking against libotf will get the right libotf, and openmpi will not accidentally be installed to satisfy the need for libotf. BZ# 773713 There was a race condition in handling of completion events in the perftest programs. Under certain conditions, the perftest program being used would terminate unexpectedly with a segmentation fault. This update adds separate send receive completion queues in place of the single completion queue for both send and receive operations. The race between the finish of a send and the finish of a receive is thereby avoided. As a result, the perftest applications no longer crash with a segmentation fault. BZ# 804002 The rds-ping tool did not check to make sure that a socket was available before sending the ping packet. 
Consequently, when the timeout between packets was set very small by the user, packets could fill up all available sockets and then overwrite one of the sockets before any ping-packets were returned. This resulted in corruption in the rds-ping data structures and eventually rds-ping terminated unexpectedly with a segmentation fault. With this update, the rds-ping program stalls on sending any more packets if there are no sockets without outstanding packets. As a result, rds-ping no longer crashes with a segmentation fault when the timeout between packets is very small. BZ# 805129 Due to a bug in the libmlx4.conf modprobe configuration, usage of modprobe could result in an infinite loop of modprobe processes. If the bug was encountered, the processes would continually fork until there were no processes able to run and the system would become unresponsive. This update improves the code and as a result an incorrect configuration of options in /etc/modprobe.d/libmlx4.conf no longer results in a system that is unresponsive and that requires a hard reboot in order to be restored to proper operation. BZ# 808673 The qperf application had an outdated constant for PF_RDS in its source code that did not match the officially assigned value for PF_RDS and so qperf would compile with the wrong PF_RDS constant. Consequently, when it was run it would mistakenly think RDS (Reliable Datagram Service) was not supported on the machine even when it was and would refuse to run any RDS tests. This update removes the PF_RDS constant from the qperf source code so that it will pick up the correct constant from the system header files. As a result, qperf now properly runs RDS performance tests. BZ# 815215 The srptools RPM did not automatically add the SCSI Remote Protocol daemon (srpd) to the service list. Consequently, the chkconfig --list command would not show the srpd service at all and the service could not be enabled. The srptools RPM now properly adds the srpd init script to the list of available services (it is disabled by default). Users can now see the srpd service using chkconfig --list and can enable the srpd service with the chkconfig --level 345 srpd on command. BZ# 815622 There was a bad test in the rdma init script. Consequently, the rds module would be loaded even if the user had configured it not to load. This update corrects the test in the init script so that all conditions must be met instead of just the first condition. As a result, the rds module is only loaded when the user has configured it to be loaded or if autoloaded by the kernel due to rds usage on the local machine. Enhancements BZ# 700285 On large InfiniBand networks, Subnet Administration service lookups consumed a large amount of bandwidth. Consequently, it could take upwards of 1 minute to look up a route from one machine to another if the network InfiniBand Subnet Manager (OpenSM) was heavily congested. This update adds the InfiniBand Communication Management Assistant (ibacm) that caches routes in a similar manner to the ARP cache for Ethernet. The ibacm program caches PathRecords from the Subnet Administration service (SA) which includes information such as MTU (Maximum Transmission Unit), SL (Service Level), SLID (Source Local Identifier) and DLID (Destination Local Identifier) for InfiniBand paths. This information is important to set up QP's properly. As a result, large subnets with many nodes will have reduced overall SA Query traffic and route lookup times. 
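For example, the srptools change described earlier in these notes can be exercised with the commands already quoted there; the following is shown only as a convenience, must be run as root, and the service remains disabled until it is explicitly turned on:
chkconfig --list srpd
chkconfig --level 345 srpd on
service srpd start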
Users of RDMA should upgrade to these updated packages, which provide numerous bug fixes and enhancements. | [
"cannot transition QP to RTR state"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/rdma |
Part VI. Administration: Managing Policies | Part VI. Administration: Managing Policies This part provides instructions on how to define password policies, manage the Kerberos domain, use the sudo utility, how to configure Host-Based Access Control and define SELinux User Maps. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/p.administration-guide-policies |
20.32. Deleting a Storage Volume's Contents | 20.32. Deleting a Storage Volume's Contents The virsh vol-wipe vol pool command wipes a volume, to ensure data previously on the volume is not accessible to future reads. The command requires a --pool pool argument, which is the name or UUID of the storage pool the volume is in, as well as vol , which is the name, key, or path of the volume to wipe. Note that it is possible to choose different wiping algorithms instead of re-writing the volume with zeroes, by using the --algorithm argument with one of the following supported algorithm types: zero - 1-pass all zeroes nnsa - 4-pass NNSA Policy Letter NAP-14.1-C (XVI-8) for sanitizing removable and non-removable hard disks: random x2, 0x00, verify. dod - 4-pass DoD 5220.22-M section 8-306 procedure for sanitizing removable and non-removable rigid disks: random, 0x00, 0xff, verify. bsi - 9-pass method recommended by the German Center of Security in Information Technologies (http://www.bsi.bund.de): 0xff, 0xfe, 0xfd, 0xfb, 0xf7, 0xef, 0xdf, 0xbf, 0x7f. gutmann - The canonical 35-pass sequence described in Gutmann's paper. schneier - 7-pass method described by Bruce Schneier in "Applied Cryptography" (1996): 0x00, 0xff, random x5. pfitzner7 - Roy Pfitzner's 7-random-pass method: random x7. pfitzner33 - Roy Pfitzner's 33-random-pass method: random x33. random - 1-pass pattern: random. Note The availability of algorithms may be limited by the version of the "scrub" binary installed on the host. Example 20.92. How to delete a storage volume's contents (How to wipe the storage volume) The following example wipes the contents of the storage volume new-vol , which has the storage pool vdisk associated with it: | [
"virsh vol-wipe new-vol vdisk vol new-vol wiped"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virsh-vol-wipe |
Deploying and managing OpenShift Data Foundation using Google Cloud | Deploying and managing OpenShift Data Foundation using Google Cloud Red Hat OpenShift Data Foundation 4.9 Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Google Cloud. Important Deploying and managing OpenShift Data Foundation on Google Cloud is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Preface Red Hat OpenShift Data Foundation 4.9 supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) Google Cloud clusters. Note Only internal OpenShift Data Foundation clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation in internal mode, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and follow the appropriate deployment process based on your requirement: Deploy OpenShift Data Foundation on Google Cloud Deploy standalone Multicloud Object Gateway component Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS): Ensure that a policy with a token exists and the key value backend path in Vault is enabled. See enabled the key value backend path and policy in Vault . 
Ensure that you are using signed certificates on your Vault servers. Minimum starting node requirements [Technology Preview] An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in Planning guide. Regional-DR requirements [Developer Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Regional-DR requirements and RHACM requirements . 1.1. Enabling key value backend path and policy in Vault Prerequisites Administrator access to Vault. Carefully, choose a unique path name as the backend path that follows the naming convention since it cannot be changed later. Procedure Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict users to perform a write or delete operation on the secret using the following commands. Create a token matching the above policy. Chapter 2. Deploying OpenShift Data Foundation on Google Cloud You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Google Cloud installer-provisioned infrastructure. This enables you to create internal cluster resources and it results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Note Only internal OpenShift Data Foundation clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. 
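For reference, a minimal command-line sketch of the two preparations mentioned above; the node name is a placeholder and the taint key should be confirmed against the linked guide before it is applied:
oc create namespace openshift-storage
oc annotate namespace openshift-storage openshift.io/node-selector=
oc adm taint node <node_name> node.ocs.openshift.io/storage=true:NoSchedule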
For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Note We recommend using all default settings. Changing them may result in unexpected behavior. Alter them only if you are aware of the result. Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . 2.2. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . Be aware that the default storage class of the Google Cloud platform uses hard disk drive (HDD). To use solid state drive (SSD) based disks for better performance, you need to create a storage class, using pd-ssd as shown in the following ssd-storageclass.yaml example: Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to standard . However, if you created a storage class to use SSD based disks for better performance, you need to select that storage class. Expand Advanced and select Full Deployment for the Deployment type option. Click Next . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default.
Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Choose either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.3. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.3.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information about the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, "Pods corresponding to OpenShift Data Foundation cluster" . Click the Running and Completed tabs to verify that the following pods are in Running and Completed state: Table 2.1. 
Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.3.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.3.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . 2.3.4. Verifying that the OpenShift Data Foundation specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. 
Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Note We recommend using all default settings. Changing it may result in unexpected behavior. Alter only if you are aware of its result. Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . 3.2. Creating standalone Multicloud Object Gateway Use this section to create only the Multicloud Object Gateway component with OpenShift Data Foundation. Prerequisites Ensure that OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. Ensure that you have a storage class and is set as the default. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, expand Advanced . Select Multicloud Object Gateway for Deployment type . Click . 
Optional: In the Security page, select Connect to an external key management service . Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https:// <hostname or ip> '), Port number , and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verify the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) Chapter 4. Uninstalling OpenShift Data Foundation 4.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation . Chapter 5. Storage classes and storage pools The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create a custom storage class if you want the storage class to have a different behaviour. You can create multiple storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. Note Multiple storage classes and multiple pools are not supported for external mode OpenShift Data Foundation clusters. Note With a minimal cluster of a single device set, only two new storage classes can be created. Every storage cluster expansion allows two new additional storage classes. 5.1. Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. 
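Before adding custom classes, you can also list what already exists from the command line; this is an illustrative check only, and the operator-created classes are the ones named in the verification section above:
oc get storageclass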
Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and that the OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForFirstConsumer as the default option. If you choose the Immediate option, the PV is created at the same time as the PVC. Select RBD Provisioner , which is the plugin used for provisioning the persistent volumes. Select an existing Storage Pool from the list or create a new pool. Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when the data to be written is already compressed or encrypted. Data written before enabling compression is not compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select the Enable Encryption checkbox. Click Create to create the storage class. 5.2. Creating a storage class for persistent volume encryption Persistent volume (PV) encryption guarantees isolation and confidentiality between tenants (applications). Before you can use PV encryption, you must create a storage class for PV encryption. OpenShift Data Foundation supports storing encryption passphrases in HashiCorp Vault. Use the following procedure to create an encryption enabled storage class using an external key management system (KMS) for persistent volume encryption. Persistent volume encryption is only available for RBD PVs. You can configure access to the KMS in two different ways: Using vaulttokens : allows users to authenticate using a token Using vaulttenantsa (technology preview): allows users to use serviceaccounts to authenticate with Vault Important Accessing the KMS using vaulttenantsa is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . See the relevant prerequisites section for your use case before following the procedure for creating the storage class: Section 5.2.1, "Prerequisites for using vaulttokens " Section 5.2.2, "Prerequisites for using vaulttenantsa " 5.2.1. Prerequisites for using vaulttokens The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy with a token exists and the key value backend path in Vault is enabled. For more information, see Enabling key value and policy in Vault . Ensure that you are using signed certificates on your Vault servers. Create a secret in the tenant's namespace as follows: On the OpenShift Container Platform web console, navigate to Workloads -> Secrets . Click Create -> Key/value secret . Enter Secret Name as ceph-csi-kms-token . Enter Key as token .
Enter Value . It is the token from Vault. You can either click Browse to select and upload the file containing the token or enter the token directly in the text box. Click Create . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. Next, follow the steps in Section 5.2.3, "Procedure for creating a storage class for PV encryption" . 5.2.2. Prerequisites for using vaulttenantsa The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy exists and the key value backend path in Vault is enabled. For more information, see Enabling key value and policy in Vault . Ensure that you are using signed certificates on your Vault servers. Create the following serviceaccount in the tenant namespace as shown below: The Kubernetes authentication method must be configured before OpenShift Data Foundation can authenticate with and start using Vault. The instructions below create and configure the serviceAccount , ClusterRole , and ClusterRoleBinding required to allow OpenShift Data Foundation to authenticate with Vault. Apply the following YAML to your OpenShift cluster: Identify the secret name associated with the serviceaccount (SA) created above: Get the token and the CA certificate from the secret: Retrieve the OCP cluster endpoint: Use the information collected in the steps above to set up the Kubernetes authentication method in Vault as shown below: Create a role in Vault for the tenant namespace: csi-kubernetes is the default role name that OpenShift Data Foundation looks for in Vault. The default service account name in the tenant namespace in the OpenShift Data Foundation cluster is ceph-csi-vault-sa . These default values can be overridden by creating a ConfigMap in the tenant namespace. For more information about overriding the default names, see Overriding Vault connection details using tenant ConfigMap . To create a storage class that uses the vaulttenantsa method for PV encryption, you must either edit the existing ConfigMap or create a ConfigMap named csi-kms-connection-details that holds all the information needed to establish the connection with Vault. The sample YAML given below can be used to update or create the csi-kms-connection-details ConfigMap: encryptionKMSType : should be set to vaulttenantsa to use service accounts for authentication with Vault. vaultAddress : The hostname or IP address of the Vault server with the port number. vaultTLSServerName : (Optional) The Vault TLS server name. vaultAuthPath : (Optional) The path where the kubernetes auth method is enabled in Vault. The default path is kubernetes . If the auth method is enabled in a path other than kubernetes , this variable needs to be set as "/v1/auth/<path>/login" . vaultAuthNamespace : (Optional) The Vault namespace where the kubernetes auth method is enabled. vaultNamespace : (Optional) The Vault namespace where the backend path being used to store the keys exists. vaultBackendPath : The backend path in Vault where the encryption keys will be stored. vaultCAFromSecret : The secret in the OpenShift Data Foundation cluster containing the CA certificate from Vault. vaultClientCertFromSecret : The secret in the OpenShift Data Foundation cluster containing the client certificate from Vault. vaultClientCertKeyFromSecret : The secret in the OpenShift Data Foundation cluster containing the client private key from Vault. tenantSAName : (Optional) The service account name in the tenant namespace.
The default value is ceph-csi-vault-sa . If a different name is to be used, this variable has to be set accordingly. Next, follow the steps in Section 5.2.3, "Procedure for creating a storage class for PV encryption" . 5.2.3. Procedure for creating a storage class for PV encryption After performing the required prerequisites for either vaulttokens or vaulttenantsa , perform the steps below to create a storage class with encryption enabled. Navigate to Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForFirstConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com , which is the plugin used for provisioning the persistent volumes. Select the Storage Pool where the volume data is stored from the list, or create a new pool. Select the Enable encryption checkbox. There are two options available to set the KMS connection details: Choose existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the connection details available in the csi-kms-connection-details ConfigMap. Create new KMS connection : This is applicable for vaulttokens only. Key Management Service Provider is set to Vault by default. Enter a unique Vault Service Name , host Address of the Vault server ( https://<hostname or ip> ), and Port number. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration. Enter the key value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional : Enter TLS Server Name and Vault Enterprise Namespace . Provide the CA Certificate , Client Certificate and Client Private Key by uploading the respective PEM encoded certificate file. Click Save . Click Save . Click Create . Edit the ConfigMap to add the VAULT_BACKEND or vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note VAULT_BACKEND or vaultBackend are optional parameters that are added to the ConfigMap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage -> Storage Classes . Click the Storage class name -> YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads -> ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) -> Edit ConfigMap . Add the VAULT_BACKEND or vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID. You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. Example: Click Save . Next steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with the technology partners to provide this documentation as a service to the customers.
However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 5.2.3.1. Overriding Vault connection details using tenant ConfigMap The Vault connection details can be reconfigured per tenant by creating a ConfigMap in the OpenShift namespace with configuration options that differ from the values set in the csi-kms-connection-details ConfigMap in the openshift-storage namespace. The ConfigMap needs to be located in the tenant namespace. The values in the ConfigMap in the tenant namespace override the values set in the csi-kms-connection-details ConfigMap for the encrypted Persistent Volumes created in that namespace. Procedure Ensure that you are in the tenant namespace. Click Workloads -> ConfigMaps . Click Create ConfigMap . The following is a sample YAML. The values to be overridden for the given tenant namespace can be specified under the data section as shown below: Once the YAML is edited, click Create . Chapter 6. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as image registry, monitoring, and logging. The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for these services. If the storage for these critical services runs out of space, the cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data subsection of Configuring persistent storage in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 6.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built-in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the Container Image Registry. On Google Cloud, it is not required to change the storage for the registry. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use.
In the OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration -> Custom Resource Definitions . Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) -> Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project to openshift-image-registry . Verify that the new image-registry-* pod appears with a status of Running , and that the image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 6.2. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises of Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads -> Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. 
Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 6.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads -> Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 6.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 6.3. Persistent Volume Claims attached to prometheus-k8s-* pod 6.3. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (ElasticSearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch). Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 6.3.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard will be backed by a single replica. A copy of the shard is replicated across all the nodes and are always available and the copy can be recovered if at least two nodes exist due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 6.3.2. Configuring cluster logging to use OpenShift data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. 
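Before starting the procedure, it can help to see the shape of the resource being edited. The following is a minimal sketch of a ClusterLogging instance whose Elasticsearch log store is backed by the ocs-storagecluster-ceph-rbd storage class with a 200G request per data node, as described in the previous subsection; the node count, size, and redundancy policy shown here are illustrative values rather than mandated settings.

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3                                  # illustrative node count
      storage:
        storageClassName: ocs-storagecluster-ceph-rbd
        size: 200G
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}
```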
Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration -> Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 6.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workload -> Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following default index data retention of 5 days as a default. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide. Chapter 7. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads -> Deployments . 
In the Deployments page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads -> Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads -> Deployments . Click Workloads -> Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page. Chapter 8. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra or have both roles. See the Section 8.3, "Manual creation of infrastructure nodes" section for more information. 8.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. 
The taint prevents non OpenShift Data Foundation resources to be scheduled on the tainted nodes. Note Adding storage taint on nodes might require toleration handling for the other daemonset pods such as openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: https://access.redhat.com/solutions/6592171 . Example of the taint and labels required on infrastructure node that will be used to run OpenShift Data Foundation services: 8.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 8.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role node-role.kubernetes.io/worker="" The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding node-role node-role.kubernetes.io/infra="" and OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. Chapter 9. Scaling storage nodes To scale the storage capacity of OpenShift Data Foundation, you can do either of the following: Scale up storage nodes - Add storage capacity to the existing OpenShift Data Foundation worker nodes Scale out storage nodes - Add new worker nodes containing storage capacity 9.1. Requirements for scaling storage nodes Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance: Platform requirements Storage device requirements Dynamic storage devices Capacity planning Warning Always ensure that you have plenty of storage capacity. 
If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. If you do run out of storage space completely, contact Red Hat Customer Support. 9.2. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on Google Cloud infrastructure You can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites A running OpenShift Data Foundation Platform. Administrative privileges on the OpenShift Web Console. To scale using a storage class other than the one provisioned during deployment, first define an additional storage class. See Creating a storage class for details. Procedure Log in to the OpenShift Web Console. Click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Set the storage class to standard if you are using the default storage class that uses HDD. However, if you created a storage class to use SSD based disks for better performance, you need to select that storage class. + The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3. Click Add . To check the status, navigate to Storage -> OpenShift Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. 
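The verification commands referenced above take roughly the following form; this is a sketch, and the pod and node names are placeholders that you substitute from your own cluster.

```console
# Identify the node where a new OSD pod is running
$ oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>

# Open a debug shell on that node and enter the host's root filesystem
$ oc debug node/<node-name>
sh-4.4# chroot /host

# List the block devices and check for the crypt keyword beside the ocs-deviceset names
sh-4.4# lsblk
```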
Important Cluster reduction is supported only with the Red Hat Support Team's assistance.. 9.3. Scaling out storage capacity by adding new nodes To scale out storage capacity, you need to perform the following: Add a new node to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs, which is the increment of 3 OSDs of the capacity selected during initial configuration. Verify that the new node is added successfully Scale up the storage capacity after the node is added 9.3.1. Adding a node on Google Cloud installer-provisioned infrastructure Prerequisites You must be logged into OpenShift Container Platform cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute -> Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps To verify that the new node is added, see Verifying the addition of a new node . 9.3.2. Verifying the addition of a new node Execute the following command and verify that the new node is present in the output: Click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 9.3.3. Scaling up storage capacity After you add a new node to OpenShift Data Foundation, you must scale up the storage capacity as described in Scaling up storage by adding capacity . Chapter 10. Multicloud Object Gateway 10.1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. 10.2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at Download RedHat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. You can access the relevant endpoint, access key, and secret access key in two ways: Section 10.2.1, "Accessing the Multicloud Object Gateway from the terminal" Section 10.2.2, "Accessing the Multicloud Object Gateway from the MCG command-line interface" Example 10.1. 
Example Accessing the MCG bucket(s) using the virtual-hosted style If the client application tries to access https:// <bucket-name> .s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com <bucket-name> is the name of the MCG bucket For example, https://mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com A DNS entry is needed for mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com to point to the S3 Service. Important Ensure that you have a DNS entry in order to point the client application to the MCG bucket(s) using the virtual-hosted style. 10.2.1. Accessing the Multicloud Object Gateway from the terminal Procedure Run the describe command to view information about the Multicloud Object Gateway (MCG) endpoint, including its access key ( AWS_ACCESS_KEY_ID value) and secret access key ( AWS_SECRET_ACCESS_KEY value). The output will look similar to the following: 1 access key ( AWS_ACCESS_KEY_ID value) 2 secret access key ( AWS_SECRET_ACCESS_KEY value) 3 MCG endpoint 10.2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface Prerequisites Download the MCG command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Procedure Run the status command to access the endpoint, access key, and secret access key: The output will look similar to the following: 1 endpoint 2 access key 3 secret access key You now have the relevant endpoint, access key, and secret access key in order to connect to your applications. Example 10.2. Example If AWS S3 CLI is the application, the following command will list the buckets in OpenShift Data Foundation: 10.3. Allowing user access to the Multicloud Object Gateway Console To allow access to the Multicloud Object Gateway (MCG) Console to a user, ensure that the user meets the following conditions: User is in cluster-admins group. User is in system:cluster-admins virtual group. Prerequisites A running OpenShift Data Foundation Platform. Procedure Enable access to the MCG console. Perform the following steps once on the cluster : Create a cluster-admins group. Bind the group to the cluster-admin role. Add or remove users from the cluster-admins group to control access to the MCG console. To add a set of users to the cluster-admins group : where <user-name> is the name of the user to be added. Note If you are adding a set of users to the cluster-admins group, you do not need to bind the newly added users to the cluster-admin role to allow access to the OpenShift Data Foundation dashboard. To remove a set of users from the cluster-admins group : where <user-name> is the name of the user to be removed. Verification steps On the OpenShift Web Console, login as a user with access permission to Multicloud Object Gateway Console. Navigate to Storage -> OpenShift Data Foundation . In the Storage Systems tab, select the storage system and then click Overview -> Object tab. Select the Multicloud Object Gateway link. Click Allow selected permissions . 10.4. Adding storage resources for hybrid or Multicloud 10.4.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . Click the Backing Store tab. Click Create Backing Store . 
On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Enter an Endpoint . This is optional. Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the Openshift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 10.4.2, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . Click the Backing Store tab to view all the backing stores. 10.4.2. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud provider and clusters. You must add a backing storage that can be used by the MCG. Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 10.4.2.1, "Creating an AWS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 10.4.2.2, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 10.4.2.3, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 10.4.2.4, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 10.4.2.5, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 10.4.3, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 10.4.2.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. 
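A sketch of the command form described above, with the placeholders left as given; the -n openshift-storage flag assumes the default MCG installation namespace.

```console
$ noobaa backingstore create aws-s3 <backingstore_name> \
      --access-key=<AWS ACCESS KEY> \
      --secret-key=<AWS SECRET ACCESS KEY> \
      --target-bucket <bucket-name> \
      -n openshift-storage
```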
The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 10.4.2.2. Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <bucket-name> with an existing IBM COS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <endpoint> with a regional endpoint that corresponds to the location of the existing IBM bucket name. This argument tells Multicloud Object Gateway which endpoint to use for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 10.4.2.3. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. 
For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> with an AZURE account key and account name you created for this purpose. Replace <blob container name> with an existing Azure blob container name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <blob-container-name> with an existing Azure blob container name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 10.4.2.4. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <PATH TO GCP PRIVATE KEY JSON FILE> with a path to your GCP private key created for this purpose. Replace <GCP bucket name> with an existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own GCP service account private key using Base64, and use the results in place of <GCP PRIVATE KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <target bucket> with an existing Google storage bucket. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 10.4.2.5. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. 
For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <NUMBER OF VOLUMES> with the number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. Replace <VOLUME SIZE> with the required size, in GB, of each volume. Replace <LOCAL STORAGE CLASS> with the local storage class, recommended to use ocs-storagecluster-ceph-rbd . The output will be similar to the following: You can also add storage resources using a YAML: Apply the following YAML for a specific backing store: Replace <backingstore_name> with the name of the backingstore. Replace <NUMBER OF VOLUMES> with the number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. Replace <VOLUME SIZE> with the required size, in GB, of each volume. Note that the letter G should remain. Replace <LOCAL STORAGE CLASS> with the local storage class, recommended to use ocs-storagecluster-ceph-rbd . 10.4.3. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them. Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 10.4.4. Adding storage resources for hybrid and Multicloud using the user interface Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Storage Systems tab, select the storage system and then click Overview -> Object tab. Select the Multicloud Object Gateway link. 
Select the Resources tab in the left, highlighted below. From the list that populates, select Add Cloud Resource . Select Add new connection . Select the relevant native cloud provider or S3 compatible option and fill in the details. Select the newly created connection and map it to the existing bucket. Repeat these steps to create as many backing stores as needed. Note Resources created in NooBaa UI cannot be used by OpenShift UI or MCG CLI. 10.4.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Class (OBC). Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . Click the Bucket Class tab and search the new Bucket Class. 10.4.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the Openshift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file, make the required changes in this file and click Save . 10.4.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. 
To remove a backing store from the bucket class, clear the name of the backing store. Click Save . 10.5. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. You can interact with objects in a namespace bucket using the S3 API. See S3 API endpoints for objects in namespace buckets for more information. Note A namespace bucket can only be used if its write target is available and functional. 10.5.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Red Hat OpenShift Data Foundation 4.6 onwards supports the following namespace bucket operations: ListObjectVersions ListObjects PutObject CopyObject ListParts CreateMultipartUpload CompleteMultipartUpload UploadPart UploadPartCopy AbortMultipartUpload GetObjectAcl GetObject HeadObject DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 10.5.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 10.5.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites A running OpenShift Data Foundation Platform Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Create a NamespaceStore resource using OpenShift Custom Resource Definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: Replace <resource-name> with the name you want to give to the resource. Replace <namespacestore-secret-name> with the secret created in step 1. Replace <namespace-secret> with the namespace where the secret can be found. Replace <target-bucket> with the target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . 
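To make steps 1 and 2 of the procedure above more concrete, the secret and the NamespaceStore typically look like the following sketch. The placeholders match the ones used in this section; the field names follow the NooBaa CRDs and should be verified with oc explain namespacestore.spec in your cluster.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <namespacestore-secret-name>
  namespace: openshift-storage
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
  AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>
---
apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: <resource-name>
  namespace: openshift-storage
spec:
  type: aws-s3
  awsS3:
    targetBucket: <target-bucket>
    secret:
      name: <namespacestore-secret-name>
      namespace: <namespace-secret>
```

Applying both documents with oc apply -f and checking oc get namespacestore -n openshift-storage should show the store once it is ready.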
A namespace policy of type single requires the following configuration: Replace <my-bucket-class> with a unique namespace bucket class name. Replace <resource> with the name of a single namespace-store that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with the name of a single namespace-store that defines the write target of the namespace bucket. Replace <read-resources> with a list of the names of the namespace-stores that define the read targets of the namespace bucket. Apply the following YAML to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <resource-name> with the name you want to give to the resource. Replace <my-bucket> with the name you want to give to the bucket. Replace <my-bucket-class> with the bucket class created in the previous step. Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and on the same namespace of the OBC. 10.5.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Create a NamespaceStore resource using OpenShift Custom Resource Definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <namespacestore-secret-name> with the secret created in step 1. Replace <namespace-secret> with the namespace where the secret can be found. Replace <target-bucket> with the target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: Replace <my-bucket-class> with a unique namespace bucket class name. Replace <resource> with the name of a single namespace-store that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with the name of a single namespace-store that defines the write target of the namespace bucket. Replace <read-resources> with a list of the names of namespace-stores that define the read targets of the namespace bucket. Apply the following YAML to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <resource-name> with the name you want to give to the resource. Replace <my-bucket> with the name you want to give to the bucket. Replace <my-bucket-class> with the bucket class created in the previous step.
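A sketch of the namespace BucketClass and the Object Bucket Claim that these steps describe is shown below. The Single variant is shown in full and the Multi variant as comments; treat the field names as assumptions to be verified against the NooBaa BucketClass CRD in your cluster.

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <my-bucket-class>
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Single
    single:
      resource: <resource>
    # Multi variant:
    # type: Multi
    # multi:
    #   writeResource: <write-resource>
    #   readResources:
    #     - <read-resources>
---
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <resource-name>
  namespace: openshift-storage
spec:
  generateBucketName: <my-bucket>
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    bucketclass: <my-bucket-class>
```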
Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and on the same namespace of the OBC. 10.5.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . Run the following command to create a namespace bucket class with a namespace policy of type single : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <resource> with a single namespace-store that defines the read and write target of the namespace bucket. Run the following command to create a namespace bucket class with a namespace policy of type multi : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with a single namespace-store that defines the write target of the namespace bucket. Replace <read-resources> with a list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Run the following command to create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in step 2. Replace <bucket-name> with a bucket name of your choice. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and on the same namespace of the OBC. 10.5.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. 
For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . Run the following command to create a namespace bucket class with a namespace policy of type single : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <resource> with a single namespace-store that defines the read and write target of the namespace bucket. Run the following command to create a namespace bucket class with a namespace policy of type multi : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with a single namespace-store that defines the write target of the namespace bucket. Replace <read-resources> with a list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Run the following command to create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in step 2. Replace <bucket-name> with a bucket name of your choice. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and on the same namespace of the OBC. 10.5.3. Adding a namespace bucket using the OpenShift Container Platform user interface With the release of OpenShift Data Foundation 4.8, namespace buckets can be added using the OpenShift Container Platform user interface. For more information about namespace buckets, see Managing namespace buckets . Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage -> OpenShift Data Foundation. Click the Namespace Store tab to create a namespacestore resources to be used in the namespace bucket. Click Create namespace store . Enter a namespacestore name. Choose a provider. Choose a region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Choose a target bucket. Click Create . Verify the namespacestore is in the Ready state. Repeat these steps until you have the desired amount of resources. 
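For reference, the MCG CLI procedures in sections 10.5.2.3 and 10.5.2.4 above generally reduce to commands of the following shape. This is a sketch: confirm the exact subcommands and flags with noobaa namespacestore create --help and noobaa bucketclass create --help for your CLI version.

```bash
# AWS S3 namespace store (section 10.5.2.3)
noobaa namespacestore create aws-s3 <namespacestore> \
  --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> \
  --target-bucket <bucket-name> -n openshift-storage

# IBM COS namespace store (section 10.5.2.4)
noobaa namespacestore create ibm-cos <namespacestore> \
  --endpoint <IBM COS ENDPOINT> \
  --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> \
  --target-bucket <bucket-name> -n openshift-storage

# Namespace bucket class with a single or multi policy
noobaa bucketclass create namespace-bucketclass single <my-bucket-class> \
  --resource <resource> -n openshift-storage
noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> \
  --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage

# Bucket claim that consumes the bucket class
noobaa obc create <bucket-name> --bucketclass <custom-bucket-class> -n openshift-storage
```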
Click the Bucket Class tab -> Create a new Bucket Class . Select the Namespace radio button. Enter a Bucket Class name. Add a description (optional). Click . Choose a namespace policy type for your namespace bucket, and then click . Select the target resource(s). If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Click . Review your new bucket class, and then click Create Bucketclass . On the BucketClass page, verify that your newly created resource is in the Created phase. In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click Multicloud Object Gateway -> Buckets -> Namespace Buckets tab . Click Create Namespace Bucket . On the Choose Name tab, specify a Name for the namespace bucket and click . On the Set Placement tab: Under Read Policy , select the checkbox for each namespace resource created in step 5 that the namespace bucket should read data from. If the namespace policy type you are using is Multi , then Under Write Policy , specify which namespace resource the namespace bucket should write data to. Click . Click Create . Verification Verify that the namespace bucket is listed with a green check mark in the State column, the expected number of read resources, and the expected write resource name. 10.6. Mirroring data for hybrid and Multicloud buckets The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud provider and clusters. Prerequisites You must first add a backing storage that can be used by the MCG, see Section 10.4, "Adding storage resources for hybrid or Multicloud" . Then you create a bucket class that reflects the data management policy, mirroring. Procedure You can set up mirroring data in three ways: Section 10.6.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 10.6.2, "Creating bucket classes to mirror data using a YAML" Section 10.6.3, "Configuring buckets to mirror data using the user interface" 10.6.1. Creating bucket classes to mirror data using the MCG command-line-interface From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim, generating a new bucket that will be mirrored between two locations: 10.6.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Section 10.8, "Object Bucket Claim" . 10.6.3. Configuring buckets to mirror data using the user interface In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click the Multicloud Object Gateway link. On the NooBaa page, click the buckets icon on the left side. You can see a list of your buckets: Click the bucket you want to update. 
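As a reference for the mirroring policy described in sections 10.6.1 and 10.6.2 above, the CLI command and the bucket class typically look like the following sketch. The bucket class and backing store names are illustrative; confirm the flags with noobaa bucketclass create placement-bucketclass --help before use.

```bash
noobaa bucketclass create placement-bucketclass <mirror-bucket-class> \
  --backingstores=<backing-store-1>,<backing-store-2> --placement Mirror -n openshift-storage
noobaa obc create <mirrored-bucket> --bucketclass=<mirror-bucket-class> -n openshift-storage
```

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <mirror-bucket-class>
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
      - backingStores:
          - <backing-store-1>
          - <backing-store-2>
        placement: Mirror
```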
Click Edit Tier 1 Resources : Select Mirror and check the relevant resources you want to use for this bucket. In the following example, the data between noobaa-default-backing-store which is on RGW and AWS-backingstore which is on AWS is mirrored: Click Save . Note Resources created in NooBaa UI cannot be used by OpenShift UI or Multicloud Object Gateway (MCG) CLI. 10.7. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 10.7.1. About bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 10.7.2. Using bucket policies Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Section 10.2, "Accessing the Multicloud Object Gateway with your applications" Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. See the following example: There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . Instructions for creating S3 users can be found in Section 10.7.3, "Creating an AWS S3 user in the Multicloud Object Gateway" . Using AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, Only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically create an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. 10.7.3. Creating an AWS S3 user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Section 10.2, "Accessing the Multicloud Object Gateway with your applications" Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click the Multicloud Object Gateway link. Under the Accounts tab, click Create Account . Select S3 Access Only , provide the Account Name , for example, [email protected] . Click . Select S3 default placement , for example, noobaa-default-backing-store . Select Buckets Permissions . A specific bucket or all buckets can be selected. Click Create . 10.8. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. 
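Before going further into Object Bucket Claims, here is a reference sketch for the bucket policy workflow in section 10.7.2 above. The statement grants read access on MyBucket to a single NooBaa account; the account name and actions are illustrative, and the full set of supported policy elements is described in the AWS policy language documentation linked in that section.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadToOneAccount",
      "Effect": "Allow",
      "Principal": { "AWS": ["<noobaa-account-name>"] },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::MyBucket", "arn:aws:s3:::MyBucket/*"]
    }
  ]
}
```

```bash
aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy \
  --bucket MyBucket --policy file://BucketPolicy.json
```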
You can create an Object Bucket Claim in three ways: Section 10.8.1, "Dynamic Object Bucket Claim" Section 10.8.2, "Creating an Object Bucket Claim using the command line interface" Section 10.8.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. 10.8.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket Claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. You can add more lines to the YAML file to automate the use of the OBC. The example below is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials. Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . These names are used for compatibility with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. Here, <obc_name> specifies the name of the object bucket claim. 10.8.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc .
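A minimal sketch of that command, using the example name from above, is shown here; further options can be added as described next, and the available flags can be confirmed with noobaa obc create --help.

```bash
noobaa obc create myappobc -n openshift-storage
```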
Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . Example output: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: Example output: Run the following command to view the YAML file for the new OBC: Example output: Inside of your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The CM and the secret have the same name as the OBC. Run the following command to view the secret: Example output: The secret gives you the S3 access credentials. Run the following command to view the configuration map: Example output: The configuration map contains the S3 endpoint information for your application. 10.8.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 10.8.1, "Dynamic Object Bucket Claim" . Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Bucket Claims -> Create Object Bucket Claim . Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page: Additional Resources Section 10.8, "Object Bucket Claim" 10.8.4. Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Bucket Claims . Click the Action menu (...) to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . Additional Resources Section 10.8, "Object Bucket Claim" 10.8.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Buckets . Alternatively, you can also navigate to the details page of a specific OBC and click the Resource link to view the object buckets for that OBC. Select the object bucket you want to see details for. You are navigated to the Object Bucket Details page. Additional Resources Section 10.8, "Object Bucket Claim" 10.8.6. 
Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Bucket Claims . Click the Action menu (...) to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete . Additional Resources Section 10.8, "Object Bucket Claim" 10.9. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. Important Cache buckets are a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . AWS S3 IBM COS 10.9.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. In case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First create a secret with credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. 
Replace <custom-bucket-class> with the name of the bucket class created in step 2. 10.9.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <backingstore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 10.10. Scaling Multicloud Object Gateway performance by adding endpoints The Multicloud Object Gateway performance may vary from one environment to another. In some cases, specific applications require faster performance which can be easily addressed by scaling S3 endpoints. The Multicloud Object Gateway resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service 10.10.1. Scaling the Multicloud Object Gateway with storage nodes Prerequisites A running OpenShift Data Foundation cluster on OpenShift Container Platform with access to the Multicloud Object Gateway (MCG). A storage node in the MCG is a NooBaa daemon container attached to one or more Persistent Volumes (PVs) and used for local object service data storage. 
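Looking back at the cache bucket procedures in sections 10.9.1 and 10.9.2 before moving on: the bucket class step is the piece that differs from an ordinary namespace bucket, because it names both a hub namespace store and a local backing store for the cache. The sketch below assumes the namespace-bucketclass cache subcommand and a TTL flag expressed in milliseconds; both are assumptions and should be verified with noobaa bucketclass create --help before use.

```bash
# --ttl is assumed to be the cache TTL in milliseconds; verify the flag name first
noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> \
  --backingstores <backing-store> --hub-resource <namespacestore> --ttl 3600000 \
  -n openshift-storage
noobaa obc create <my-bucket-claim> --bucketclass <my-cache-bucket-class> -n openshift-storage
```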
NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods. Procedure Log in to OpenShift Web Console . From the MCG user interface, click Overview -> Add Storage Resources . In the window, click Deploy Kubernetes Pool . In the Create Pool step create the target pool for the future installed nodes. In the Configure step, configure the number of requested pods and the size of each PV. For each new pod, one PV is to be created. In the Review step, you can find the details of the new pool and select the deployment method you wish to use: local or external deployment. If local deployment is selected, the Kubernetes nodes will deploy within the cluster. If external deployment is selected, you will be provided with a YAML file to run externally. All nodes will be assigned to the pool you chose in the first step, and can be found under Resources -> Storage resources -> Resource name . 10.11. Automatic scaling of MultiCloud Object Gateway endpoints The number of MultiCloud Object Gateway (MCG) endpoints scale automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. Chapter 11. Managing persistent volume claims Important Expanding PVCs is not supported for PVCs backed by OpenShift Data Foundation. 11.1. Configuring application pods to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for an application pod. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. The default storage classes provided by OpenShift Data Foundation are available. In OpenShift Web Console, click Storage -> StorageClasses to view default storage classes. Procedure Create a Persistent Volume Claim (PVC) for the application to use. In OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project for the application pod. Click Create Persistent Volume Claim . Specify a Storage Class provided by OpenShift Data Foundation. Specify the PVC Name , for example, myclaim . Select the required Access Mode . Note The Access Mode , Shared access (RWX) is not supported in IBM FlashSystem. For Rados Block Device (RBD), if the Access mode is ReadWriteOnce ( RWO ), select the required Volume mode . The default volume mode is Filesystem . Specify a Size as per application requirement. Click Create and wait until the PVC is in Bound status. Configure a new or existing application pod to use the new PVC. For a new application pod, perform the following steps: Click Workloads -> Pods . Create a new application pod. Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod. 
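A sketch of what that volumes: addition can look like for a new pod is shown below. The pod name and mount path are illustrative, while myclaim matches the PVC name used earlier in this procedure.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
    - name: myapp
      image: <your application image>
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/myapp    # illustrative mount path
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: myclaim             # the PVC created above
```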
For example: For an existing application pod, perform the following steps: Click Workloads -> Deployment Configs . Search for the required deployment config associated with the application pod. Click on its Action menu (...) -> Edit Deployment Config . Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod and click Save . For example: Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project for the application pod. Verify that the application pod appears with a status of Running . Click the application pod name to view pod details. Scroll down to Volumes section and verify that the volume has a Type that matches your new Persistent Volume Claim, for example, myclaim . 11.2. Viewing Persistent Volume Claim request status Use this procedure to view the status of a PVC request. Prerequisites Administrator access to OpenShift Data Foundation. Procedure Log in to OpenShift Web Console. Click Storage -> Persistent Volume Claims Search for the required PVC name by using the Filter textbox. You can also filter the list of PVCs by Name or Label to narrow down the list Check the Status column corresponding to the required PVC. Click the required Name to view the PVC details. 11.3. Reviewing Persistent Volume Claim request events Use this procedure to review and address Persistent Volume Claim (PVC) request events. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> OpenShift Data Foundation . In the Storage systems tab, select the storage system and then click Overview -> Block and File . Locate the Inventory card to see the number of PVCs with errors. Click Storage -> Persistent Volume Claims Search for the required PVC using the Filter textbox. Click on the PVC name and navigate to Events Address the events as required or as directed. 11.4. Dynamic provisioning 11.4.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plug-in APIs. 11.4.2. Dynamic provisioning in OpenShift Data Foundation Red Hat OpenShift Data Foundation is software-defined storage that is optimised for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. 
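Section 11.4.1 describes StorageClass objects in general terms; a minimal example of requesting dynamically provisioned storage against one of the ODF-provided classes is sketched below. The class name is the RBD class referenced elsewhere in this guide, and the size and access mode are illustrative.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd
```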
OpenShift Data Foundation supports a variety of storage types, including: Block storage for databases Shared file storage for continuous integration, messaging, and data aggregation Object storage for archival, backup, and media storage Version 4 uses Red Hat Ceph Storage to provide the file, block, and object storage that backs persistent volumes, and Rook.io to manage and orchestrate provisioning of persistent volumes and claims. NooBaa provides object storage, and its Multicloud Gateway allows object federation across multiple cloud environments (available as a Technology Preview). In OpenShift Data Foundation 4, the Red Hat Ceph Storage Container Storage Interface (CSI) driver for RADOS Block Device (RBD) and Ceph File System (CephFS) handles the dynamic provisioning requests. When a PVC request comes in dynamically, the CSI driver has the following options: Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph RBDs with volume mode Block Create a PVC with ReadWriteOnce (RWO) access that is based on Ceph RBDs with volume mode Filesystem Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on CephFS for volume mode Filesystem The judgment of which driver (RBD or CephFS) to use is based on the entry in the storageclass.yaml file. 11.4.3. Available dynamic provisioning plug-ins OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plug-in name Notes OpenStack Cinder kubernetes.io/cinder AWS Elastic Block Store (EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. AWS Elastic File System (EFS) Dynamic provisioning is accomplished through the EFS provisioner pod and not through a provisioner plug-in. Azure Disk kubernetes.io/azure-disk Azure File kubernetes.io/azure-file The persistent-volume-binder ServiceAccount requires permissions to create and get Secrets to store the Azure storage account and keys. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. VMware vSphere kubernetes.io/vsphere-volume Red Hat Virtualization csi.ovirt.org Important Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. Chapter 12. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. You can create multiple snapshots of the same persistent volume claim (PVC). For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note You cannot schedule periodic creation of snapshots. 12.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. 
Ensure to stop all IO before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) -> Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions -> Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage -> Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 12.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For Rados Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled and vice versa is not supported. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. 
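The console flows in sections 12.1 and 12.2 can also be expressed declaratively. A minimal VolumeSnapshot sketch is shown below; the snapshot class name is an assumption about the default ODF RBD class, so list the classes available in your cluster with oc get volumesnapshotclass and substitute accordingly.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: myclaim-snapshot
  namespace: <project>
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass   # assumption: default RBD snapshot class
  source:
    persistentVolumeClaimName: myclaim
```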
From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For Rados Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled and vice versa is not supported. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage -> Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 12.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class which is used in that particular volume snapshot should be present. Procedure From the Persistent Volume Claims page Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot . From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage -> Volume Snapshots and ensure that the deleted volume snapshot is not listed. Chapter 13. Volume cloning A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 13.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) -> Clone PVC . Click on the PVC that you want to clone and click Actions -> Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations.
Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC. Chapter 14. Replacing storage nodes You can choose one of the following procedures to replace storage nodes: Section 14.1, "Replacing operational nodes on Google Cloud installer-provisioned infrastructure" Section 14.2, "Replacing failed nodes on Google Cloud installer-provisioned infrastructure" 14.1. Replacing operational nodes on Google Cloud installer-provisioned infrastructure Use this procedure to replace an operational node on Google Cloud installer-provisioned infrastructure (IPI). Procedure Log in to OpenShift Web Console and click Compute -> Nodes . Identify the node that needs to be replaced. Take a note of its Machine Name . Mark the node as unschedulable using the following command: Drain the node using the following command: Important This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional. Click Compute -> Machines . Search for the required machine. Besides the required machine, click the Action menu (...) -> Delete Machine . Click Delete to confirm the machine deletion. A new machine is automatically created. Wait for new machine to start and transition into Running state. Important This activity may take at least 5-10 minutes or more. Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) -> Edit Labels Add cluster.ocs.openshift.io/openshift-storage and click Save . From Command line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: Verification steps Execute the following command and verify that the new node is present in the output: Click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Verify that new OSD pods are running on the replacement node. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in step, do the following: Create a debug pod and open a chroot environment for the selected host(s). Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s) If verification steps fail, contact Red Hat Support . 14.2. Replacing failed nodes on Google Cloud installer-provisioned infrastructure Perform this procedure to replace a failed node which is not operational on Google Cloud installer-provisioned infrastructure (IPI) for OpenShift Data Foundation. Procedure Log in to OpenShift Web Console and click Compute -> Nodes . 
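For reference, the commands referred to in the node replacement procedures of sections 14.1 and 14.2 generally look like the sketch below. Node names are placeholders, and the label key is the one given in the procedures.

```bash
oc adm cordon <node_name>
oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

# For the encryption check on a new node: open a debug shell, then inspect the devices
oc debug node/<new_node_name>
# inside the debug shell:
chroot /host
lsblk
```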
Identify the faulty node and click on its Machine Name . Click Actions -> Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining and click Save . Click Actions -> Delete Machine , and click Delete . A new machine is automatically created, wait for new machine to start. Important This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional. Click Compute -> Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the web user interface For the new node, click Action Menu (...) -> Edit Labels Add cluster.ocs.openshift.io/openshift-storage and click Save . From the command line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: [Optional]: If the failed Google Cloud instance is not removed automatically, terminate the instance from Google Cloud console. Verification steps Execute the following command and verify that the new node is present in the output: Click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Verify that new OSD pods are running on the replacement node. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in step, do the following: Create a debug pod and open a chroot environment for the selected host(s). Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s) If verification steps fail, contact Red Hat Support . Chapter 15. Replacing storage devices 15.1. Replacing operational or failed storage devices on Google Cloud installer-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on an Google Cloud installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing operational nodes on Google Cloud installer-provisioned infrastructure Replacing failed nodes on Google Cloud installer-provisioned infrastructures . Chapter 16. Upgrading to OpenShift Data Foundation 16.1. Overview of the OpenShift Data Foundation update process OpenShift Container Storage, based on the open source Ceph technology, has expanded its scope and foundational role in a containerized, hybrid cloud environment since its introduction. It complements existing storage in addition to other data-related hardware and software, making them rapidly attachable, accessible, and scalable in a hybrid cloud environment. To better reflect these foundational and infrastructure distinctives, OpenShift Container Storage is now OpenShift Data Foundation . Important You can perform the upgrade process for OpenShift Data Foundation version 4.9 from OpenShift Container Storage version 4.8 only by installing the OpenShift Data Foundation operator from OpenShift Container Platform OperatorHub. In the future release, you can upgrade Red Hat OpenShift Data Foundation, either between minor releases like 4.9 and 4.x, or between batch updates like 4.9.0 and 4.9.1 by enabling automatic updates (if not done so during operator installation) or performing manual updates. 
You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks so that you can update Red Hat OpenShift Data Foundation as well as the Local Storage Operator when it is in use. Update the Red Hat OpenShift Container Storage operator version 4.8 to version 4.9 by installing the Red Hat OpenShift Data Foundation operator from the OperatorHub on the OpenShift Container Platform web console. See Updating Red Hat OpenShift Container Storage 4.8 to Red Hat OpenShift Data Foundation 4.9 . Update Red Hat OpenShift Data Foundation from 4.9.x to 4.9.y . See Updating Red Hat OpenShift Data Foundation 4.9.x to 4.9.y . For updating external mode deployments , you must also perform the steps from section Updating the OpenShift Data Foundation external secret . If you use local storage: Update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Perform post-update configuration changes for clusters backed by local storage. See Post-update configuration for clusters backed by local storage for details. Update considerations Review the following important considerations before you begin. Red Hat recommends using the same version of Red Hat OpenShift Container Platform with Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. The flexible scaling feature is available only in new deployments of Red Hat OpenShift Data Foundation versions 4.7 and later. Storage clusters upgraded from a previous version to version 4.7 or later do not support flexible scaling. For more information, see Flexible scaling of OpenShift Container Storage cluster in the New features section of 4.7 Release Notes . 16.2. Updating Red Hat OpenShift Container Storage 4.8 to Red Hat OpenShift Data Foundation 4.9 This chapter helps you to upgrade between minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not. For Internal and Internal-attached deployments, upgrading OpenShift Container Storage upgrades all OpenShift Container Storage services including the backend Ceph Storage cluster. For External mode deployments, upgrading OpenShift Container Storage only upgrades the OpenShift Container Storage service, while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. We recommend upgrading RHCS along with OpenShift Container Storage in order to get new feature support, security fixes, and other bug fixes. Because there is no strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first and then upgrade RHCS, or vice versa. See solution to learn more about Red Hat Ceph Storage releases. Important Upgrading to 4.9 directly from any version older than 4.8 is unsupported.
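The prerequisites that follow check cluster health in the web console; an equivalent CLI spot-check (a sketch only, assuming the default openshift-storage namespace and not a substitute for the console verification) might look like:
oc get pods -n openshift-storage    # all pods, including the operator pods, should be Running or Completed
oc get storagecluster -n openshift-storage    # the StorageCluster phase should report Ready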
Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.9.X. See Updating Clusters . Ensure that the OpenShift Container Storage cluster is healthy and data is resilient. Navigate to Storage -> Overview and check both the Block and File and Object tabs for the green tick on the status card. A green tick indicates that the storage cluster, object service, and data resiliency are all healthy. Ensure that all OpenShift Container Storage Pods, including the operator pods, are in the Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads -> Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to OperatorHub . Search for OpenShift Data Foundation using the Filter by keyword box and click on the OpenShift Data Foundation tile. Click Install . On the Install Operator page, click Install . Wait for the Operator installation to complete. Note We recommend using all default settings. Changing them might result in unexpected behavior, so alter them only if you understand the consequences. Verification steps Verify that the page displays the Succeeded message along with the option to Create StorageSystem . Note For upgraded clusters, the storage system is created automatically, so do not create it again. On the notification popup, click the Refresh web console link to reflect the OpenShift Data Foundation changes in the OpenShift console. Verify the state of the pods on the OpenShift Web Console. Click Workloads -> Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Wait for all the pods in the openshift-storage namespace to restart and reach the Running state. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> OpenShift Data Foundation -> Storage Systems tab and then click on the storage system name. Check both the Block and File and Object tabs for the green tick on the status card. A green tick indicates that the storage cluster, object service, and data resiliency are all healthy. Important In case the console plugin option was not enabled automatically after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide .
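A post-update spot-check can also be done from the CLI. The following sketch assumes the default openshift-storage namespace; CSV names vary by release, and in OpenShift Data Foundation 4.9 they typically include odf-operator and ocs-operator:
oc get csv -n openshift-storage    # the ClusterServiceVersions should show the new version and a Succeeded phase
oc get pods -n openshift-storage    # all pods should return to the Running state after they restart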
16.3. Updating Red Hat OpenShift Data Foundation 4.9.x to 4.9.y This chapter helps you to upgrade between z-stream releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not. For Internal and Internal-attached deployments, upgrading OpenShift Container Storage upgrades all OpenShift Container Storage services including the backend Ceph Storage cluster. For External mode deployments, upgrading OpenShift Container Storage only upgrades the OpenShift Container Storage service, while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Container Storage in order to get new feature support, security fixes, and other bug fixes. Because there is no strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first and then upgrade RHCS, or vice versa. See solution to learn more about Red Hat Ceph Storage releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . If the update strategy is set to Manual , use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.9.X. See Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> OpenShift Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are all healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in the Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads -> Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators -> Installed Operators . Select the openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows requires approval , click the requires approval link. On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Verification steps Verify that the Version below the OpenShift Data Foundation name and the operator status show the latest version. Navigate to Operators -> Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> OpenShift Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are all healthy. Important In case the console plugin option was not enabled automatically after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . If verification steps fail, contact Red Hat Support .
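When the update strategy is Manual , the pending install plan can also be inspected and approved from the CLI. This is only a sketch; the console flow above is the documented path, and the subscription name shown here ( odf-operator ) is an assumption that can differ in your cluster:
oc get installplan -n openshift-storage    # find the install plan that shows APPROVED false
oc patch installplan <installplan_name> -n openshift-storage --type merge -p '{"spec":{"approved":true}}'    # approve the pending install plan
oc patch subscription odf-operator -n openshift-storage --type merge -p '{"spec":{"installPlanApproval":"Automatic"}}'    # optionally switch back to automatic approval (see section 16.4)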
16.4. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic . Changing the update approval strategy to Manual requires manual approval for each upgrade. Procedure Navigate to Operators -> Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Go to the Subscription tab. Click the pencil icon to change the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it. | [
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: faster provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd volumeBindingMode: WaitForFirstConsumer reclaimPolicy: Delete",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: ceph-csi-vault-sa EOF",
"apiVersion: v1 kind: ServiceAccount metadata: name: rbd-csi-vault-token-review --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review rules: - apiGroups: [\"authentication.k8s.io\"] resources: [\"tokenreviews\"] verbs: [\"create\", \"get\", \"list\"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review subjects: - kind: ServiceAccount name: rbd-csi-vault-token-review namespace: openshift-storage roleRef: kind: ClusterRole name: rbd-csi-vault-token-review apiGroup: rbac.authorization.k8s.io",
"oc -n openshift-storage get sa rbd-csi-vault-token-review -o jsonpath=\"{.secrets[*]['name']}\"",
"oc get secret <secret associated with SA> -o jsonpath=\"{.data['token']}\" | base64 --decode; echo oc get secret <secret associated with SA> -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo",
"oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\"",
"vault auth enable kubernetes vault write auth/kubernetes/config token_reviewer_jwt=<SA token> kubernetes_host=<OCP cluster endpoint> kubernetes_ca_cert=<SA CA certificate>",
"vault write \"auth/kubernetes/role/csi-kubernetes\" bound_service_account_names=\"ceph-csi-vault-sa\" bound_service_account_namespaces=<tenant_namespace> policies=<policy_name_in_vault>",
"apiVersion: v1 data: vault-tenant-sa: |- { \"encryptionKMSType\": \"vaulttenantsa\", \"vaultAddress\": \"<https://hostname_or_ip_of_vault_server:port>\", \"vaultTLSServerName\": \"<vault TLS server name>\", \"vaultAuthPath\": \"/v1/auth/kubernetes/login\", \"vaultAuthNamespace\": \"<vault auth namespace name>\" \"vaultNamespace\": \"<vault namespace name>\", \"vaultBackendPath\": \"<vault backend path name>\", \"vaultCAFromSecret\": \"<secret containing CA cert>\", \"vaultClientCertFromSecret\": \"<secret containing client cert>\", \"vaultClientCertKeyFromSecret\": \"<secret containing client private key>\", \"tenantSAName\": \"<service account name in the tenant namespace>\" } metadata: name: csi-kms-connection-details",
"encryptionKMSID: 1-vault",
"kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"KMS_PROVIDER\": \"vaulttokens\", \"KMS_SERVICE_NAME\": \"1-vault\", [...] \"VAULT_BACKEND\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv-v2\" }",
"--- apiVersion: v1 kind: ConfigMap metadata: name: ceph-csi-kms-config data: vaultAddress: \"<vault_address:port>\" vaultBackendPath: \"<backend_path>\" vaultTLSServerName: \"<vault_tls_server_name>\" vaultNamespace: \"<vault_namespace>\"",
"storage: pvc: claim: <new-pvc-name>",
"storage: pvc: claim: ocs4registry",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, e.g. 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}",
"spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd",
"config.yaml: | openshift-storage: delete: days: 5",
"spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"",
"adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule",
"oc get -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"get -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"oc describe noobaa -n openshift-storage",
"Name: noobaa Namespace: openshift-storage Labels: <none> Annotations: <none> API Version: noobaa.io/v1alpha1 Kind: NooBaa Metadata: Creation Timestamp: 2019-07-29T16:22:06Z Generation: 1 Resource Version: 6718822 Self Link: /apis/noobaa.io/v1alpha1/namespaces/openshift-storage/noobaas/noobaa UID: 019cfb4a-b21d-11e9-9a02-06c8de012f9e Spec: Status: Accounts: Admin: Secret Ref: Name: noobaa-admin Namespace: openshift-storage Actual Image: noobaa/noobaa-core:4.0 Observed Generation: 1 Phase: Ready Readme: Welcome to NooBaa! ----------------- Welcome to NooBaa! ----------------- NooBaa Core Version: NooBaa Operator Version: Lets get started: 1. Connect to Management console: Read your mgmt console login information (email & password) from secret: \"noobaa-admin\". kubectl get secret noobaa-admin -n openshift-storage -o json | jq '.data|map_values(@base64d)' Open the management console service - take External IP/DNS or Node Port or use port forwarding: kubectl port-forward -n openshift-storage service/noobaa-mgmt 11443:443 & open https://localhost:11443 2. Test S3 client: kubectl port-forward -n openshift-storage service/s3 10443:443 & 1 NOOBAA_ACCESS_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') 2 NOOBAA_SECRET_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') alias s3='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3' s3 ls Services: Service Mgmt: External DNS: https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443 Internal DNS: https://noobaa-mgmt.openshift-storage.svc:443 Internal IP: https://172.30.235.12:443 Node Ports: https://10.0.142.103:31385 Pod Ports: https://10.131.0.19:8443 serviceS3: External DNS: 3 https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443 Internal DNS: https://s3.openshift-storage.svc:443 Internal IP: https://172.30.86.41:443 Node Ports: https://10.0.142.103:31011 Pod Ports: https://10.131.0.19:6443",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa status -n openshift-storage",
"INFO[0000] Namespace: openshift-storage INFO[0000] INFO[0000] CRD Status: INFO[0003] β
Exists: CustomResourceDefinition \"noobaas.noobaa.io\" INFO[0003] β
Exists: CustomResourceDefinition \"backingstores.noobaa.io\" INFO[0003] β
Exists: CustomResourceDefinition \"bucketclasses.noobaa.io\" INFO[0004] β
Exists: CustomResourceDefinition \"objectbucketclaims.objectbucket.io\" INFO[0004] β
Exists: CustomResourceDefinition \"objectbuckets.objectbucket.io\" INFO[0004] INFO[0004] Operator Status: INFO[0004] β
Exists: Namespace \"openshift-storage\" INFO[0004] β
Exists: ServiceAccount \"noobaa\" INFO[0005] β
Exists: Role \"ocs-operator.v0.0.271-6g45f\" INFO[0005] β
Exists: RoleBinding \"ocs-operator.v0.0.271-6g45f-noobaa-f9vpj\" INFO[0006] β
Exists: ClusterRole \"ocs-operator.v0.0.271-fjhgh\" INFO[0006] β
Exists: ClusterRoleBinding \"ocs-operator.v0.0.271-fjhgh-noobaa-pdxn5\" INFO[0006] β
Exists: Deployment \"noobaa-operator\" INFO[0006] INFO[0006] System Status: INFO[0007] β
Exists: NooBaa \"noobaa\" INFO[0007] β
Exists: StatefulSet \"noobaa-core\" INFO[0007] β
Exists: Service \"noobaa-mgmt\" INFO[0008] β
Exists: Service \"s3\" INFO[0008] β
Exists: Secret \"noobaa-server\" INFO[0008] β
Exists: Secret \"noobaa-operator\" INFO[0008] β
Exists: Secret \"noobaa-admin\" INFO[0009] β
Exists: StorageClass \"openshift-storage.noobaa.io\" INFO[0009] β
Exists: BucketClass \"noobaa-default-bucket-class\" INFO[0009] β
(Optional) Exists: BackingStore \"noobaa-default-backing-store\" INFO[0010] β
(Optional) Exists: CredentialsRequest \"noobaa-cloud-creds\" INFO[0010] β
(Optional) Exists: PrometheusRule \"noobaa-prometheus-rules\" INFO[0010] β
(Optional) Exists: ServiceMonitor \"noobaa-service-monitor\" INFO[0011] β
(Optional) Exists: Route \"noobaa-mgmt\" INFO[0011] β
(Optional) Exists: Route \"s3\" INFO[0011] β
Exists: PersistentVolumeClaim \"db-noobaa-core-0\" INFO[0011] β
System Phase is \"Ready\" INFO[0011] β
Exists: \"noobaa-admin\" #------------------# #- Mgmt Addresses -# #------------------# ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31385] InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443] InternalIP : [https://172.30.235.12:443] PodPorts : [https://10.131.0.19:8443] #--------------------# #- Mgmt Credentials -# #--------------------# email : [email protected] password : HKLbH1rSuVU0I/souIkSiA== #----------------# #- S3 Addresses -# #----------------# 1 ExternalDNS : [https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31011] InternalDNS : [https://s3.openshift-storage.svc:443] InternalIP : [https://172.30.86.41:443] PodPorts : [https://10.131.0.19:6443] #------------------# #- S3 Credentials -# #------------------# 2 AWS_ACCESS_KEY_ID : jVmAsu9FsvRHYmfjTiHV 3 AWS_SECRET_ACCESS_KEY : E//420VNedJfATvVSmDz6FMtsSAzuBv6z180PT5c #------------------# #- Backing Stores -# #------------------# NAME TYPE TARGET-BUCKET PHASE AGE noobaa-default-backing-store aws-s3 noobaa-backing-store-15dc896d-7fe0-4bed-9349-5942211b93c9 Ready 141h35m32s #------------------# #- Bucket Classes -# #------------------# NAME PLACEMENT PHASE AGE noobaa-default-bucket-class {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]} Ready 141h35m33s #-----------------# #- Bucket Claims -# #-----------------# No OBC's found.",
"AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls",
"oc adm groups new cluster-admins",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admins",
"oc adm groups add-users cluster-admins <user-name> <user-name> <user-name>",
"oc adm groups remove-users cluster-admins <user-name> <user-name> <user-name>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] β
Exists: NooBaa \"noobaa\" INFO[0002] β
Created: BackingStore \"aws-resource\" INFO[0002] β
Created: Secret \"backing-store-secret-aws-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] β
Exists: NooBaa \"noobaa\" INFO[0002] β
Created: BackingStore \"ibm-resource\" INFO[0002] β
Created: Secret \"backing-store-secret-ibm-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name>",
"INFO[0001] β
Exists: NooBaa \"noobaa\" INFO[0002] β
Created: BackingStore \"azure-resource\" INFO[0002] β
Created: Secret \"backing-store-secret-azure-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name>",
"INFO[0001] β
Exists: NooBaa \"noobaa\" INFO[0002] β
Created: BackingStore \"google-gcp\" INFO[0002] β
Created: Secret \"backing-store-google-cloud-storage-gcp\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create pv-pool <backingstore_name> --num-volumes=<NUMBER OF VOLUMES> --pv-size-gb=<VOLUME SIZE> --storage-class=<LOCAL STORAGE CLASS>",
"INFO[0001] β
Exists: NooBaa \"noobaa\" INFO[0002] β
Exists: BackingStore \"local-mcg-storage\"",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> storageClass: <LOCAL STORAGE CLASS> type: pv-pool",
"noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint>",
"get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage",
"INFO[0001] β
Exists: NooBaa \"noobaa\" INFO[0002] β
Created: BackingStore \"rgw-resource\" INFO[0002] β
Created: Secret \"backing-store-secret-rgw-resource\"",
"apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror",
"noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror",
"additionalConfig: bucketclass: mirror-to-aws",
"{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }",
"aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy BucketPolicy",
"aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io",
"apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY",
"oc apply -f <yaml.file>",
"oc get cm <obc-name> -o yaml",
"oc get secret <obc_name> -o yaml",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa obc create <obc-name> -n openshift-storage",
"INFO[0001] β
Created: ObjectBucketClaim \"test21obc\"",
"oc get obc -n openshift-storage",
"NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s",
"oc get obc test21obc -o yaml -n openshift-storage",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound",
"oc get -n openshift-storage secret test21obc -o yaml",
"Example output: apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque",
"oc get -n openshift-storage cm test21obc -o yaml",
"apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osd",
"oc debug node/<node name> chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osd",
"oc debug node/<node name> chroot /host",
"lsblk"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html-single/deploying_and_managing_openshift_data_foundation_using_google_cloud/adding-an-aws-s3-namespace-bucket-using-the-multicloud-object-gateway-cli_gcp |
3.2.2. Direct Routing and iptables | 3.2.2. Direct Routing and iptables You may also work around the ARP issue using the direct routing method by creating iptables firewall rules. To configure direct routing using iptables , you must add rules that create a transparent proxy so that a real server will service packets sent to the VIP address, even though the VIP address does not exist on the system. The iptables method is simpler to configure than the arptables_jf method. This method also circumvents the LVS ARP issue entirely, because the virtual IP address(es) only exist on the active LVS director. However, there are performance issues using the iptables method compared to arptables_jf , as there is overhead in forwarding/masquerading every packet. You also cannot reuse ports using the iptables method. For example, it is not possible to run two separate Apache HTTPD Server services bound to port 80, because both must bind to INADDR_ANY instead of the virtual IP addresses. To configure direct routing using the iptables method, perform the following steps: On each real server, enter the following command for every VIP, port, and protocol (TCP or UDP) combination intended to be serviced for the real server: iptables -t nat -A PREROUTING -p <tcp|udp> -d <vip> --dport <port> -j REDIRECT This command will cause the real servers to process packets destined for the VIP and port that they are given. Save the configuration on each real server: The commands above cause the system to reload the iptables configuration on bootup - before the network is started. | [
"service iptables save chkconfig --level 2345 iptables on"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s2-lvs-direct-iptables-vsa |
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.4 Documentation Data Grid 8.4 Component Details Supported Configurations for Data Grid 8.4 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/cache_encoding_and_marshalling/rhdg-docs_datagrid |
Chapter 4. Alerts | Chapter 4. Alerts 4.1. Setting up alerts For internal Mode clusters, various alerts related to the storage metrics services, storage cluster, disk devices, cluster health, cluster capacity, and so on are displayed in the Block and File, and the object dashboards. These alerts are not available for external Mode. Note It might take a few minutes for alerts to be shown in the alert panel, because only firing alerts are visible in this panel. You can also view alerts with additional details and customize the display of Alerts in the OpenShift Container Platform. For more information, see Managing alerts . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/monitoring_openshift_data_foundation/alerts |
Chapter 24. How do vCPUs, hyper-threading, and subscription structure affect the subscriptions service usage data? | Chapter 24. How do vCPUs, hyper-threading, and subscription structure affect the subscriptions service usage data? The Red Hat OpenShift portfolio contains offerings that track usage with a unit of measurement of cores, but this measurement is obfuscated by virtualization and multithreading technologies. The behavior of these technologies led to the development of the term vCPUs to help describe the virtual consumption of physical CPUs, but this term can vary in its meaning. In addition, the structure of Red Hat OpenShift offerings can be complex, making usage data in the subscriptions service difficult to understand. Red Hat has responded to various customer concerns about Red Hat OpenShift usage data through a series of improvements, both to the subscriptions service itself and to the underlying technologies and methodologies that inform Red Hat OpenShift usage tracking. 24.1. Improved calculations for x86-64 architectures with simultaneous multithreading October 2021: This change assumes that simultaneous multithreading on x86-64 architectures is enabled, resulting in more accurate usage data within the subscriptions service. Across different technology vendors, the term vCPU can have different definitions. If you work with a number of different vendors, the definition that you use might not match the definition that is used by Red Hat. As a result, you might not be familiar with how Red Hat and the subscriptions service measures usage when vCPUs and simultaneous multithreading (also referred to as hyper-threading) are in use within your environment. Some vendors offer hypervisors that do not expose to guests whether the CPUs of the guests use simultaneous multithreading. For example, recent versions of the VMware hypervisor do not show the simultaneous multithreading status to the kernel of the VM, and always report threads per core as 1. The effect of this counting method is that customers can interpret the subscriptions service reporting of Red Hat OpenShift usage data related to vCPUs to be artificially doubled. To address customer concerns about vCPU counting, Red Hat has adjusted its assumptions related to simultaneous multithreading. Red Hat now assumes simultaneous multithreading of 2 threads per core for x86 architectures. For many hypervisors, that assumption results in an accurate counting of vCPUs per core, and customers who use those hypervisors will see no change in their Red Hat OpenShift usage data in the subscriptions service. However, other customers who use hypervisors that do not expose simultaneous multithreading status to the kernel will see an abrupt change in subscriptions service data in October 2021. Those customers will see their related Red Hat OpenShift usage data in the subscriptions service reduced by 50% on the date that this change in counting is implemented. Past data will not be affected. Customers who encounter this situation will not be penalized. Red Hat requires that the customer purchase enough subscriptions to cover the usage as counted in the subscriptions service only. In the past, the discrepancies in the definitions for vCPUs have resulted in known problems with the interpretation of usage and capacity data for some subscriptions service users. 
This change in the assumptions for simultaneous multithreading is intended to improve the accuracy of vCPU usage data across a wider spectrum of customers, regardless of the hypervisor technology that is deployed. If you have questions or concerns related to the usage and capacity data that is displayed in the subscriptions service, work with your Red Hat account team to help you understand your data and account status. For additional information about the resolution of this problem, you can also log in to your Red Hat account to view the following issue: Bugzilla issue 1934915 . 24.2. Improved analysis of subscription capacity for certain subscriptions January 2022: These changes improved capacity analysis for subscriptions that include extra entitlements or infrastructure subscriptions. These improvements resulted in a more accurate calculation of usage and capacity data for those subscriptions and a more accurate calculation of the subscription threshold within the subscriptions service for the Red Hat OpenShift portion of your Red Hat account. Improved accuracy for subscriptions with numerous entitlements: Certain Red Hat OpenShift subscriptions that included a large capacity of cores also included extra entitlements. These entitlements helped to streamline installation by using tools that rely on attached entitlement workflows. However, these extra entitlements were calculated as extra capacity by the subscriptions service, resulting in confusion about how much Red Hat OpenShift could legally be deployed by customers. As of January 2022, counting methods have been revised to remove the extra entitlements from the capacity calculations. Infrastructure subscriptions excluded from capacity calculations: For certain purchases of Red Hat OpenShift subscriptions, a particular type of Red Hat OpenShift infrastructure subscription would be added to that purchase automatically. This type of subscription is used to provide infrastructure support for large deployments. Both version 4.1 and later and version 3.11 subscriptions were affected. Normally for Red Hat OpenShift version 4.1 and later, the subscriptions service does not count infrastructure nodes when calculating your Red Hat OpenShift capacity. However, for accounts that received this infrastructure subscription, the improper calculations were occurring at the subscription level, and that data was passed to the subscriptions service. Red Hat OpenShift capacity numbers were artificially inflated, resulting in an incorrect subscription threshold in the subscriptions service. As of January 2022, an added infrastructure subscription is not considered when calculating your Red Hat OpenShift capacity. 24.3. Isolating Red Hat OpenShift Cluster Manager operational metrics from metrics for the subscriptions service December 2023 and March 2024: These changes involved analyzing the metrics used by Red Hat OpenShift Cluster Manager for internal operational purposes and determining whether these metrics were still the optimal metrics to use for subscription usage tracking purposes in the subscriptions service. The use of these metrics for two different purposes was determined to be ineffective, and increased the chances for inaccurate subscription usage reporting. Therefore, the subscriptions service changed to a service-level metric that is designed exclusively to track the subscription obligation in cores. The overall assumption for simultaneous multithreading in virtualized, x86 environments at a factor of 2 threads per core remains in effect. 
For additional information about these changes, see the details in the OCP Cluster Size Corrections Customer Portal article. | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/con-trbl-how-do-vcpus-hyperthreading-affect-data_assembly-troubleshooting-common-questions-ctxt |
Chapter 13. Getting started with IPVLAN | Chapter 13. Getting started with IPVLAN IPVLAN is a driver for a virtual network device that can be used in a container environment to access the host network. IPVLAN exposes a single MAC address to the external network regardless of the number of IPVLAN devices created inside the host network. This means that a user can have multiple IPVLAN devices in multiple containers while the corresponding switch reads a single MAC address. The IPVLAN driver is useful when the local switch imposes constraints on the total number of MAC addresses that it can manage. 13.1. IPVLAN modes The following modes are available for IPVLAN: L2 mode In IPVLAN L2 mode , virtual devices receive and respond to address resolution protocol (ARP) requests. The netfilter framework runs only inside the container that owns the virtual device. No netfilter chains are executed in the default namespace on the containerized traffic. Using L2 mode provides good performance, but less control over the network traffic. L3 mode In L3 mode , virtual devices process only L3 traffic and above. Virtual devices do not respond to ARP requests, and users must configure the neighbor entries for the IPVLAN IP addresses on the relevant peers manually. The egress traffic of a relevant container lands on the netfilter POSTROUTING and OUTPUT chains in the default namespace, while the ingress traffic is handled in the same way as in L2 mode . Using L3 mode provides good control but decreases the network traffic performance. L3S mode In L3S mode , virtual devices process traffic in the same way as in L3 mode , except that both the egress and ingress traffic of a relevant container land on the netfilter chains in the default namespace. L3S mode behaves in a similar way to L3 mode but provides greater control of the network. Note The IPVLAN virtual device does not receive broadcast and multicast traffic in L3 and L3S modes. 13.2. Comparison of IPVLAN and MACVLAN The following table shows the major differences between MACVLAN and IPVLAN: MACVLAN IPVLAN Uses a MAC address for each MACVLAN device. Note that, if a switch reaches the maximum number of MAC addresses it can store in its MAC table, connectivity can be lost. Uses a single MAC address, which does not limit the number of IPVLAN devices. Netfilter rules for a global namespace cannot affect traffic to or from a MACVLAN device in a child namespace. It is possible to control traffic to or from an IPVLAN device in L3 mode and L3S mode . Neither IPVLAN nor MACVLAN requires any level of encapsulation. 13.3. Creating and configuring the IPVLAN device using iproute2 This procedure shows how to set up the IPVLAN device using iproute2 . Procedure To create an IPVLAN device, enter the following command: Note that a network interface controller (NIC) is a hardware component that connects a computer to a network. Example 13.1. Creating an IPVLAN device To assign an IPv4 or IPv6 address to the interface, enter the following command: If you configure an IPVLAN device in L3 mode or L3S mode , make the following setups: Configure the neighbor setup for the remote peer on the remote host: where MAC_address is the MAC address of the real NIC that the IPVLAN device is based on. Configure an IPVLAN device for L3 mode with the following command: For L3S mode : where IP_address represents the address of the remote peer. To set an IPVLAN device active, enter the following command: To check if the IPVLAN device is active, execute the following command on the remote host: where IP_address is the IP address of the IPVLAN device.
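Putting the steps together, the following is a minimal end-to-end sketch for L2 mode , reusing the enp0s31f6 NIC from Example 13.1 and a placeholder address from the 192.0.2.0/24 documentation range; substitute your own interface name and addressing:
ip link add link enp0s31f6 name my_ipvlan type ipvlan mode l2    # create the IPVLAN device on the physical NIC
ip addr add dev my_ipvlan 192.0.2.10/24    # assign an IPv4 address to the device
ip link set dev my_ipvlan up    # activate the device
ping 192.0.2.10    # run from the remote host to verify that the device responds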
| [
"ip link add link real_NIC_device name IPVLAN_device type ipvlan mode l2",
"ip link add link enp0s31f6 name my_ipvlan type ipvlan mode l2 ip link 47: my_ipvlan@enp0s31f6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether e8:6a:6e:8a:a2:44 brd ff:ff:ff:ff:ff:ff",
"ip addr add dev IPVLAN_device IP_address/subnet_mask_prefix",
"ip neigh add dev peer_device IPVLAN_device_IP_address lladdr MAC_address",
"ip route add dev <real_NIC_device> <peer_IP_address/32>",
"ip route add dev real_NIC_device peer_IP_address/32",
"ip link set dev IPVLAN_device up",
"ping IP_address"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/getting-started-with-ipvlan_configuring-and-managing-networking |