title | content | commands | url
---|---|---|---|
Monitoring your OpenShift cluster health with Insights Advisor | Monitoring your OpenShift cluster health with Insights Advisor Red Hat Insights for OpenShift 1-latest Using the Insights Advisor service to monitor your OpenShift cluster infrastructure Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_insights_for_openshift/1-latest/html/monitoring_your_openshift_cluster_health_with_insights_advisor/index |
Chapter 12. Registering RHEL by using Subscription Manager | Chapter 12. Registering RHEL by using Subscription Manager After installation, you must register your system to receive continuous updates. 12.1. Registering RHEL 8 using the installer GUI You can register a Red Hat Enterprise Linux 8 system by using the RHEL installer GUI. Prerequisites You have a valid user account on the Red Hat Customer Portal. See the Create a Red Hat Login page . You have a valid Activation Key and Organization ID. Procedure From the Installation Summary screen, under Software , click Connect to Red Hat . Authenticate your Red Hat account using the Account or Activation Key option. Optional: In the Set System Purpose field, select the Role , SLA , and Usage attributes that you want to set from the drop-down menus. At this point, your Red Hat Enterprise Linux 8 system has been successfully registered. 12.2. Registration Assistant Registration Assistant is designed to help you choose the most suitable registration option for your Red Hat Enterprise Linux environment. Additional resources For assistance with using a username and password to register RHEL with the Subscription Manager client, see the RHEL registration assistant on the Customer Portal. For assistance with registering your RHEL system to Red Hat Insights, see the Insights registration assistant on the Hybrid Cloud Console. 12.3. Registering your system using the command line You can register your Red Hat Enterprise Linux 8 subscription by using the command line. For an improved and simplified experience registering your hosts to Red Hat, use remote host configuration (RHC). The RHC client registers your system to Red Hat, making your system ready for Insights data collection and enabling direct issue remediation from Insights for Red Hat Enterprise Linux. For more information, see RHC registration . Prerequisites You have an active, non-evaluation Red Hat Enterprise Linux subscription. Your Red Hat subscription status is verified. You have not previously received a Red Hat Enterprise Linux 8 subscription. You have successfully installed Red Hat Enterprise Linux 8 and logged into the system as root. Procedure Open a terminal window as a root user. Register your Red Hat Enterprise Linux system by using the activation key: When the system is successfully registered, output similar to the following is displayed: Additional resources Using an activation key to register a system with Red Hat Subscription Manager Getting Started with RHEL System Registration | [
"subscription-manager register --activationkey= <activation_key_name> --org= <organization_ID>",
"The system has been registered with id: 62edc0f8-855b-4184-b1b8-72a9dc793b96"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/registering-rhel-by-using-subscription-manager_rhel-installer |
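A minimal shell sketch of the command-line registration flow described above; the activation key and organization ID placeholders are examples and must be replaced with your own values, and the commands must be run as root:

# Register the system with an activation key
subscription-manager register --activationkey=<activation_key_name> --org=<organization_ID>

# Confirm the registration identity and the current subscription status
subscription-manager identity
subscription-manager status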
Console APIs | Console APIs OpenShift Container Platform 4.17 Reference guide for console APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/console_apis/index |
Chapter 6. Red Hat build of Kogito events add-on | Chapter 6. Red Hat build of Kogito events add-on The events add-on provides a default implementation of the EventEmitter and EventReceiver interfaces for the supported target platforms. You can use the EventEmitter and EventReceiver interfaces to enable messaging by process, serverless workflow events, and event decision handling. 6.1. Implementing a message payload decorator for the Red Hat build of Kogito events add-on Any dependent add-on can implement the MessagePayloadDecorator . Prerequisites You have installed the Events add-on in Red Hat build of Kogito. Procedure Create a file named META-INF/services/org.kie.kogito.add-on.cloudevents.message.MessagePayloadDecorator in your class path. Open the file. Enter the fully qualified name of your implementation class in the file. Save the file. The MessagePayloadDecoratorProvider loads the file upon application start-up and adds your implementation to the decoration chain. When Red Hat build of Kogito calls MessagePayloadDecoratorProvider#decorate , your implementation is part of the decoration algorithm. To use the events add-on, add the following code to the pom.xml file of your project: Events SmallRye add-on for Quarkus <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-addons-quarkus-events-smallrye</artifactId> <version>1.15</version> </dependency> Events decisions add-on for Quarkus <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-addons-events-decisions</artifactId> <version>1.15</version> </dependency> Events Kafka add-on for Spring Boot <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-addons-springboot-events-kafka</artifactId> <version>1.15</version> </dependency> Events decisions add-on for Spring Boot <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-addons-springboot-events-decisions</artifactId> <version>1.15</version> </dependency> | [
"<dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-addons-quarkus-events-smallrye</artifactId> <version>1.15</version> </dependency>",
"<dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-addons-events-decisions</artifactId> <version>1.15</version> </dependency>",
"<dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-addons-springboot-events-kafka</artifactId> <version>1.15</version> </dependency>",
"<dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-addons-springboot-events-decisions</artifactId> <version>1.15</version> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/con-kogito-events-add-on_getting-started-kogito-microservices |
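A minimal shell sketch of the service-registration step described above, assuming a standard Maven project layout and a hypothetical decorator class com.example.MyPayloadDecorator:

# Create the ServiceLoader descriptor that registers the custom MessagePayloadDecorator implementation
mkdir -p src/main/resources/META-INF/services
echo "com.example.MyPayloadDecorator" > src/main/resources/META-INF/services/org.kie.kogito.add-on.cloudevents.message.MessagePayloadDecorator

With the descriptor on the class path, the MessagePayloadDecoratorProvider picks up the listed class at application start-up and adds it to the decoration chain.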
3.3.2. Configuring a Backup Fence Device | 3.3.2. Configuring a Backup Fence Device You can define multiple fencing methods for a node. If fencing fails using the first method, the system will attempt to fence the node using the second method, followed by any additional methods you have configured. Use the following procedure to configure a backup fence device for a node. Use the procedure provided in Section 3.3.1, "Configuring a Single Fence Device for a Node" to configure the primary fencing method for a node. Beneath the display of the primary method you defined, click Add Fence Method . Enter a name for the backup fencing method that you are configuring for this node and click Submit . This displays the node-specific screen that now displays the method you have just added, below the primary fence method. Configure a fence instance for this method by clicking Add Fence Instance . This displays a drop-down menu from which you can select a fence device you have previously configured, as described in Section 3.2.1, "Creating a Fence Device" . Select a fence device for this method. If this fence device requires that you configure node-specific parameters, the display shows the parameters to configure. Click Submit . This returns you to the node-specific screen with the fence method and fence instance displayed. You can continue to add fencing methods as needed. You can rearrange the order of fencing methods that will be used for this node by clicking on Move Up and Move Down . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s2-backup-fence-config-conga-CA |
15.4. Impressing Your Friends with RPM | 15.4. Impressing Your Friends with RPM RPM is a useful tool for both managing your system and diagnosing and fixing problems. The best way to make sense of all of its options is to look at some examples. Perhaps you have deleted some files by accident, but you are not sure what you deleted. To verify your entire system and see what might be missing, you could try the following command: If some files are missing or appear to have been corrupted, you should probably either re-install the package or uninstall and then re-install the package. At some point, you might see a file that you do not recognize. To find out which package owns it, enter: The output would look like the following: We can combine the above two examples in the following scenario. Say you are having problems with /usr/bin/paste . You would like to verify the package that owns that program, but you do not know which package owns paste . Enter the following command, and the appropriate package is verified. Do you want to find out more information about a particular program? You can try the following command to locate the documentation which came with the package that owns that program: The output would be similar to the following: You may find a new RPM, but you do not know what it does. To find information about it, use the following command: The output would be similar to the following: Perhaps you now want to see what files the crontabs RPM installs. You would enter the following: The output is similar to the following: These are just a few examples. As you use it, you will find many more uses for RPM. | [
"-Va",
"-qf /usr/bin/ggv",
"ggv-2.6.0-2",
"-Vf /usr/bin/paste",
"-qdf /usr/bin/free",
"/usr/share/doc/procps-3.2.3/BUGS /usr/share/doc/procps-3.2.3/FAQ /usr/share/doc/procps-3.2.3/NEWS /usr/share/doc/procps-3.2.3/TODO /usr/share/man/man1/free.1.gz /usr/share/man/man1/pgrep.1.gz /usr/share/man/man1/pkill.1.gz /usr/share/man/man1/pmap.1.gz /usr/share/man/man1/ps.1.gz /usr/share/man/man1/skill.1.gz /usr/share/man/man1/slabtop.1.gz /usr/share/man/man1/snice.1.gz /usr/share/man/man1/tload.1.gz /usr/share/man/man1/top.1.gz /usr/share/man/man1/uptime.1.gz /usr/share/man/man1/w.1.gz /usr/share/man/man1/watch.1.gz /usr/share/man/man5/sysctl.conf.5.gz /usr/share/man/man8/sysctl.8.gz /usr/share/man/man8/vmstat.8.gz",
"-qip crontabs-1.10-7.noarch.rpm",
"Name : crontabs Relocations: (not relocatable) Version : 1.10 Vendor: Red Hat, Inc Release : 7 Build Date: Mon 20 Sep 2004 05:58:10 PM EDT Install Date: (not installed) Build Host: tweety.build.redhat.com Group : System Environment/Base Source RPM: crontabs-1.10-7.src.rpm Size : 1004 License: Public Domain Signature : DSA/SHA1, Wed 05 Jan 2005 06:05:25 PM EST, Key ID 219180cddb42a60e Packager : Red Hat, Inc <http://bugzilla.redhat.com/bugzilla> Summary : Root crontab files used to schedule the execution of programs. Description : The crontabs package contains root crontab files. Crontab is the program used to install, uninstall, or list the tables used to drive the cron daemon. The cron daemon checks the crontab files to see when particular commands are scheduled to be executed. If commands are scheduled, then it executes them.",
"-qlp crontabs-1.10-5.noarch.rpm",
"/etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly /etc/crontab /usr/bin/run-parts"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Package_Management_with_RPM-Impressing_Your_Friends_with_RPM |
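A short shell sketch that chains the queries above: verify the whole system, then identify and inspect the package that owns an unfamiliar file. The file paths are illustrative:

# Verify every installed package and keep the report for later review
rpm -Va > /tmp/rpm-verify.txt 2>&1

# Find the owning package of a file, then list the documentation shipped with it
rpm -qf /usr/bin/paste
rpm -qdf /usr/bin/paste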
Appendix E. Preventing kernel modules from loading automatically | Appendix E. Preventing kernel modules from loading automatically You can prevent a kernel module from being loaded automatically, whether the module is loaded directly, loaded as a dependency from another module, or during the boot process. Procedure The module name must be added to a configuration file for the modprobe utility. This file must reside in the configuration directory /etc/modprobe.d . For more information on this configuration directory, see the man page modprobe.d . Ensure the module is not configured to get loaded in any of the following: /etc/modprobe.conf /etc/modprobe.d/* /etc/rc.modules /etc/sysconfig/modules/* # modprobe --showconfig <_configuration_file_name_> If the module appears in the output, ensure it is ignored and not loaded: # modprobe --ignore-install <_module_name_> Unload the module from the running system, if it is loaded: # modprobe -r <_module_name_> Prevent the module from being loaded directly by adding the blacklist line to a configuration file specific to the system - for example /etc/modprobe.d/local-dontload.conf : # echo "blacklist <_module_name_>" >> /etc/modprobe.d/local-dontload.conf Note This step does not prevent a module from loading if it is a required or an optional dependency of another module. Prevent optional modules from being loaded on demand: # echo "install <_module_name_> /bin/false" >> /etc/modprobe.d/local-dontload.conf Important If the excluded module is required for other hardware, excluding it might cause unexpected side effects. Make a backup copy of your initramfs : # cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak If the kernel module is part of the initramfs , rebuild your initial ramdisk image, omitting the module: # dracut --omit-drivers <_module_name_> -f Get the current kernel command line parameters: # grub2-editenv - list | grep kernelopts Append <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_> to the generated output: # grub2-editenv - set kernelopts="<> <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>" For example: # grub2-editenv - set kernelopts="root=/dev/mapper/rhel_example-root ro crashkernel=auto resume=/dev/mapper/rhel_example-swap rd.lvm.lv=rhel_example/root rd.lvm.lv=rhel_example/swap <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>" Make a backup copy of the kdump initramfs : # cp /boot/initramfs-$(uname -r)kdump.img /boot/initramfs-$(uname -r)kdump.img.$(date +%m-%d-%H%M%S).bak Append rd.driver.blacklist=<_module_name_> to the KDUMP_COMMANDLINE_APPEND setting in /etc/sysconfig/kdump to omit it from the kdump initramfs : # sed -i '/^KDUMP_COMMANDLINE_APPEND=/s/"$/ rd.driver.blacklist=module_name"/' /etc/sysconfig/kdump Restart the kdump service to pick up the changes to the kdump initrd : # kdumpctl restart Rebuild the kdump initial ramdisk image: # mkdumprd -f /boot/initramfs-$(uname -r)kdump.img Reboot the system. E.1. Removing a module temporarily You can remove a module temporarily. Procedure Run modprobe to remove any currently-loaded module: # modprobe -r <module name> If the module cannot be unloaded, a process or another module might still be using the module. If so, terminate the process and run the modprobe command again to unload the module. | [
"modprobe --showconfig <_configuration_file_name_>",
"modprobe --ignore-install <_module_name_>",
"modprobe -r <_module_name_>",
"echo \"blacklist <_module_name_> >> /etc/modprobe.d/local-dontload.conf",
"echo \"install <_module_name_>/bin/false\" >> /etc/modprobe.d/local-dontload.conf",
"cp /boot/initramfs-USD(uname -r).img /boot/initramfs-USD(uname -r).img.USD(date +%m-%d-%H%M%S).bak",
"dracut --omit-drivers <_module_name_> -f",
"grub2-editenv - list | grep kernelopts",
"grub2-editenv - set kernelopts=\"<> <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>\"",
"grub2-editenv - set kernelopts=\"root=/dev/mapper/rhel_example-root ro crashkernel=auto resume=/dev/mapper/rhel_example-swap rd.lvm.lv=rhel_example/root rd.lvm.lv=rhel_example/swap <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>\"",
"cp /boot/initramfs-USD(uname -r)kdump.img /boot/initramfs-USD(uname -r)kdump.img.USD(date +%m-%d-%H%M%S).bak",
"sed -i '/^KDUMP_COMMANDLINE_APPEND=/s/\"USD/ rd.driver.blacklist=module_name\"/' /etc/sysconfig/kdump",
"kdumpctl restart",
"mkdumprd -f /boot/initramfs-USD(uname -r)kdump.img",
"modprobe -r <module name>"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/proc-preventing_kernel_modules_from_loading_automatically_sm_localdb_deploy |
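A condensed shell sketch of the blacklisting steps above for a hypothetical module named example_mod; run the commands as root and adapt the module name before use:

# Prevent direct loading and on-demand loading of the module
echo "blacklist example_mod" >> /etc/modprobe.d/local-dontload.conf
echo "install example_mod /bin/false" >> /etc/modprobe.d/local-dontload.conf

# Unload the module if it is currently loaded
modprobe -r example_mod

# Back up the initramfs, then rebuild it without the module
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
dracut --omit-drivers example_mod -f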
Chapter 2. Sharding clusters across Argo CD Application Controller replicas | Chapter 2. Sharding clusters across Argo CD Application Controller replicas You can shard clusters across multiple Argo CD Application Controller replicas if the controller is managing too many clusters and uses too much memory. 2.1. Enabling the round-robin sharding algorithm Important The round-robin sharding algorithm is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By default, the Argo CD Application Controller uses the non-uniform legacy hash-based sharding algorithm to assign clusters to shards. This can result in uneven cluster distribution. You can enable the round-robin sharding algorithm to achieve more equal cluster distribution across all shards. Using the round-robin sharding algorithm in Red Hat OpenShift GitOps provides the following benefits: Ensure more balanced workload distribution Prevent shards from being overloaded or underutilized Optimize the efficiency of computing resources Reduce the risk of bottlenecks Improve overall performance and reliability of the Argo CD system The introduction of alternative sharding algorithms allows for further customization based on specific use cases. You can select the algorithm that best aligns with your deployment needs, which results in greater flexibility and adaptability in diverse operational scenarios. Tip To leverage the benefits of alternative sharding algorithms in GitOps, it is crucial to enable sharding during deployment. 2.1.1. Enabling the round-robin sharding algorithm in the web console You can enable the round-robin sharding algorithm by using the OpenShift Container Platform web console. Prerequisites You have installed the Red Hat OpenShift GitOps Operator in your cluster. You have access to the OpenShift Container Platform web console. You have access to the cluster with cluster-admin privileges. Procedure In the Administrator perspective of the web console, go to Operators Installed Operators . Click Red Hat OpenShift GitOps from the installed operators and go to the Argo CD tab. Click the Argo CD instance where you want to enable the round-robin sharding algorithm, for example, openshift-gitops . Click the YAML tab and edit the YAML file as shown in the following example: Example Argo CD instance with round-robin sharding algorithm enabled apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: openshift-gitops namespace: openshift-gitops spec: controller: sharding: enabled: true 1 replicas: 3 2 env: 3 - name: ARGOCD_CONTROLLER_SHARDING_ALGORITHM value: round-robin logLevel: debug 4 1 Set the sharding.enabled parameter to true to enable sharding. 2 Set the number of replicas to the wanted value, for example, 3 . 3 Set the sharding algorithm to round-robin . 4 Set the log level to debug so that you can verify to which shard each cluster is attached. Click Save . A success notification alert, openshift-gitops has been updated to version <version> , appears. Note If you edit the default openshift-gitops instance, the Managed resource dialog box is displayed. 
Click Save again to confirm the changes. Verify that the sharding is enabled with round-robin as the sharding algorithm by performing the following steps: Go to Workloads StatefulSets . Select the namespace where you installed the Argo CD instance from the Project drop-down list. Click <instance_name>-application-controller , for example, openshift-gitops-application-controller , and go to the Pods tab. Observe the number of created application controller pods. It should correspond with the number of set replicas. Click on the controller pod you want to examine and go to the Logs tab to view the pod logs. Example controller pod logs snippet time="2023-12-13T09:05:34Z" level=info msg="ArgoCD Application Controller is starting" built="2023-12-01T19:21:49Z" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=openshift-gitops version=v2.9.2+c5ea5c4 time="2023-12-13T09:05:34Z" level=info msg="Processing clusters from shard 1" time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin" 1 time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin" time="2023-12-13T09:05:34Z" level=info msg="appResyncPeriod=3m0s, appHardResyncPeriod=0s" 1 Look for the "Using filter function: round-robin" message. In the log Search field, search for processed by shard to verify that the cluster distribution across shards is even, as shown in the following example. Important Ensure that you set the log level to debug to observe these logs. Example controller pod logs snippet time="2023-12-13T09:05:34Z" level=debug msg="ClustersList has 3 items" time="2023-12-13T09:05:34Z" level=debug msg="Adding cluster with id= and name=in-cluster to cluster's map" time="2023-12-13T09:05:34Z" level=debug msg="Adding cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 and name=in-cluster2 to cluster's map" time="2023-12-13T09:05:34Z" level=debug msg="Adding cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w and name=in-cluster3 to cluster's map" time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id= will be processed by shard 0" 1 time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1" 2 time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2" 3 1 2 3 In this example, 3 clusters are attached consecutively to shard 0, shard 1, and shard 2. Note If the number of clusters "C" is a multiple of the number of shard replicas "R", then each shard must have the same number of assigned clusters "N", which is equal to "C" divided by "R". The example shows 3 clusters and 3 replicas; therefore, each shard has 1 cluster assigned. 2.1.2. Enabling the round-robin sharding algorithm by using the CLI You can enable the round-robin sharding algorithm by using the command-line interface. Prerequisites You have installed the Red Hat OpenShift GitOps Operator in your cluster. You have access to the cluster with cluster-admin privileges. 
Procedure Enable sharding and set the number of replicas to the wanted value by running the following command: $ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"sharding":{"enabled":true,"replicas":<value>}}}}' --type=merge Example output argocd.argoproj.io/<argocd_instance> patched Configure the sharding algorithm to round-robin by running the following command: $ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"env":[{"name":"ARGOCD_CONTROLLER_SHARDING_ALGORITHM","value":"round-robin"}]}}}' --type=merge Example output argocd.argoproj.io/<argocd_instance> patched Verify that the number of Argo CD Application Controller pods corresponds with the number of set replicas by running the following command: $ oc get pods -l app.kubernetes.io/name=<argocd_instance>-application-controller -n <namespace> Example output NAME READY STATUS RESTARTS AGE <argocd_instance>-application-controller-0 1/1 Running 0 11s <argocd_instance>-application-controller-1 1/1 Running 0 32s <argocd_instance>-application-controller-2 1/1 Running 0 22s Verify that the sharding is enabled with round-robin as the sharding algorithm by running the following command: $ oc logs <argocd_application_controller_pod> -n <namespace> Example output snippet time="2023-12-13T09:05:34Z" level=info msg="ArgoCD Application Controller is starting" built="2023-12-01T19:21:49Z" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=<namespace> version=v2.9.2+c5ea5c4 time="2023-12-13T09:05:34Z" level=info msg="Processing clusters from shard 1" time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin" 1 time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin" time="2023-12-13T09:05:34Z" level=info msg="appResyncPeriod=3m0s, appHardResyncPeriod=0s" 1 Look for the "Using filter function: round-robin" message. Verify that the cluster distribution across shards is even by performing the following steps: Set the log level to debug by running the following command: $ oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"logLevel":"debug"}}}' --type=merge Example output argocd.argoproj.io/<argocd_instance> patched View the logs and search for processed by shard to observe to which shard each cluster is attached by running the following command: $ oc logs <argocd_application_controller_pod> -n <namespace> | grep "processed by shard" Example output snippet time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id= will be processed by shard 0" 1 time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1" 2 time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2" 3 1 2 3 In this example, 3 clusters are attached consecutively to shard 0, shard 1, and shard 2. Note If the number of clusters "C" is a multiple of the number of shard replicas "R", then each shard must have the same number of assigned clusters "N", which is equal to "C" divided by "R". The example shows 3 clusters and 3 replicas; therefore, each shard has 1 cluster assigned. 2.2. Enabling dynamic scaling of shards of the Argo CD Application Controller Important Dynamic scaling of shards is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By default, the Argo CD Application Controller assigns clusters to shards indefinitely. If you are using the round-robin sharding algorithm, this static assignment can result in uneven distribution of shards, particularly when replicas are added or removed. You can enable dynamic scaling of shards to automatically adjust the number of shards based on the number of clusters managed by the Argo CD Application Controller at a given time. This ensures that shards are well-balanced and optimizes the use of compute resources. Note After you enable dynamic scaling, you cannot manually modify the shard count. The system automatically adjusts the number of shards based on the number of clusters managed by the Argo CD Application Controller at a given time. 2.2.1. Enabling dynamic scaling of shards in the web console You can enable dynamic scaling of shards by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have installed the Red Hat OpenShift GitOps Operator in your cluster. Procedure In the Administrator perspective of the OpenShift Container Platform web console, go to Operators Installed Operators . From the list of Installed Operators , select the Red Hat OpenShift GitOps Operator, and then click the ArgoCD tab. Select the Argo CD instance name for which you want to enable dynamic scaling of shards, for example, openshift-gitops . Click the YAML tab, and then edit and configure the spec.controller.sharding properties as follows: Example Argo CD YAML file with dynamic scaling enabled apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: openshift-gitops namespace: openshift-gitops spec: controller: sharding: dynamicScalingEnabled: true 1 minShards: 1 2 maxShards: 3 3 clustersPerShard: 1 4 1 Set dynamicScalingEnabled to true to enable dynamic scaling. 2 Set minShards to the minimum number of shards that you want to have. The value must be set to 1 or greater. 3 Set maxShards to the maximum number of shards that you want to have. The value must be greater than the value of minShards . 4 Set clustersPerShard to the number of clusters that you want to have per shard. The value must be set to 1 or greater. Click Save . A success notification alert, openshift-gitops has been updated to version <version> , appears. Note If you edit the default openshift-gitops instance, the Managed resource dialog box is displayed. Click Save again to confirm the changes. Verification Verify that sharding is enabled by checking the number of pods in the namespace: Go to Workloads StatefulSets . Select the namespace where the Argo CD instance is deployed from the Project drop-down list, for example, openshift-gitops . Click the name of the StatefulSet object that has the name of the Argo CD instance, for example openshift-gitops-application-controller . Click the Pods tab, and then verify that the number of pods is equal to or greater than the value of minShards that you have set in the Argo CD YAML file. 2.2.2.
Enabling dynamic scaling of shards by using the CLI You can enable dynamic scaling of shards by using the OpenShift CLI ( oc ). Prerequisites You have installed the Red Hat OpenShift GitOps Operator in your cluster. You have access to the cluster with cluster-admin privileges. Procedure Log in to the cluster by using the oc tool as a user with cluster-admin privileges. Enable dynamic scaling by running the following command: $ oc patch argocd <argocd_instance> -n <namespace> --type=merge --patch='{"spec":{"controller":{"sharding":{"dynamicScalingEnabled":true,"minShards":<value>,"maxShards":<value>,"clustersPerShard":<value>}}}}' Example command $ oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch='{"spec":{"controller":{"sharding":{"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1}}}}' 1 1 The example command enables dynamic scaling for the openshift-gitops Argo CD instance in the openshift-gitops namespace, and sets the minimum number of shards to 1 , the maximum number of shards to 3 , and the number of clusters per shard to 1 . The values of minShards and clustersPerShard must be set to 1 or greater. The value of maxShards must be equal to or greater than the value of minShards . Example output argocd.argoproj.io/openshift-gitops patched Verification Check the spec.controller.sharding properties of the Argo CD instance: $ oc get argocd <argocd_instance> -n <namespace> -o jsonpath='{.spec.controller.sharding}' Example command $ oc get argocd openshift-gitops -n openshift-gitops -o jsonpath='{.spec.controller.sharding}' Example output when dynamic scaling of shards is enabled {"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1} Optional: Verify that dynamic scaling is enabled by checking the configured spec.controller.sharding properties in the configuration YAML file of the Argo CD instance in the OpenShift Container Platform web console. Check the number of Argo CD Application Controller pods: $ oc get pods -n <namespace> -l app.kubernetes.io/name=<argocd_instance>-application-controller Example command $ oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller Example output NAME READY STATUS RESTARTS AGE openshift-gitops-application-controller-0 1/1 Running 0 2m 1 1 The number of Argo CD Application Controller pods must be equal to or greater than the value of minShards . 2.2.3. Additional resources Argo CD custom resource properties Automatically scaling pods with the horizontal pod autoscaler | [
"apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: openshift-gitops namespace: openshift-gitops spec: controller: sharding: enabled: true 1 replicas: 3 2 env: 3 - name: ARGOCD_CONTROLLER_SHARDING_ALGORITHM value: round-robin logLevel: debug 4",
"time=\"2023-12-13T09:05:34Z\" level=info msg=\"ArgoCD Application Controller is starting\" built=\"2023-12-01T19:21:49Z\" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=openshift-gitops version=v2.9.2+c5ea5c4 time=\"2023-12-13T09:05:34Z\" level=info msg=\"Processing clusters from shard 1\" time=\"2023-12-13T09:05:34Z\" level=info msg=\"Using filter function: round-robin\" 1 time=\"2023-12-13T09:05:34Z\" level=info msg=\"Using filter function: round-robin\" time=\"2023-12-13T09:05:34Z\" level=info msg=\"appResyncPeriod=3m0s, appHardResyncPeriod=0s\"",
"time=\"2023-12-13T09:05:34Z\" level=debug msg=\"ClustersList has 3 items\" time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Adding cluster with id= and name=in-cluster to cluster's map\" time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Adding cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 and name=in-cluster2 to cluster's map\" time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Adding cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w and name=in-cluster3 to cluster's map\" time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id= will be processed by shard 0\" 1 time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1\" 2 time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2\" 3",
"oc patch argocd <argocd_instance> -n <namespace> --patch='{\"spec\":{\"controller\":{\"sharding\":{\"enabled\":true,\"replicas\":<value>}}}}' --type=merge",
"argocd.argoproj.io/<argocd_instance> patched",
"oc patch argocd <argocd_instance> -n <namespace> --patch='{\"spec\":{\"controller\":{\"env\":[{\"name\":\"ARGOCD_CONTROLLER_SHARDING_ALGORITHM\",\"value\":\"round-robin\"}]}}}' --type=merge",
"argocd.argoproj.io/<argocd_instance> patched",
"oc get pods -l app.kubernetes.io/name=<argocd_instance>-application-controller -n <namespace>",
"NAME READY STATUS RESTARTS AGE <argocd_instance>-application-controller-0 1/1 Running 0 11s <argocd_instance>-application-controller-1 1/1 Running 0 32s <argocd_instance>-application-controller-2 1/1 Running 0 22s",
"oc logs <argocd_application_controller_pod> -n <namespace>",
"time=\"2023-12-13T09:05:34Z\" level=info msg=\"ArgoCD Application Controller is starting\" built=\"2023-12-01T19:21:49Z\" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=<namespace> version=v2.9.2+c5ea5c4 time=\"2023-12-13T09:05:34Z\" level=info msg=\"Processing clusters from shard 1\" time=\"2023-12-13T09:05:34Z\" level=info msg=\"Using filter function: round-robin\" 1 time=\"2023-12-13T09:05:34Z\" level=info msg=\"Using filter function: round-robin\" time=\"2023-12-13T09:05:34Z\" level=info msg=\"appResyncPeriod=3m0s, appHardResyncPeriod=0s\"",
"oc patch argocd <argocd_instance> -n <namespace> --patch='{\"spec\":{\"controller\":{\"logLevel\":\"debug\"}}}' --type=merge",
"argocd.argoproj.io/<argocd_instance> patched",
"oc logs <argocd_application_controller_pod> -n <namespace> | grep \"processed by shard\"",
"time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id= will be processed by shard 0\" 1 time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1\" 2 time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2\" 3",
"apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: openshift-gitops namespace: openshift-gitops spec: controller: sharding: dynamicScalingEnabled: true 1 minShards: 1 2 maxShards: 3 3 clustersPerShard: 1 4",
"oc patch argocd <argocd_instance> -n <namespace> --type=merge --patch='{\"spec\":{\"controller\":{\"sharding\":{\"dynamicScalingEnabled\":true,\"minShards\":<value>,\"maxShards\":<value>,\"clustersPerShard\":<value>}}}}'",
"oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch='{\"spec\":{\"controller\":{\"sharding\":{\"dynamicScalingEnabled\":true,\"minShards\":1,\"maxShards\":3,\"clustersPerShard\":1}}}}' 1",
"argocd.argoproj.io/openshift-gitops patched",
"oc get argocd <argocd_instance> -n <namespace> -o jsonpath='{.spec.controller.sharding}'",
"oc get argocd openshift-gitops -n openshift-gitops -o jsonpath='{.spec.controller.sharding}'",
"{\"dynamicScalingEnabled\":true,\"minShards\":1,\"maxShards\":3,\"clustersPerShard\":1}",
"oc get pods -n <namespace> -l app.kubernetes.io/name=<argocd_instance>-application-controller",
"oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller",
"NAME READY STATUS RESTARTS AGE openshift-gitops-application-controller-0 1/1 Running 0 2m 1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/declarative_cluster_configuration/sharding-clusters-across-argo-cd-application-controller-replicas |
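A small shell sketch, assuming the default openshift-gitops instance and namespace and the debug log level set as described above, that loops over all Application Controller pods and prints the shard assignment reported for each cluster:

# Print the cluster-to-shard assignments logged by every Application Controller pod
for pod in $(oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller -o name); do
  echo "== ${pod} =="
  oc logs "${pod}" -n openshift-gitops | grep "processed by shard"
done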
Installing on Nutanix | Installing on Nutanix OpenShift Container Platform 4.18 Installing OpenShift Container Platform on Nutanix Red Hat OpenShift Documentation Team | [
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"oc edit infrastructures.config.openshift.io cluster",
"spec: cloudConfig: key: config name: cloud-provider-config # platformSpec: nutanix: failureDomains: - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid>",
"oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: cluster namespace: openshift-machine-api spec: template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: Nutanix nutanix: - name: <failure_domain_name_1> - name: <failure_domain_name_2> - name: <failure_domain_name_3>",
"oc describe infrastructures.config.openshift.io cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <machine_set_name_1> 1 1 1 1 55m <machine_set_name_2> 1 1 1 1 55m",
"oc edit machineset <machine_set_name_1> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1>",
"oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1>",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running AHV Unnamed Development-STS 4h <machine_name_original_2> Running AHV Unnamed Development-STS 4h",
"oc annotate machine/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=<twice_the_number_of_replicas> \\ 1 machineset <machine_set_name_1> -n openshift-machine-api",
"oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1>",
"oc scale --replicas=<original_number_of_replicas> \\ 1 machineset <machine_set_name_1> -n openshift-machine-api",
"oc describe infrastructures.config.openshift.io cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m",
"oc get machineset <original_machine_set_name_1> -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml",
"oc get machineset <original_machine_set_name_1> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <new_machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1>",
"oc create -f <new_machine_set_name_1>.yaml",
"oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1>",
"NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned AHV Unnamed Development-STS 25s <machine_from_new_2> Provisioning AHV Unnamed Development-STS 25s",
"oc delete machineset <original_machine_set_name_1> -n openshift-machine-api",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <new_machine_set_name_2> 1 1 1 1 4m12s",
"oc get -n openshift-machine-api machines",
"NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 5m41s <machine_from_new_2> Running AHV Unnamed Development-STS 5m41s <machine_from_original_1> Deleting AHV Unnamed Development-STS 4h <machine_from_original_2> Deleting AHV Unnamed Development-STS 4h",
"NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 6m30s <machine_from_new_2> Running AHV Unnamed Development-STS 6m30s",
"oc describe machine <machine_from_new_1> -n openshift-machine-api",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIPs: - 10.40.142.7 12 defaultMachinePlatform: bootType: Legacy categories: 13 - key: <category_key_name> value: <category_value> project: 14 type: name name: <project_name> ingressVIPs: - 10.40.142.8 15 prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23",
"apiVersion: v1 baseDomain: example.com compute: platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid>",
"apiVersion: v1 baseDomain: example.com compute: platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3",
"apiVersion: v1 baseDomain: example.com compute: controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1",
"openshift-install create manifests --dir <installation_directory> 1",
"cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests",
"ls ./<installation_directory>/manifests",
"cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml",
"cd <path_to_installation_directory>/manifests",
"apiVersion: v1 kind: ConfigMap metadata: name: cloud-conf namespace: openshift-cloud-controller-manager data: cloud.conf: \"{ \\\"prismCentral\\\": { \\\"address\\\": \\\"<prism_central_FQDN/IP>\\\", 1 \\\"port\\\": 9440, \\\"credentialRef\\\": { \\\"kind\\\": \\\"Secret\\\", \\\"name\\\": \\\"nutanix-credentials\\\", \\\"namespace\\\": \\\"openshift-cloud-controller-manager\\\" } }, \\\"topologyDiscovery\\\": { \\\"type\\\": \\\"Prism\\\", \\\"topologyCategories\\\": null }, \\\"enableCustomLabeling\\\": true }\"",
"spec: cloudConfig: key: config name: cloud-provider-config",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"platform: nutanix: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install coreos print-stream-json",
"\"nutanix\": { \"release\": \"411.86.202210041459-0\", \"formats\": { \"qcow2\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2\", \"sha256\": \"42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b\"",
"platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: example.com compute: platform: nutanix: failureDomains: - name: <failure_domain_name> prismElement: name: <prism_element_name> uuid: <prism_element_uuid> subnetUUIDs: - <network_uuid>",
"apiVersion: v1 baseDomain: example.com compute: platform: nutanix: defaultMachinePlatform: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3",
"apiVersion: v1 baseDomain: example.com compute: controlPlane: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2 - failure-domain-3 compute: platform: nutanix: failureDomains: - failure-domain-1 - failure-domain-2",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1",
"openshift-install create manifests --dir <installation_directory> 1",
"cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests",
"ls ./<installation_directory>/manifests",
"cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc apply -f ./oc-mirror-workspace/results-<id>/",
"oc get imagecontentsourcepolicy",
"oc get catalogsource --all-namespaces",
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"networking: ovnKubernetesConfig: ipv4: internalJoinSubnet:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"compute: platform: nutanix: categories: key:",
"compute: platform: nutanix: categories: value:",
"compute: platform: nutanix: failureDomains:",
"compute: platform: nutanix: gpus: type:",
"compute: platform: nutanix: gpus: name:",
"compute: platform: nutanix: gpus: deviceID:",
"compute: platform: nutanix: project: type:",
"compute: platform: nutanix: project: name: or uuid:",
"compute: platform: nutanix: bootType:",
"compute: platform: nutanix: dataDisks: dataSourceImage: name:",
"compute: platform: nutanix: dataDisks: dataSourceImage: referenceName:",
"compute: platform: nutanix: dataDisks: dataSourceImage: uuid:",
"compute: platform: nutanix: dataDisks: deviceProperties: adapterType:",
"compute: platform: nutanix: dataDisks: deviceProperties: deviceIndex:",
"compute: platform: nutanix: dataDisks: deviceProperties: deviceType:",
"compute: platform: nutanix: dataDisks: diskSize:",
"compute: platform: nutanix: dataDisks: storageConfig: diskMode:",
"compute: platform: nutanix: dataDisks: storageConfig: storageContainer: name:",
"compute: platform: nutanix: dataDisks: storageConfig: storageContainer: referenceName:",
"compute: platform: nutanix: dataDisks: storageConfig: storageContainer: uuid:",
"controlPlane: platform: nutanix: categories: key:",
"controlPlane: platform: nutanix: categories: value:",
"controlPlane: platform: nutanix: failureDomains:",
"controlPlane: platform: nutanix: project: type:",
"controlPlane: platform: nutanix: project: name: or uuid:",
"platform: nutanix: defaultMachinePlatform: categories: key:",
"platform: nutanix: defaultMachinePlatform: categories: value:",
"platform: nutanix: defaultMachinePlatform: failureDomains:",
"platform: nutanix: defaultMachinePlatform: project: type:",
"platform: nutanix: defaultMachinePlatform: project: name: or uuid:",
"platform: nutanix: defaultMachinePlatform: bootType:",
"platform: nutanix: apiVIP:",
"platform: nutanix: failureDomains: - name: prismElement: name: uuid: subnetUUIDs: -",
"platform: nutanix: ingressVIP:",
"platform: nutanix: prismCentral: endpoint: address:",
"platform: nutanix: prismCentral: endpoint: port:",
"platform: nutanix: prismCentral: password:",
"platform: nutanix: preloadedOSImageName:",
"platform: nutanix: prismCentral: username:",
"platform: nutanix: prismElements: endpoint: address:",
"platform: nutanix: prismElements: endpoint: port:",
"platform: nutanix: prismElements: uuid:",
"platform: nutanix: subnetUUIDs:",
"platform: nutanix: clusterOSImage:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/installing_on_nutanix/index |
1.3. Running LVM in a Cluster | 1.3. Running LVM in a Cluster The Clustered Logical Volume Manager (CLVM) is a set of clustering extensions to LVM. These extensions allow a cluster of computers to manage shared storage (for example, on a SAN) using LVM. The clvmd daemon is the key clustering extension to LVM. The clvmd daemon runs in each cluster computer and distributes LVM metadata updates in a cluster, presenting each cluster computer with the same view of the logical volumes. Figure 1.2, "CLVM Overview" shows a CLVM overview in a Red Hat cluster. Figure 1.2. CLVM Overview Logical volumes created with CLVM on shared storage are visible to all computers that have access to the shared storage. CLVM allows a user to configure logical volumes on shared storage by locking access to physical storage while a logical volume is being configured. CLVM uses the locking services provided by the high availability symmetric infrastructure. Note Shared storage for use in Red Hat Cluster Suite requires that you be running the cluster logical volume manager daemon ( clvmd ) or the High Availability Logical Volume Management agents (HA-LVM). If you are not able to use either the clvmd daemon or HA-LVM for operational reasons or because you do not have the correct entitlements, you must not use single-instance LVM on the shared disk as this may result in data corruption. If you have any concerns, please contact your Red Hat service representative. Note CLVM requires changes to the lvm.conf file for cluster-wide locking. For information on configuring the lvm.conf file to support CLVM, see Section 3.1, "Creating LVM Volumes in a Cluster" . You configure LVM volumes for use in a cluster with the standard set of LVM commands or the LVM graphical user interface, as described in Chapter 4, LVM Administration with CLI Commands and Chapter 7, LVM Administration with the LVM GUI . For information on installing LVM in a Red Hat Cluster, see Configuring and Managing a Red Hat Cluster . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/LVM_Cluster_Overview
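As a minimal sketch of the lvm.conf change and clustered volume group creation referred to above (assuming the cluster infrastructure and clvmd are already installed, the default /etc/lvm/lvm.conf location, and illustrative device and volume names):

# Enable cluster-wide locking in /etc/lvm/lvm.conf (sets locking_type = 3)
/usr/sbin/lvmconf --enable-cluster

# Start the clustered LVM daemon on every node in the cluster
service clvmd start

# Create a clustered volume group and a logical volume on the shared storage
pvcreate /dev/sdb1
vgcreate -c y vg_cluster /dev/sdb1
lvcreate -L 10G -n lv_data vg_cluster

With the -c y flag the volume group is marked as clustered, so clvmd coordinates metadata updates for it across all nodes.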
4.5. Enhancements to NUMA in Red Hat Enterprise Linux 6 | 4.5. Enhancements to NUMA in Red Hat Enterprise Linux 6 Red Hat Enterprise Linux 6 includes a number of enhancements to capitalize on the full potential of today's highly scalable hardware. This section gives a high-level overview of the most important NUMA-related performance enhancements provided by Red Hat Enterprise Linux 6. 4.5.1. Bare-metal and Scalability Optimizations 4.5.1.1. Enhancements in topology-awareness The following enhancements allow Red Hat Enterprise Linux to detect low-level hardware and architecture details, improving its ability to automatically optimize processing on your system. enhanced topology detection This allows the operating system to detect low-level hardware details (such as logical CPUs, hyper threads, cores, sockets, NUMA nodes and access times between nodes) at boot time, and optimize processing on your system. completely fair scheduler This new scheduling mode ensures that runtime is shared evenly between eligible processes. Combining this with topology detection allows processes to be scheduled onto CPUs within the same socket to avoid the need for expensive remote memory access, and ensure that cache content is preserved wherever possible. malloc malloc is now optimized to ensure that the regions of memory that are allocated to a process are as physically close as possible to the core on which the process is executing. This increases memory access speeds. skbuff I/O buffer allocation Similarly to malloc , this is now optimized to use memory that is physically close to the CPU handling I/O operations such as device interrupts. device interrupt affinity Information recorded by device drivers about which CPU handles which interrupts can be used to restrict interrupt handling to CPUs within the same physical socket, preserving cache affinity and limiting high-volume cross-socket communication. 4.5.1.2. Enhancements in Multi-processor Synchronization Coordinating tasks between multiple processors requires frequent, time-consuming operations to ensure that processes executing in parallel do not compromise data integrity. Red Hat Enterprise Linux includes the following enhancements to improve performance in this area: Read-Copy-Update (RCU) locks Typically, 90% of locks are acquired for read-only purposes. RCU locking removes the need to obtain an exclusive-access lock when the data being accessed is not being modified. This locking mode is now used in page cache memory allocation: locking is now used only for allocation or deallocation operations. per-CPU and per-socket algorithms Many algorithms have been updated to perform lock coordination among cooperating CPUs on the same socket to allow for more fine-grained locking. Numerous global spinlocks have been replaced with per-socket locking methods, and updated memory allocator zones and related memory page lists allow memory allocation logic to traverse a more efficient subset of the memory mapping data structures when performing allocation or deallocation operations. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-cpu-numa-enhancements |
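As an illustrative way to inspect the topology detection described above (these commands are not part of the enhancement list itself and assume the numactl package is installed):

# Show the NUMA nodes, the CPUs assigned to each node, and inter-node distances
numactl --hardware

# Show per-node allocation counters such as numa_hit, numa_miss, and numa_foreign
numastat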
Chapter 1. Introduction | Chapter 1. Introduction 1.1. Virtualized and Non-Virtualized Environments A virtualized environment presents opportunities for both the discovery of new attack vectors and the refinement of existing exploits that may not previously have presented value to an attacker. Therefore, it is important to take steps to ensure the security of both the physical hosts and the guests running on them when creating and maintaining virtual machines. Non-Virtualized Environment In a non-virtualized environment, hosts are separated from each other physically and each host has a self-contained environment, which consists of services such as a web server, or a DNS server. These services communicate directly to their own user space, host kernel and physical host, offering their services directly to the network. Figure 1.1. Non-Virtualized Environment Virtualized Environment In a virtualized environment, several operating systems can be housed (as guest virtual machines) within a single host kernel and physical host. Figure 1.2. Virtualized Environment When services are not virtualized, machines are physically separated. Any exploit is, therefore, usually contained to the affected machine, with the exception of network attacks. When services are grouped together in a virtualized environment, extra vulnerabilities emerge in the system. If a security flaw exists in the hypervisor that can be exploited by a guest instance, this guest may be able to attack the host, as well as other guests running on that host. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_security_guide/chap-virtualization_security_guide-introduction |
18.7. Creating Advanced RAID Devices | 18.7. Creating Advanced RAID Devices In some cases, you may wish to install the operating system on an array that can't be created after the installation completes. Usually, this means setting up the /boot or root file system arrays on a complex RAID device; in such cases, you may need to use array options that are not supported by Anaconda . To work around this, perform the following procedure: Procedure 18.1. Creating Advanced RAID Devices Insert the install disk. During the initial boot up, select Rescue Mode instead of Install or Upgrade . When the system fully boots into Rescue mode , the user will be presented with a command line terminal. From this terminal, use parted to create RAID partitions on the target hard drives. Then, use mdadm to manually create raid arrays from those partitions using any and all settings and options available. For more information on how to do these, see Chapter 13, Partitions , man parted , and man mdadm . Once the arrays are created, you can optionally create file systems on the arrays as well. Reboot the computer and this time select Install or Upgrade to install as normal. As Anaconda searches the disks in the system, it will find the pre-existing RAID devices. When asked about how to use the disks in the system, select Custom Layout and click . In the device listing, the pre-existing MD RAID devices will be listed. Select a RAID device, click Edit and configure its mount point and (optionally) the type of file system it should use (if you did not create one earlier) then click Done . Anaconda will perform the install to this pre-existing RAID device, preserving the custom options you selected when you created it in Rescue Mode . Note The limited Rescue Mode of the installer does not include man pages. Both the man mdadm and man md contain useful information for creating custom RAID arrays, and may be needed throughout the workaround. As such, it can be helpful to either have access to a machine with these man pages present, or to print them out prior to booting into Rescue Mode and creating your custom arrays. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/s1-raid-advanced-raid-create |
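A hedged sketch of step 2 of the procedure follows; the disk names, partition size, RAID level, and file system are examples only and must be adapted to your hardware (metadata version 1.0 is a common choice for /boot arrays because it keeps the superblock at the end of the device):

# Create a RAID partition on each target disk with parted
parted -s /dev/sda mklabel msdos mkpart primary 1MiB 500GiB set 1 raid on
parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 500GiB set 1 raid on

# Assemble the partitions into a RAID1 array with explicit options
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1

# Optionally create a file system on the new array before rebooting into the installer
mkfs.xfs /dev/md0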
Chapter 4. Validate the Completed Restore | Chapter 4. Validate the Completed Restore Use the following commands to perform a health check of your newly restored environment: 4.1. Check Identity Service (Keystone) Operation This step validates Identity Service operations by querying for a list of users. When run from the controller, the output of this command should include a list of users created in your environment. This action demonstrates that keystone is running and successfully authenticating user requests. For example: | [
"source stackrc openstack user list",
"openstack user list +----------------------------------+------------+---------+----------------------+ | id | name | enabled | email | +----------------------------------+------------+---------+----------------------+ | 9e47bb53bb40453094e32eccce996828 | admin | True | root@localhost | | 9fe2466f88cc4fa0ba69e59b47898829 | ceilometer | True | ceilometer@localhost | | 7a40d944e55d422fa4e85daf47e47c42 | cinder | True | cinder@localhost | | 3d2ed97538064f258f67c98d1912132e | demo | True | | | 756e73a5115d4e9a947d8aadc6f5ac22 | glance | True | glance@localhost | | f0d1fcee8f9b4da39556b78b72fdafb1 | neutron | True | neutron@localhost | | e9025f3faeee4d6bb7a057523576ea19 | nova | True | nova@localhost | | 65c60b1278a0498980b2dc46c7dcf4b7 | swift | True | swift@localhost | +----------------------------------+------------+---------+----------------------+"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/back_up_and_restore_the_director_undercloud/validate_the_completed_restore |
Chapter 2. Installing the Streams for Apache Kafka operator from the OperatorHub | Chapter 2. Installing the Streams for Apache Kafka operator from the OperatorHub You can install and subscribe to the Streams for Apache Kafka operator using the OperatorHub in the OpenShift Container Platform web console. This procedure describes how to create a project and install the Streams for Apache Kafka operator to that project. A project is a representation of a namespace. For manageability, it is a good practice to use namespaces to separate functions. Warning Make sure you use the appropriate update channel. If you are on a supported version of OpenShift, installing Streams for Apache Kafka from the default stable channel is generally safe. However, we do not recommend enabling automatic updates on the stable channel. An automatic upgrade will skip any necessary steps prior to upgrade. Use automatic upgrades only on version-specific channels. Prerequisites Access to an OpenShift Container Platform web console using an account with cluster-admin or strimzi-admin permissions. Procedure Navigate in the OpenShift web console to the Home > Projects page and create a project (namespace) for the installation. We use a project named streams-kafka in this example. Navigate to the Operators > OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Streams for Apache Kafka operator. The operator is located in the Streaming & Messaging category. Click Streams for Apache Kafka to display the operator information. Read the information about the operator and click Install . On the Install Operator page, choose from the following installation and update options: Update Channel : Choose the update channel for the operator. The (default) stable channel contains all the latest updates and releases, including major, minor, and micro releases, which are assumed to be well tested and stable. An amq-streams- X .x channel contains the minor and micro release updates for a major release, where X is the major release version number. An amq-streams- X.Y .x channel contains the micro release updates for a minor release, where X is the major release version number and Y is the minor release version number. Installation Mode : Choose the project you created to install the operator on a specific namespace. You can install the Streams for Apache Kafka operator to all namespaces in the cluster (the default option) or a specific namespace. We recommend that you dedicate a specific namespace to the Kafka cluster and other Streams for Apache Kafka components. Update approval : By default, the Streams for Apache Kafka operator is automatically upgraded to the latest Streams for Apache Kafka version by the Operator Lifecycle Manager (OLM). Optionally, select Manual if you want to manually approve future upgrades. For more information on operators, see the OpenShift documentation . Click Install to install the operator to your selected namespace. The Streams for Apache Kafka operator deploys the Cluster Operator, CRDs, and role-based access control (RBAC) resources to the selected namespace. After the operator is ready for use, navigate to Operators > Installed Operators to verify that the operator has installed to the selected namespace. The status will show as Succeeded . You can now use the Streams for Apache Kafka operator to deploy Kafka components, starting with a Kafka cluster. 
Note If you navigate to Workloads > Deployments , you can see the deployment details for the Cluster Operator and Entity Operator. The name of the Cluster Operator includes a version number: amq-streams-cluster-operator-<version> . The name is different when deploying the Cluster Operator using the Streams for Apache Kafka installation artifacts. In this case, the name is strimzi-cluster-operator . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/getting_started_with_streams_for_apache_kafka_on_openshift/proc-deploying-cluster-operator-hub-str |
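For reference, subscribing through the OperatorHub as described above results in OLM objects roughly like the following; this is only a sketch, not the official procedure (the package name amq-streams, the redhat-operators catalog source, and the streams-kafka project are assumptions drawn from the example), and installing into a single namespace also needs an OperatorGroup targeting it:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: streams-kafka-group
  namespace: streams-kafka
spec:
  targetNamespaces:
    - streams-kafka
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: streams-kafka
spec:
  channel: stable                 # see the channel guidance in the warning above
  installPlanApproval: Manual     # the warning above advises against automatic updates on the stable channel
  name: amq-streams               # assumed OLM package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace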
Chapter 27. Managing containers by using the podman RHEL System Role | Chapter 27. Managing containers by using the podman RHEL System Role With the podman RHEL System Role, you can manage Podman configuration, containers, and systemd services which run Podman containers. 27.1. The podman RHEL System Role You can use the podman RHEL System Role to manage Podman configuration, containers, and systemd services which run Podman containers. Additional resources Installing RHEL System Roles For details about the parameters used in podman and additional information about the podman RHEL System Role, see the /usr/share/ansible/roles/rhel-system-roles.podman/README.md file. 27.2. Variables for the podman RHEL System Role The parameters used for the podman RHEL System Role are: Variable Description podman_kube_spec Describes a podman pod and corresponding systemd unit to manage. state : (default: created ) - denotes an operation to be executed with systemd services and pods: created : create the pods and systemd service, but do not run them started : create the pods and systemd services and start them absent : remove the pods and systemd services run_as_user : (default: podman_run_as_user ) - a per-pod user. If not specified, root is used. Note The user must already exist. run_as_group (default: podman_run_as_group ) - a per-pod group. If not specified, root is used. Note The group must already exist. systemd_unit_scope (default: podman_systemd_unit_scope ) - scope to use for the systemd unit. If not specified, system is used for root containers and user for user containers. kube_file_src - name of a Kubernetes YAML file on the controller node which will be copied to kube_file on the managed node Note Do not specify the kube_file_src variable if you specify kube_file_content variable. The kube_file_content takes precedence over kube_file_src . kube_file_content - string in Kubernetes YAML format or a dict in Kubernetes YAML format. It specifies the contents of kube_file on the managed node. Note Do not specify the kube_file_content variable if you specify kube_file_src variable. The kube_file_content takes precedence over kube_file_src . kube_file - a name of a file on the managed node that contains the Kubernetes specification of the container or pod. You typically do not have to specify the kube_file variable unless you need to copy the kube_file file to the managed node outside of the role. If you specify either kube_file_src or kube_file_content , you do not have to specify this. Note It is highly recommended to omit kube_file and instead specify either kube_file_src or kube_file_content and let the role manage the file path and name. The file basename will be the metadata.name value from the K8s yaml, with a .yml suffix appended to it. The directory is /etc/containers/ansible-kubernetes.d for system services. The directory is USDHOME/.config/containers/ansible-kubernetes.d for user services. This will be copied to the file /etc/containers/ansible-kubernetes.d/ <application_name> .yml on the managed node. podman_create_host_directories If true, the role ensures host directories specified in host mounts in volumes.hostPath specifications in the Kubernetes YAML given in podman_kube_specs . The default value is false. Note Directories must be specified as absolute paths (for root containers), or paths relative to the home directory (for non-root containers), in order for the role to manage them. Anything else is ignored. The role applies its default ownership or permissions to the directories. 
If you need to set ownership or permissions, see podman_host_directories . podman_host_directories It is a dict. If using podman_create_host_directories to tell the role to create host directories for volume mounts, and you need to specify permissions or ownership that apply to these created host directories, use podman_host_directories . Each key is the absolute path of the host directory to manage. The value is in the format of the parameters to the file module. If you do not specify a value, the role will use its built-in default values. If you want to specify a value to be used for all host directories, use the special key DEFAULT . podman_firewall It is a list of dict. Specifies ports that you want the role to manage in the firewall. This uses the same format as used by the firewall RHEL System Role. podman_selinux_ports It is a list of dict. Specifies ports that you want the role to manage the SELinux policy for ports used by the role. This uses the same format as used by the selinux RHEL System Role. podman_run_as_user Specifies the name of the user to use for all rootless containers. You can also specify per-container username with run_as_user in podman_kube_specs . Note The user must already exist. podman_run_as_group Specifies the name of the group to use for all rootless containers. You can also specify a per-container group name with run_as_group in podman_kube_specs . Note The group must already exist. podman_systemd_unit_scope Defines the systemd scope to use by default for all systemd units. You can also specify per-container scope with systemd_unit_scope in podman_kube_specs . By default, rootless containers use user and root containers use system . podman_containers_conf Defines the containers.conf(5) settings as a dict. The setting is provided in a drop-in file in the containers.conf.d directory. If running as root (see podman_run_as_user ), the system settings are managed. Otherwise, the user settings are managed. See the containers.conf man page for the directory locations. podman_registries_conf Defines the containers-registries.conf(5) settings as a dict. The setting is provided in a drop-in file in the registries.conf.d directory. If running as root (see podman_run_as_user ), the system settings are managed. Otherwise, the user settings are managed. See the registries.conf man page for the directory locations. podman_storage_conf Defines the containers-storage.conf(5) settings as a dict. If running as root (see podman_run_as_user ), the system settings are managed. Otherwise, the user settings are managed. See the storage.conf man page for the directory locations. podman_policy_json Defines the containers-policy.conf(5) settings as a dict. If running as root (see podman_run_as_user ), the system settings are managed. Otherwise, the user settings are managed. See the policy.json man page for the directory locations. Additional resources Installing RHEL System Roles For details about the parameters used in podman and additional information about the podman RHEL System Role, see the /usr/share/ansible/roles/rhel-system-roles.podman/README.md file. 27.3. Additional resources For details about the parameters used in podman and additional information about the podman RHEL System Role, see the /usr/share/ansible/roles/rhel-system-roles.podman/README.md file. For details about the ansible-playbook command, see the ansible-playbook(1) man page. 
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/managing-containers-by-using-the-podman-rhel-system-role_automating-system-administration-by-using-rhel-system-roles |
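A short playbook sketch that ties several of these variables together is shown below; it assumes the rhel-system-roles package is installed on the control node, that the webapp user and group already exist on the managed node, and that the host name, container image, and ports are placeholders rather than values required by the role:

- name: Run a rootless web container with the podman role
  hosts: managed-node-01.example.com
  vars:
    podman_create_host_directories: true
    podman_firewall:
      - port: 8080/tcp
        state: enabled
    podman_kube_specs:
      - state: started
        run_as_user: webapp          # user must already exist on the managed node
        run_as_group: webapp         # group must already exist on the managed node
        kube_file_content:
          apiVersion: v1
          kind: Pod
          metadata:
            name: hello-web
          spec:
            containers:
              - name: web
                image: registry.access.redhat.com/ubi8/httpd-24
                ports:
                  - containerPort: 8080
                    hostPort: 8080
  roles:
    - rhel-system-roles.podman

Because state: started is used, the role creates the pod and its systemd unit and starts them; the podman_firewall entry uses the same format as the firewall RHEL System Role, as noted above.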
8.10. Security Policy | 8.10. Security Policy The Security Policy spoke allows you to configure the installed system following restrictions and recommendations ( compliance policies ) defined by the Security Content Automation Protocol (SCAP) standard. This functionality is provided by an add-on which has been enabled by default since Red Hat Enterprise Linux 7.2. When enabled, the packages necessary to provide this functionality will automatically be installed. However, by default, no policies are enforced, meaning that no checks are performed during or after installation unless specifically configured. See the information about scanning the system for configuration compliance and vulnerabilities in the Red Hat Enterprise Linux 7 Security Guide , which includes background information, practical examples, and additional resources. Important Applying a security policy is not necessary on all systems. This screen should only be used when a specific policy is mandated by your organization rules or government regulations. If you apply a security policy to the system, it will be installed using restrictions and recommendations defined in the selected profile. The openscap-scanner package will also be added to your package selection, providing a preinstalled tool for compliance and vulnerability scanning. After the installation finishes, the system will be automatically scanned to verify compliance. The results of this scan will be saved to the /root/openscap_data directory on the installed system. Pre-defined policies which are available in this screen are provided by SCAP Security Guide . See the OpenSCAP Portal for links to detailed information about each available profile. You can also load additional profiles from an HTTP, HTTPS or FTP server. Figure 8.8. Security policy selection screen To configure the use of security policies on the system, first enable configuration by setting the Apply security policy switch to ON . If the switch is in the OFF position, controls in the rest of this screen have no effect. After enabling security policy configuration using the switch, select one of the profiles listed in the top window of the screen, and click the Select profile below. When a profile is selected, a green check mark will appear on the right side, and the bottom field will display whether any changes will be made before beginning the installation. Note None of the profiles available by default perform any changes before the installation begins. However, loading a custom profile as described below can require some pre-installation actions. To use a custom profile, click the Change content button in the top left corner. This will open another screen where you can enter an URL of a valid security content. To go back to the default security content selection screen, click Use SCAP Security Guide in the top left corner. Custom profiles can be loaded from an HTTP , HTTPS or FTP server. Use the full address of the content, including the protocol (such as http:// ). A network connection must be active (enabled in Section 8.12, "Network & Hostname" ) before you can load a custom profile. The content type will be detected automatically by the installer. After you select a profile, or if you want to leave the screen, click Done in the top left corner to return to Section 8.6, "The Installation Summary Screen" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-security-policy-x86 |
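After the installation, the compliance scan can be repeated manually with the preinstalled openscap-scanner tool; a hedged example follows (the data stream path assumes the scap-security-guide package is installed, and the profile ID is illustrative - list the available profiles first and substitute the one you applied):

# List the profiles available in the SCAP Security Guide data stream
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

# Re-run the compliance scan against a chosen profile and keep the results
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_pci-dss \
    --results /root/openscap_data/post-install-results.xml \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml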
1.2. SystemTap Capabilities | 1.2. SystemTap Capabilities SystemTap was originally developed to provide functionality for Red Hat Enterprise Linux similar to Linux probing tools such as dprobes and the Linux Trace Toolkit. SystemTap aims to supplement the existing suite of Linux monitoring tools by providing users with the infrastructure to track kernel activity. In addition, SystemTap combines this capability with two attributes: Flexibility: SystemTap's framework allows users to develop simple scripts for investigating and monitoring a wide variety of kernel functions, system calls, and other events that occur in kernel space. With this, SystemTap is not so much a tool as it is a system that allows you to develop your own kernel-specific forensic and monitoring tools. Ease-of-Use: as mentioned earlier, SystemTap allows users to probe kernel-space events without having to resort to the lengthy instrument, recompile, install, and reboot the kernel process. Most of the SystemTap scripts enumerated in Chapter 4, Useful SystemTap Scripts demonstrate system forensics and monitoring capabilities not natively available with other similar tools (such as top , OProfile , or ps ). These scripts are provided to give readers extensive examples of the application of SystemTap, which in turn will educate them further on the capabilities they can employ when writing their own SystemTap scripts. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_beginners_guide/intro-systemtap-vs-others |
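To make the flexibility and ease-of-use points concrete, a single hedged one-liner in the style of the Chapter 4 scripts is enough to watch a kernel-space event live, with no instrumented kernel rebuild or reboot (the probe point and output format are illustrative):

# Print each open() system call with the calling process name, PID, and arguments,
# until the script is stopped with Ctrl+C
stap -e 'probe syscall.open { printf("%s(%d) open(%s)\n", execname(), pid(), argstr) }'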
Chapter 13. Configuring burst and QPS for net-kourier | Chapter 13. Configuring burst and QPS for net-kourier The queries per second (QPS) and burst values determine the frequency of requests or API calls to the API server. 13.1. Configuring burst and QPS values for net-kourier The queries per second (QPS) value determines the number of client requests or API calls that are sent to the API server. The burst value determines how many requests from the client can be stored for processing. Requests exceeding this buffer are dropped. This is helpful for controllers that are bursty and do not spread their requests uniformly in time. When the net-kourier-controller restarts, it parses all ingress resources deployed on the cluster, which leads to a significant number of API calls. Due to this, the net-kourier-controller can take a long time to start. You can adjust the QPS and burst values for the net-kourier-controller in the KnativeServing CR: KnativeServing CR example apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: workloads: - name: net-kourier-controller env: - container: controller envVars: - name: KUBE_API_BURST value: "200" 1 - name: KUBE_API_QPS value: "200" 2 1 The burst capacity of communication between the net-kourier-controller and the API server. The default value is 200. 2 The QPS rate of communication between the net-kourier-controller and the API server. The default value is 200. | [
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: workloads: - name: net-kourier-controller env: - container: controller envVars: - name: KUBE_API_BURST value: \"200\" 1 - name: KUBE_API_QPS value: \"200\" 2"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/serving/kube-burst-qps-net-kourier |
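To confirm that the values from the KnativeServing CR reached the controller deployment, a quick hedged check could look like the following (the namespace that hosts net-kourier-controller varies between OpenShift Serverless releases, so it is looked up first):

# Find the namespace of the controller deployment
oc get deployment net-kourier-controller --all-namespaces

# List the environment variables of its controller container
oc -n <namespace_from_previous_command> set env deployment/net-kourier-controller --list --containers=controller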
Monitoring OpenShift Data Foundation | Monitoring OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.14 View cluster health, metrics, or set alerts. Red Hat Storage Documentation Team Abstract Read this document for instructions on monitoring Red Hat OpenShift Data Foundation using the Block and File, and Object dashboards. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/monitoring_openshift_data_foundation/index |
3.3. Install the Maven Repository | 3.3. Install the Maven Repository There are three ways to install the required repositories: On your local file system ( Section 3.3.1, "Local File System Repository Installation" ). On Apache Web Server. With a Maven repository manager ( Section 3.3.2, "Maven Repository Manager Installation" ). Use the option that best suits your environment. 3.3.1. Local File System Repository Installation This option is best suited for initial testing in a small team. Follow the outlined procedure to extract the Red Hat JBoss Data Grid and JBoss Enterprise Application Platform Maven repositories to a directory in your local file system: Procedure 3.1. Local File System Repository Installation (JBoss Data Grid) Log Into the Customer Portal In a browser window, navigate to the Customer Portal page ( https://access.redhat.com/home ) and log in. Download the JBoss Data Grid Repository File Download the jboss-datagrid- {VERSION} -maven-repository.zip file from the Red Hat Customer Portal. Unzip the file to a directory on your local file system (for example USDJDG_HOME/projects/maven-repositories/ ). 3.3.2. Maven Repository Manager Installation This option is ideal if you are already using a repository manager. The Red Hat JBoss Data Grid and JBoss Enterprise Application Platform repositories can be installed with a Maven repository manager by following that manager's documentation. Examples of such repository managers are: Apache Archiva: http://archiva.apache.org/ JFrog Artifactory: http://www.jfrog.com/products.php Sonatype Nexus: http://nexus.sonatype.org/ For details, see Section B.1, "Install the JBoss Enterprise Application Platform Repository Using Nexus" . | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-Install_the_Maven_Repository
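Once the repository archive is extracted, Maven still needs to be pointed at the directory; a hedged settings.xml fragment follows (the file:/// path mirrors the example directory from the procedure, and the extracted folder name can differ by version, so adjust both to your layout):

<!-- fragment of ~/.m2/settings.xml -->
<profiles>
  <profile>
    <id>jboss-datagrid-repository</id>
    <repositories>
      <repository>
        <id>jboss-datagrid-repository</id>
        <url>file:///path/to/projects/maven-repositories/jboss-datagrid-maven-repository</url>
        <releases>
          <enabled>true</enabled>
        </releases>
        <snapshots>
          <enabled>false</enabled>
        </snapshots>
      </repository>
    </repositories>
  </profile>
</profiles>
<activeProfiles>
  <activeProfile>jboss-datagrid-repository</activeProfile>
</activeProfiles>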
Scaling storage | Scaling storage Red Hat OpenShift Data Foundation 4.9 Instructions for scaling operations in OpenShift Data Foundation Red Hat Storage Documentation Team Abstract This document explains scaling options for Red Hat OpenShift Data Foundation. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/scaling_storage/index |
5.60. espeak | 5.60. espeak 5.60.1. RHBA-2012:1118 - espeak bug fix update Updated espeak packages that fix one bug are now available for Red Hat Enterprise Linux 6. The espeak packages contain a software speech synthesizer for English and other languages. eSpeak uses a "formant synthesis" method, which allows many languages to be provided in a small size. Bug Fix BZ# 789997 Previously, eSpeak manipulated the system sound volume. As a consequence, eSpeak could set the sound volume to maximum regardless of the amplitude specified. The sound volume management code has been removed from eSpeak, and now only PulseAudio manages the sound volume. All users of espeak are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/espeak |
6.2. Resource Properties | 6.2. Resource Properties The properties that you define for a resource tell the cluster which script to use for the resource, where to find that script and what standards it conforms to. Table 6.1, "Resource Properties" describes these properties. Table 6.1. Resource Properties Field Description resource_id Your name for the resource standard The standard the script conforms to. Allowed values: ocf , service , upstart , systemd , lsb , stonith type The name of the Resource Agent you wish to use, for example IPaddr or Filesystem provider The OCF spec allows multiple vendors to supply the same resource agent. Most of the agents shipped by Red Hat use heartbeat as the provider. Table 6.2, "Commands to Display Resource Properties" summarizes the commands that display the available resource properties. Table 6.2. Commands to Display Resource Properties pcs Display Command Output pcs resource list Displays a list of all available resources. pcs resource standards Displays a list of available resource agent standards. pcs resource providers Displays a list of available resource agent providers. pcs resource list string Displays a list of available resources filtered by the specified string. You can use this command to display resources filtered by the name of a standard, a provider, or a type. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-resourceprops-HAAR
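Putting the properties together, a hedged example of creating a resource with an explicit standard:provider:type specification follows (the resource ID, IP address, netmask, and monitor interval are placeholders):

# Check that the agent exists and see its full specification
pcs resource list IPaddr2

# Create the resource; ocf:heartbeat:IPaddr2 is standard:provider:type
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s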
Chapter 3. Distributed tracing platform (Tempo) | Chapter 3. Distributed tracing platform (Tempo) 3.1. Installing Installing the distributed tracing platform (Tempo) requires the Tempo Operator and choosing which type of deployment is best for your use case: For microservices mode, deploy a TempoStack instance in a dedicated OpenShift project. For monolithic mode, deploy a TempoMonolithic instance in a dedicated OpenShift project. Important Using object storage requires setting up a supported object store and creating a secret for the object store credentials before deploying a TempoStack or TempoMonolithic instance. 3.1.1. Installing the Tempo Operator You can install the Tempo Operator by using the web console or the command line. 3.1.1.1. Installing the Tempo Operator by using the web console You can install the Tempo Operator from the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . For more information, see "Object storage setup". Warning Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo). Procedure Go to Operators OperatorHub and search for Tempo Operator . Select the Tempo Operator that is provided by Red Hat . Important The following selections are the default presets for this Operator: Update channel stable Installation mode All namespaces on the cluster Installed Namespace openshift-tempo-operator Update approval Automatic Select the Enable Operator recommended cluster monitoring on this Namespace checkbox. Select Install Install View Operator . Verification In the Details tab of the page of the installed Operator, under ClusterServiceVersion details , verify that the installation Status is Succeeded . 3.1.1.2. Installing the Tempo Operator by using the CLI You can install the Tempo Operator from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run oc login : USD oc login --username=<your_username> You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . For more information, see "Object storage setup". Warning Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo). 
Procedure Create a project for the Tempo Operator by running the following command: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: "true" name: openshift-tempo-operator EOF Create an Operator group by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF Create a subscription by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF Verification Check the Operator status by running the following command: USD oc get csv -n openshift-tempo-operator 3.1.2. Installing a TempoStack instance You can install a TempoStack instance by using the web console or the command line. 3.1.2.1. Installing a TempoStack instance by using the web console You can install a TempoStack instance from the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . For more information, see "Object storage setup". Warning Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo). Procedure Go to Home Projects Create Project to create a project of your choice for the TempoStack instance that you will create in a subsequent step. Go to Workloads Secrets Create From YAML to create a secret for your object storage bucket in the project that you created for the TempoStack instance. For more information, see "Object storage setup". Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoStack instance. Note You can create multiple TempoStack instances in separate projects on the same cluster. Go to Operators Installed Operators . Select TempoStack Create TempoStack YAML view . In the YAML view , customize the TempoStack custom resource (CR): apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m 1 Size of the persistent volume claim for the Tempo WAL. The default is 10Gi . 2 Secret you created in step 2 for the object storage that had been set up as one of the prerequisites. 3 Value of the name in the metadata of the secret. 
4 Accepted values are azure for Azure Blob Storage; gcs for Google Cloud Storage; and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. 5 Optional. 6 Optional: Name of a ConfigMap object that contains a CA certificate. 7 Optional. Example of a TempoStack CR for AWS S3 and MinIO storage apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route 1 In this example, the object storage was set up as one of the prerequisites, and the object storage secret was created in step 2. 2 The stack deployed in this example is configured to receive Jaeger Thrift over HTTP and OpenTelemetry Protocol (OTLP), which permits visualizing the data with the Jaeger UI. Select Create . Verification Use the Project: dropdown list to select the project of the TempoStack instance. Go to Operators Installed Operators to verify that the Status of the TempoStack instance is Condition: Ready . Go to Workloads Pods to verify that all the component pods of the TempoStack instance are running. Access the Tempo console: Go to Networking Routes and Ctrl + F to search for tempo . In the Location column, open the URL to access the Tempo console. Note The Tempo console initially shows no trace data following the Tempo console installation. 3.1.2.2. Installing a TempoStack instance by using the CLI You can install a TempoStack instance from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run the oc login command: USD oc login --username=<your_username> You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . For more information, see "Object storage setup". Warning Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo). Procedure Run the following command to create a project of your choice for the TempoStack instance that you will create in a subsequent step: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF In the project that you created for the TempoStack instance, create a secret for your object storage bucket by running the following command: USD oc apply -f - << EOF <object_storage_secret> EOF For more information, see "Object storage setup". Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoStack instance in the project that you created for it: Note You can create multiple TempoStack instances in separate projects on the same cluster. 
Customize the TempoStack custom resource (CR): apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m 1 Size of the persistent volume claim for the Tempo WAL. The default is 10Gi . 2 Secret you created in step 2 for the object storage that had been set up as one of the prerequisites. 3 Value of the name in the metadata of the secret. 4 Accepted values are azure for Azure Blob Storage; gcs for Google Cloud Storage; and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. 5 Optional. 6 Optional: Name of a ConfigMap object that contains a CA certificate. 7 Optional. Example of a TempoStack CR for AWS S3 and MinIO storage apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route 1 In this example, the object storage was set up as one of the prerequisites, and the object storage secret was created in step 2. 2 The stack deployed in this example is configured to receive Jaeger Thrift over HTTP and OpenTelemetry Protocol (OTLP), which permits visualizing the data with the Jaeger UI. Apply the customized CR by running the following command: USD oc apply -f - << EOF <tempostack_cr> EOF Verification Verify that the status of all TempoStack components is Running and the conditions are type: Ready by running the following command: USD oc get tempostacks.tempo.grafana.com simplest -o yaml Verify that all the TempoStack component pods are running by running the following command: USD oc get pods Access the Tempo console: Query the route details by running the following command: USD oc get route Open https://<route_from_previous_step> in a web browser. Note The Tempo console initially shows no trace data following the Tempo console installation. 3.1.3. Installing a TempoMonolithic instance Important The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install a TempoMonolithic instance by using the web console or the command line. The TempoMonolithic custom resource (CR) creates a Tempo deployment in monolithic mode. All components of the Tempo deployment, such as the compactor, distributor, ingester, querier, and query frontend, are contained in a single container. A TempoMonolithic instance supports storing traces in in-memory storage, a persistent volume, or object storage. 
Tempo deployment in monolithic mode is preferred for a small deployment, demonstration, testing, and as a migration path of the Red Hat OpenShift distributed tracing platform (Jaeger) all-in-one deployment. Note The monolithic deployment of Tempo does not scale horizontally. If you require horizontal scaling, use the TempoStack CR for a Tempo deployment in microservices mode. 3.1.3.1. Installing a TempoMonolithic instance by using the web console Important The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install a TempoMonolithic instance from the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Go to Home Projects Create Project to create a project of your choice for the TempoMonolithic instance that you will create in a subsequent step. Decide which type of supported storage to use for storing traces: in-memory storage, a persistent volume, or object storage. Important Object storage is not included with the distributed tracing platform (Tempo) and requires setting up an object store by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , or Google Cloud Storage . Additionally, opting for object storage requires creating a secret for your object storage bucket in the project that you created for the TempoMonolithic instance. You can do this in Workloads Secrets Create From YAML . For more information, see "Object storage setup". Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoMonolithic instance: Note You can create multiple TempoMonolithic instances in separate projects on the same cluster. Go to Operators Installed Operators . Select TempoMonolithic Create TempoMonolithic YAML view . In the YAML view , customize the TempoMonolithic custom resource (CR). The following TempoMonolithic CR creates a TempoMonolithic deployment with trace ingestion over OTLP/gRPC and OTLP/HTTP, storing traces in a supported type of storage and exposing Jaeger UI via a route: apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m 1 Type of storage for storing traces: in-memory storage, a persistent volume, or object storage. The value for a persistent volume is pv . The accepted values for object storage are s3 , gcs , or azure , depending on the used object store type. 
The default value is memory for the tmpfs in-memory storage, which is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down. 2 Memory size: For in-memory storage, this means the size of the tmpfs volume, where the default is 2Gi . For a persistent volume, this means the size of the persistent volume claim, where the default is 10Gi . For object storage, this means the size of the persistent volume claim for the Tempo WAL, where the default is 10Gi . 3 Optional: For object storage, the type of object storage. The accepted values are s3 , gcs , and azure , depending on the used object store type. 4 Optional: For object storage, the value of the name in the metadata of the storage secret. The storage secret must be in the same namespace as the TempoMonolithic instance and contain the fields specified in "Table 1. Required secret parameters" in the section "Object storage setup". 5 Optional. 6 Optional: Name of a ConfigMap object that contains a CA certificate. 7 Enables the Jaeger UI. 8 Enables creation of a route for the Jaeger UI. 9 Optional. Select Create . Verification Use the Project: dropdown list to select the project of the TempoMonolithic instance. Go to Operators Installed Operators to verify that the Status of the TempoMonolithic instance is Condition: Ready . Go to Workloads Pods to verify that the pod of the TempoMonolithic instance is running. Access the Jaeger UI: Go to Networking Routes and Ctrl + F to search for jaegerui . Note The Jaeger UI uses the tempo-<metadata_name_of_TempoMonolithic_CR>-jaegerui route. In the Location column, open the URL to access the Jaeger UI. When the pod of the TempoMonolithic instance is ready, you can send traces to the tempo-<metadata_name_of_TempoMonolithic_CR>:4317 (OTLP/gRPC) and tempo-<metadata_name_of_TempoMonolithic_CR>:4318 (OTLP/HTTP) endpoints inside the cluster. The Tempo API is available at the tempo-<metadata_name_of_TempoMonolithic_CR>:3200 endpoint inside the cluster. 3.1.3.2. Installing a TempoMonolithic instance by using the CLI Important The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install a TempoMonolithic instance from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run the oc login command: USD oc login --username=<your_username> Procedure Run the following command to create a project of your choice for the TempoMonolithic instance that you will create in a subsequent step: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempomonolithic_instance> EOF Decide which type of supported storage to use for storing traces: in-memory storage, a persistent volume, or object storage. 
Important Object storage is not included with the distributed tracing platform (Tempo) and requires setting up an object store by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , or Google Cloud Storage . Additionally, opting for object storage requires creating a secret for your object storage bucket in the project that you created for the TempoMonolithic instance. You can do this by running the following command: USD oc apply -f - << EOF <object_storage_secret> EOF For more information, see "Object storage setup". Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoMonolithic instance in the project that you created for it. Tip You can create multiple TempoMonolithic instances in separate projects on the same cluster. Customize the TempoMonolithic custom resource (CR). The following TempoMonolithic CR creates a TempoMonolithic deployment with trace ingestion over OTLP/gRPC and OTLP/HTTP, storing traces in a supported type of storage and exposing Jaeger UI via a route: apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m 1 Type of storage for storing traces: in-memory storage, a persistent volume, or object storage. The value for a persistent volume is pv . The accepted values for object storage are s3 , gcs , or azure , depending on the used object store type. The default value is memory for the tmpfs in-memory storage, which is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down. 2 Memory size: For in-memory storage, this means the size of the tmpfs volume, where the default is 2Gi . For a persistent volume, this means the size of the persistent volume claim, where the default is 10Gi . For object storage, this means the size of the persistent volume claim for the Tempo WAL, where the default is 10Gi . 3 Optional: For object storage, the type of object storage. The accepted values are s3 , gcs , and azure , depending on the used object store type. 4 Optional: For object storage, the value of the name in the metadata of the storage secret. The storage secret must be in the same namespace as the TempoMonolithic instance and contain the fields specified in "Table 1. Required secret parameters" in the section "Object storage setup". 5 Optional. 6 Optional: Name of a ConfigMap object that contains a CA certificate. 7 Enables the Jaeger UI. 8 Enables creation of a route for the Jaeger UI. 9 Optional. 
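For a quick demonstration, the CR can be much smaller. The following minimal CR is a sketch only — the name and namespace are placeholders, optional fields such as size, resources, and tls are omitted and fall back to their defaults, and the in-memory backend loses all trace data when the pod stops, so it is not suitable for production:

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic
metadata:
  name: sample
  namespace: <project_of_tempomonolithic_instance>
spec:
  storage:
    traces:
      backend: memory
  jaegerui:
    enabled: true
    route:
      enabled: true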
Apply the customized CR by running the following command: USD oc apply -f - << EOF <tempomonolithic_cr> EOF Verification Verify that the status of all TempoMonolithic components is Running and the conditions are type: Ready by running the following command: USD oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml Run the following command to verify that the pod of the TempoMonolithic instance is running: USD oc get pods Access the Jaeger UI: Query the route details for the tempo-<metadata_name_of_tempomonolithic_cr>-jaegerui route by running the following command: USD oc get route Open https://<route_from_previous_step> in a web browser. When the pod of the TempoMonolithic instance is ready, you can send traces to the tempo-<metadata_name_of_tempomonolithic_cr>:4317 (OTLP/gRPC) and tempo-<metadata_name_of_tempomonolithic_cr>:4318 (OTLP/HTTP) endpoints inside the cluster. The Tempo API is available at the tempo-<metadata_name_of_tempomonolithic_cr>:3200 endpoint inside the cluster. 3.1.4. Object storage setup You can use the following configuration parameters when setting up a supported object storage. Table 3.1. Required secret parameters Storage provider Secret parameters Red Hat OpenShift Data Foundation name: tempostack-dev-odf # example bucket: <bucket_name> # requires an ObjectBucketClaim endpoint: https://s3.openshift-storage.svc access_key_id: <data_foundation_access_key_id> access_key_secret: <data_foundation_access_key_secret> MinIO See MinIO Operator . name: tempostack-dev-minio # example bucket: <minio_bucket_name> # MinIO documentation endpoint: <minio_bucket_endpoint> access_key_id: <minio_access_key_id> access_key_secret: <minio_access_key_secret> Amazon S3 name: tempostack-dev-s3 # example bucket: <s3_bucket_name> # Amazon S3 documentation endpoint: <s3_bucket_endpoint> access_key_id: <s3_access_key_id> access_key_secret: <s3_access_key_secret> Amazon S3 with Security Token Service (STS) name: tempostack-dev-s3 # example bucket: <s3_bucket_name> # Amazon S3 documentation region: <s3_region> role_arn: <s3_role_arn> Microsoft Azure Blob Storage name: tempostack-dev-azure # example container: <azure_blob_storage_container_name> # Microsoft Azure documentation account_name: <azure_blob_storage_account_name> account_key: <azure_blob_storage_account_key> Google Cloud Storage on Google Cloud Platform (GCP) name: tempostack-dev-gcs # example bucketname: <google_cloud_storage_bucket_name> # requires a bucket created in a GCP project key.json: <path/to/key.json> # requires a service account in the bucket's GCP project for GCP authentication 3.1.4.1. Setting up the Amazon S3 storage with the Security Token Service You can set up the Amazon S3 storage with the Security Token Service (STS) by using the AWS Command Line Interface (AWS CLI). Important The Amazon S3 storage with the Security Token Service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have installed the latest version of the AWS CLI. Procedure Create an AWS S3 bucket. 
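For example, you can create the bucket with the AWS CLI; the bucket name and region below are placeholders that you must replace with your own values:

aws s3 mb s3://<s3_bucket_name> --region <s3_region>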
Create the following trust.json file for the AWS IAM policy that will set up a trust relationship for the AWS IAM role, created in the step, with the service account of the TempoStack instance: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::USD{<aws_account_id>}:oidc-provider/USD{<oidc_provider>}" 1 }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "USD{OIDC_PROVIDER}:sub": [ "system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}" 2 "system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}-query-frontend" ] } } } ] } 1 OIDC provider that you have configured on the OpenShift Container Platform. You can get the configured OIDC provider value also by running the following command: USD oc get authentication cluster -o json | jq -r '.spec.serviceAccountIssuer' | sed 's http[s]*:// ~g' . 2 Namespace in which you intend to create the TempoStack instance. Create an AWS IAM role by attaching the trust.json policy file that you created: USD aws iam create-role \ --role-name "tempo-s3-access" \ --assume-role-policy-document "file:///tmp/trust.json" \ --query Role.Arn \ --output text Attach an AWS IAM policy to the created role: USD aws iam attach-role-policy \ --role-name "tempo-s3-access" \ --policy-arn "arn:aws:iam::aws:policy/AmazonS3FullAccess" In the OpenShift Container Platform, create an object storage secret with keys as follows: apiVersion: v1 kind: Secret metadata: name: minio-test stringData: bucket: <s3_bucket_name> region: <s3_region> role_arn: <s3_role_arn> type: Opaque Additional resources AWS Identity and Access Management Documentation AWS Command Line Interface Documentation Configuring an OpenID Connect identity provider Identify AWS resources with Amazon Resource Names (ARNs) 3.1.4.2. Setting up IBM Cloud Object Storage You can set up IBM Cloud Object Storage by using the OpenShift CLI ( oc ). Prerequisites You have installed the latest version of OpenShift CLI ( oc ). For more information, see "Getting started with the OpenShift CLI" in Configure: CLI tools . You have installed the latest version of IBM Cloud Command Line Interface ( ibmcloud ). For more information, see "Getting started with the IBM Cloud CLI" in IBM Cloud Docs . You have configured IBM Cloud Object Storage. For more information, see "Choosing a plan and creating an instance" in IBM Cloud Docs . You have an IBM Cloud Platform account. You have ordered an IBM Cloud Object Storage plan. You have created an instance of IBM Cloud Object Storage. Procedure On IBM Cloud, create an object store bucket. 
On IBM Cloud, create a service key for connecting to the object store bucket by running the following command: USD ibmcloud resource service-key-create <tempo_bucket> Writer \ --instance-name <tempo_bucket> --parameters '{"HMAC":true}' On IBM Cloud, create a secret with the bucket credentials by running the following command: USD oc -n <namespace> create secret generic <ibm_cos_secret> \ --from-literal=bucket="<tempo_bucket>" \ --from-literal=endpoint="<ibm_bucket_endpoint>" \ --from-literal=access_key_id="<ibm_bucket_access_key>" \ --from-literal=access_key_secret="<ibm_bucket_secret_key>" On OpenShift Container Platform, create an object storage secret with keys as follows: apiVersion: v1 kind: Secret metadata: name: <ibm_cos_secret> stringData: bucket: <tempo_bucket> endpoint: <ibm_bucket_endpoint> access_key_id: <ibm_bucket_access_key> access_key_secret: <ibm_bucket_secret_key> type: Opaque On OpenShift Container Platform, set the storage section in the TempoStack custom resource as follows: apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack # ... spec: # ... storage: secret: name: <ibm_cos_secret> 1 type: s3 # ... 1 Name of the secret that contains the IBM Cloud Storage access and secret keys. Additional resources Getting started with the OpenShift CLI Getting started with the IBM Cloud CLI (IBM Cloud Docs) Choosing a plan and creating an instance (IBM Cloud Docs) Getting started with IBM Cloud Object Storage: Before you begin (IBM Cloud Docs) 3.1.5. Additional resources Creating a cluster admin OperatorHub.io Accessing the web console Installing from OperatorHub using the web console Creating applications from installed Operators Getting started with the OpenShift CLI 3.2. Configuring The Tempo Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings for creating and deploying the distributed tracing platform (Tempo) resources. You can install the default configuration or modify the file. 3.2.1. Configuring back-end storage For information about configuring the back-end storage, see Understanding persistent storage and the relevant configuration section for your chosen storage option. 3.2.2. Introduction to TempoStack configuration parameters The TempoStack custom resource (CR) defines the architecture and settings for creating the distributed tracing platform (Tempo) resources. You can modify these parameters to customize your implementation to your business needs. Example TempoStack CR apiVersion: tempo.grafana.com/v1alpha1 1 kind: TempoStack 2 metadata: 3 name: <name> 4 spec: 5 storage: {} 6 resources: {} 7 replicationFactor: 1 8 retention: {} 9 template: distributor: {} 10 ingester: {} 11 compactor: {} 12 querier: {} 13 queryFrontend: {} 14 gateway: {} 15 limits: 16 global: ingestion: {} 17 query: {} 18 observability: 19 grafana: {} metrics: {} tracing: {} search: {} 20 managementState: managed 21 1 API version to use when creating the object. 2 Defines the kind of Kubernetes object to create. 3 Data that uniquely identifies the object, including a name string, UID , and optional namespace . OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created. 4 Name of the TempoStack instance. 5 Contains all of the configuration parameters of the TempoStack instance. When a common definition for all Tempo components is required, define it in the spec section. 
When the definition relates to an individual component, place it in the spec.template.<component> section. 6 Storage is specified at instance deployment. See the installation page for information about storage options for the instance. 7 Defines the compute resources for the Tempo container. 8 Integer value for the number of ingesters that must acknowledge the data from the distributors before accepting a span. 9 Configuration options for retention of traces. 10 Configuration options for the Tempo distributor component. 11 Configuration options for the Tempo ingester component. 12 Configuration options for the Tempo compactor component. 13 Configuration options for the Tempo querier component. 14 Configuration options for the Tempo query-frontend component. 15 Configuration options for the Tempo gateway component. 16 Limits ingestion and query rates. 17 Defines ingestion rate limits. 18 Defines query rate limits. 19 Configures operands to handle telemetry data. 20 Configures search capabilities. 21 Defines whether or not this CR is managed by the Operator. The default value is managed . Additional resources Installing a TempoStack instance Installing a TempoMonolithic instance 3.2.3. Query configuration options Two components of the distributed tracing platform (Tempo), the querier and query frontend, manage queries. You can configure both of these components. The querier component finds the requested trace ID in the ingesters or back-end storage. Depending on the set parameters, the querier component can query both the ingesters and pull bloom or indexes from the back end to search blocks in object storage. The querier component exposes an HTTP endpoint at GET /querier/api/traces/<trace_id> , but it is not expected to be used directly. Queries must be sent to the query frontend. Table 3.2. Configuration parameters for the querier component Parameter Description Values nodeSelector The simple form of the node-selection constraint. type: object replicas The number of replicas to be created for the component. type: integer; format: int32 tolerations Component-specific pod tolerations. type: array The query frontend component is responsible for sharding the search space for an incoming query. The query frontend exposes traces via a simple HTTP endpoint: GET /api/traces/<trace_id> . Internally, the query frontend component splits the blockID space into a configurable number of shards and then queues these requests. The querier component connects to the query frontend component via a streaming gRPC connection to process these sharded queries. Table 3.3. Configuration parameters for the query frontend component Parameter Description Values component Configuration of the query frontend component. type: object component.nodeSelector The simple form of the node selection constraint. type: object component.replicas The number of replicas to be created for the query frontend component. type: integer; format: int32 component.tolerations Pod tolerations specific to the query frontend component. type: array jaegerQuery The options specific to the Jaeger Query component. type: object jaegerQuery.enabled When enabled , creates the Jaeger Query component, jaegerQuery . type: boolean jaegerQuery.ingress The options for the Jaeger Query ingress. type: object jaegerQuery.ingress.annotations The annotations of the ingress object. type: object jaegerQuery.ingress.host The hostname of the ingress object. type: string jaegerQuery.ingress.ingressClassName The name of an IngressClass cluster resource. 
Defines which ingress controller serves this ingress resource. type: string jaegerQuery.ingress.route The options for the OpenShift route. type: object jaegerQuery.ingress.route.termination The termination type. The default is edge . type: string (enum: insecure, edge, passthrough, reencrypt) jaegerQuery.ingress.type The type of ingress for the Jaeger Query UI. The supported types are ingress , route , and none . type: string (enum: ingress, route) jaegerQuery.monitorTab The monitor tab configuration. type: object jaegerQuery.monitorTab.enabled Enables the monitor tab in the Jaeger console. The PrometheusEndpoint must be configured. type: boolean jaegerQuery.monitorTab.prometheusEndpoint The endpoint to the Prometheus instance that contains the span rate, error, and duration (RED) metrics. For example, https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 . type: string Example configuration of the query frontend component in a TempoStack CR apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route Additional resources Understanding taints and tolerations 3.2.4. Configuration of the monitor tab in Jaeger UI Trace data contains rich information, and the data is normalized across instrumented languages and frameworks. Therefore, request rate, error, and duration (RED) metrics can be extracted from traces. The metrics can be visualized in Jaeger console in the Monitor tab. The metrics are derived from spans in the OpenTelemetry Collector that are scraped from the Collector by the Prometheus deployed in the user-workload monitoring stack. The Jaeger UI queries these metrics from the Prometheus endpoint and visualizes them. 3.2.4.1. OpenTelemetry Collector configuration The OpenTelemetry Collector requires configuration of the spanmetrics connector that derives metrics from traces and exports the metrics in the Prometheus format. OpenTelemetry Collector custom resource for span RED kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: "tempo-simplest-distributor:4317" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus] 1 Creates the ServiceMonitor custom resource to enable scraping of the Prometheus exporter. 2 The Spanmetrics connector receives traces and exports metrics. 3 The OTLP receiver to receive spans in the OpenTelemetry protocol. 4 The Prometheus exporter is used to export metrics in the Prometheus format. 5 The Spanmetrics connector is configured as exporter in traces pipeline. 6 The Spanmetrics connector is configured as receiver in metrics pipeline. 3.2.4.2. Tempo configuration The TempoStack custom resource must specify the following: the Monitor tab is enabled, and the Prometheus endpoint is set to the Thanos querier service to query the data from the user-defined monitoring stack. 
TempoStack custom resource with the enabled Monitor tab apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: redmetrics spec: storage: secret: name: minio-test type: s3 storageSize: 1Gi template: gateway: enabled: false queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 redMetricsNamespace: "" 3 ingress: type: route 1 Enables the monitoring tab in the Jaeger console. 2 The service name for Thanos Querier from user-workload monitoring. 3 Optional: The metrics namespace on which the Jaeger query retrieves the Prometheus metrics. Include this line only if you are using an OpenTelemetry Collector version earlier than 0.109.0. If you are using an OpenTelemetry Collector version 0.109.0 or later, omit this line. 3.2.4.3. Span RED metrics and alerting rules The metrics generated by the spanmetrics connector are usable with alerting rules. For example, for alerts about a slow service or to define service level objectives (SLOs), the connector creates a duration_bucket histogram and the calls counter metric. These metrics have labels that identify the service, API name, operation type, and other attributes. Table 3.4. Labels of the metrics created in the spanmetrics connector Label Description Values service_name Service name set by the otel_service_name environment variable. frontend span_name Name of the operation. / /customer span_kind Identifies the server, client, messaging, or internal operation. SPAN_KIND_SERVER SPAN_KIND_CLIENT SPAN_KIND_PRODUCER SPAN_KIND_CONSUMER SPAN_KIND_INTERNAL Example PrometheusRule CR that defines an alerting rule for SLO when not serving 95% of requests within 2000ms on the front-end service apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: span-red spec: groups: - name: server-side-latency rules: - alert: SpanREDFrontendAPIRequestLatency expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name="frontend", span_kind="SPAN_KIND_SERVER"}[5m])) by (le, service_name, span_name)) > 2000 1 labels: severity: Warning annotations: summary: "High request latency on {{USDlabels.service_name}} and {{USDlabels.span_name}}" description: "{{USDlabels.instance}} has 95th request latency above 2s (current value: {{USDvalue}}s)" 1 The expression for checking if 95% of the front-end server response time values are below 2000 ms. The time range ( [5m] ) must be at least four times the scrape interval and long enough to accommodate a change in the metric. 3.2.5. Configuring the receiver TLS The custom resource of your TempoStack or TempoMonolithic instance supports configuring the TLS for receivers by using user-provided certificates or OpenShift's service serving certificates. 3.2.5.1. Receiver TLS configuration for a TempoStack instance You can provide a TLS certificate in a secret or use the service serving certificates that are generated by OpenShift Container Platform. To provide a TLS certificate in a secret, configure it in the TempoStack custom resource. Note This feature is not supported with the enabled Tempo Gateway. TLS for receivers and using a user-provided certificate in a secret apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack # ... spec: # ... template: distributor: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3 # ... 1 TLS enabled at the Tempo Distributor. 2 Secret containing a tls.key key and tls.crt certificate that you apply in advance. 
3 Optional: CA in a config map to enable mutual TLS authentication (mTLS). Alternatively, you can use the service serving certificates that are generated by OpenShift Container Platform. Note Mutual TLS authentication (mTLS) is not supported with this feature. TLS for receivers and using the service serving certificates that are generated by OpenShift Container Platform apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack # ... spec: # ... template: distributor: tls: enabled: true 1 # ... 1 Sufficient configuration for the TLS at the Tempo Distributor. Additional resources Understanding service serving certificates Service CA certificates 3.2.5.2. Receiver TLS configuration for a TempoMonolithic instance You can provide a TLS certificate in a secret or use the service serving certificates that are generated by OpenShift Container Platform. To provide a TLS certificate in a secret, configure it in the TempoMonolithic custom resource. Note This feature is not supported with the enabled Tempo Gateway. TLS for receivers and using a user-provided certificate in a secret apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic # ... spec: # ... ingestion: otlp: grpc: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3 # ... 1 TLS enabled at the Tempo Distributor. 2 Secret containing a tls.key key and tls.crt certificate that you apply in advance. 3 Optional: CA in a config map to enable mutual TLS authentication (mTLS). Alternatively, you can use the service serving certificates that are generated by OpenShift Container Platform. Note Mutual TLS authentication (mTLS) is not supported with this feature. TLS for receivers and using the service serving certificates that are generated by OpenShift Container Platform apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic # ... spec: # ... ingestion: otlp: grpc: tls: enabled: true http: tls: enabled: true 1 # ... 1 Minimal configuration for the TLS at the Tempo Distributor. Additional resources Understanding service serving certificates Service CA certificates 3.2.6. Multitenancy Multitenancy with authentication and authorization is provided in the Tempo Gateway service. The authentication uses OpenShift OAuth and the Kubernetes TokenReview API. The authorization uses the Kubernetes SubjectAccessReview API. Note The Tempo Gateway service supports ingestion of traces only via the OTLP/gRPC. The OTLP/HTTP is not supported. Sample Tempo CR with two tenants, dev and prod apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: chainsaw-multitenancy spec: storage: secret: name: minio type: s3 storageSize: 1Gi resources: total: limits: memory: 2Gi cpu: 2000m tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" 4 - tenantName: prod tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true 1 Must be set to openshift . 2 The list of tenants. 3 The tenant name. Must be provided in the X-Scope-OrgId header when ingesting the data. 4 A unique tenant ID. 5 Enables a gateway that performs authentication and authorization. The Jaeger UI is exposed at http://<gateway-ingress>/api/traces/v1/<tenant-name>/search . The authorization configuration uses the ClusterRole and ClusterRoleBinding of the Kubernetes Role-Based Access Control (RBAC). By default, no users have read or write permissions. 
Sample of the read RBAC configuration that allows authenticated users to read the trace data of the dev and prod tenants apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3 1 Lists the tenants. 2 The get value enables the read operation. 3 Grants all authenticated users the read permissions for trace data. Sample of the write RBAC configuration that allows the otel-collector service account to write the trace data for the dev tenant apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel 1 The service account name for the client to use when exporting trace data. The client must send the service account token, /var/run/secrets/kubernetes.io/serviceaccount/token , as the bearer token header. 2 Lists the tenants. 3 The create value enables the write operation. Trace data can be sent to the Tempo instance from the OpenTelemetry Collector that uses the service account with RBAC for writing the data. Sample OpenTelemetry CR configuration apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: "/var/run/secrets/kubernetes.io/serviceaccount/token" exporters: otlp/dev: 1 endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: "dev" otlphttp/dev: 2 endpoint: https://tempo-simplest-gateway.chainsaw-multitenancy.svc.cluster.local:8080/api/traces/v1/dev tls: insecure: false ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: "dev" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev] 3 1 OTLP gRPC Exporter. 2 OTLP HTTP Exporter. 3 You can specify otlp/dev for the OTLP gRPC Exporter or otlphttp/dev for the OTLP HTTP Exporter. 3.2.7. Using taints and tolerations To schedule the TempoStack pods on dedicated nodes, see How to deploy the different TempoStack components on infra nodes using nodeSelector and tolerations in OpenShift 4 . 3.2.8. Configuring monitoring and alerts The Tempo Operator supports monitoring and alerts about each TempoStack component such as distributor, ingester, and so on, and exposes upgrade and operational metrics about the Operator itself. 3.2.8.1. Configuring the TempoStack metrics and alerts You can enable metrics and alerts of TempoStack instances. 
Prerequisites Monitoring for user-defined projects is enabled in the cluster. Procedure To enable metrics of a TempoStack instance, set the spec.observability.metrics.createServiceMonitors field to true : apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true To enable alerts for a TempoStack instance, set the spec.observability.metrics.createPrometheusRules field to true : apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true Verification You can use the Administrator view of the web console to verify successful configuration: Go to Observe Targets , filter for Source: User , and check that ServiceMonitors in the format tempo-<instance_name>-<component> have the Up status. To verify that alerts are set up correctly, go to Observe Alerting Alerting rules , filter for Source: User , and check that the Alert rules for the TempoStack instance components are available. Additional resources Enabling monitoring for user-defined projects 3.2.8.2. Configuring the Tempo Operator metrics and alerts When installing the Tempo Operator from the web console, you can select the Enable Operator recommended cluster monitoring on this Namespace checkbox, which enables creating metrics and alerts of the Tempo Operator. If the checkbox was not selected during installation, you can manually enable metrics and alerts even after installing the Tempo Operator. Procedure Add the openshift.io/cluster-monitoring: "true" label in the project where the Tempo Operator is installed, which is openshift-tempo-operator by default. Verification You can use the Administrator view of the web console to verify successful configuration: Go to Observe Targets , filter for Source: Platform , and search for tempo-operator , which must have the Up status. To verify that alerts are set up correctly, go to Observe Alerting Alerting rules , filter for Source: Platform , and locate the Alert rules for the Tempo Operator . 3.3. Troubleshooting You can diagnose and fix issues in TempoStack or TempoMonolithic instances by using various troubleshooting methods. 3.3.1. Collecting diagnostic data from the command line When submitting a support case, it is helpful to include diagnostic information about your cluster to Red Hat Support. You can use the oc adm must-gather tool to gather diagnostic data for resources of various types, such as TempoStack or TempoMonolithic , and the created resources like Deployment , Pod , or ConfigMap . The oc adm must-gather tool creates a new pod that collects this data. Procedure From the directory where you want to save the collected data, run the oc adm must-gather command to collect the data: USD oc adm must-gather --image=ghcr.io/grafana/tempo-operator/must-gather -- \ /usr/bin/must-gather --operator-namespace <operator_namespace> 1 1 The default namespace where the Operator is installed is openshift-tempo-operator . Verification Verify that the new directory is created and contains the collected data. 3.4. Upgrading For version upgrades, the Tempo Operator uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs in the OpenShift Container Platform by default. The OLM queries for available Operators as well as upgrades for installed Operators. 
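If you want to confirm which Operator version is currently installed before or after an upgrade, you can list the cluster service version (CSV) in the Operator's namespace; openshift-tempo-operator shown here is the default installation namespace and might differ in your cluster:

oc get csv -n openshift-tempo-operator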
When the Tempo Operator is upgraded to the new version, it scans for running TempoStack instances that it manages and upgrades them to the version corresponding to the Operator's new version. 3.4.1. Additional resources Operator Lifecycle Manager concepts and resources Updating installed Operators 3.5. Removing The steps for removing the Red Hat OpenShift distributed tracing platform (Tempo) from an OpenShift Container Platform cluster are as follows: Shut down all distributed tracing platform (Tempo) pods. Remove any TempoStack instances. Remove the Tempo Operator. 3.5.1. Removing by using the web console You can remove a TempoStack instance in the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Go to Operators Installed Operators Tempo Operator TempoStack . To remove the TempoStack instance, select Delete TempoStack Delete . Optional: Remove the Tempo Operator. 3.5.2. Removing by using the CLI You can remove a TempoStack instance on the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run oc login : USD oc login --username=<your_username> Procedure Get the name of the TempoStack instance by running the following command: USD oc get deployments -n <project_of_tempostack_instance> Remove the TempoStack instance by running the following command: USD oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance> Optional: Remove the Tempo Operator. Verification Run the following command to verify that the TempoStack instance is not found in the output, which indicates its successful removal: USD oc get deployments -n <project_of_tempostack_instance> 3.5.3. Additional resources Deleting Operators from a cluster Getting started with the OpenShift CLI | [
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: \"true\" name: openshift-tempo-operator EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-tempo-operator",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc apply -f - << EOF <tempostack_cr> EOF",
"oc get tempostacks.tempo.grafana.com simplest -o yaml",
"oc get pods",
"oc get route",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempomonolithic_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc apply -f - << EOF <tempomonolithic_cr> EOF",
"oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml",
"oc get pods",
"oc get route",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{<aws_account_id>}:oidc-provider/USD{<oidc_provider>}\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}\" 2 \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}-query-frontend\" ] } } } ] }",
"aws iam create-role --role-name \"tempo-s3-access\" --assume-role-policy-document \"file:///tmp/trust.json\" --query Role.Arn --output text",
"aws iam attach-role-policy --role-name \"tempo-s3-access\" --policy-arn \"arn:aws:iam::aws:policy/AmazonS3FullAccess\"",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: bucket: <s3_bucket_name> region: <s3_region> role_arn: <s3_role_arn> type: Opaque",
"ibmcloud resource service-key-create <tempo_bucket> Writer --instance-name <tempo_bucket> --parameters '{\"HMAC\":true}'",
"oc -n <namespace> create secret generic <ibm_cos_secret> --from-literal=bucket=\"<tempo_bucket>\" --from-literal=endpoint=\"<ibm_bucket_endpoint>\" --from-literal=access_key_id=\"<ibm_bucket_access_key>\" --from-literal=access_key_secret=\"<ibm_bucket_secret_key>\"",
"apiVersion: v1 kind: Secret metadata: name: <ibm_cos_secret> stringData: bucket: <tempo_bucket> endpoint: <ibm_bucket_endpoint> access_key_id: <ibm_bucket_access_key> access_key_secret: <ibm_bucket_secret_key> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: storage: secret: name: <ibm_cos_secret> 1 type: s3",
"apiVersion: tempo.grafana.com/v1alpha1 1 kind: TempoStack 2 metadata: 3 name: <name> 4 spec: 5 storage: {} 6 resources: {} 7 replicationFactor: 1 8 retention: {} 9 template: distributor: {} 10 ingester: {} 11 compactor: {} 12 querier: {} 13 queryFrontend: {} 14 gateway: {} 15 limits: 16 global: ingestion: {} 17 query: {} 18 observability: 19 grafana: {} metrics: {} tracing: {} search: {} 20 managementState: managed 21",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route",
"kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: \"tempo-simplest-distributor:4317\" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus]",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: redmetrics spec: storage: secret: name: minio-test type: s3 storageSize: 1Gi template: gateway: enabled: false queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 redMetricsNamespace: \"\" 3 ingress: type: route",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: span-red spec: groups: - name: server-side-latency rules: - alert: SpanREDFrontendAPIRequestLatency expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name=\"frontend\", span_kind=\"SPAN_KIND_SERVER\"}[5m])) by (le, service_name, span_name)) > 2000 1 labels: severity: Warning annotations: summary: \"High request latency on {{USDlabels.service_name}} and {{USDlabels.span_name}}\" description: \"{{USDlabels.instance}} has 95th request latency above 2s (current value: {{USDvalue}}s)\"",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true http: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: chainsaw-multitenancy spec: storage: secret: name: minio type: s3 storageSize: 1Gi resources: total: limits: memory: 2Gi cpu: 2000m tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfa\" 4 - tenantName: prod tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfb\" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" exporters: otlp/dev: 1 endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" otlphttp/dev: 2 endpoint: https://tempo-simplest-gateway.chainsaw-multitenancy.svc.cluster.local:8080/api/traces/v1/dev tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev] 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true",
"oc adm must-gather --image=ghcr.io/grafana/tempo-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1",
"oc login --username=<your_username>",
"oc get deployments -n <project_of_tempostack_instance>",
"oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance>",
"oc get deployments -n <project_of_tempostack_instance>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/distributed_tracing/distributed-tracing-platform-tempo |
Chapter 3. AMQ Streams deployment of Kafka | Chapter 3. AMQ Streams deployment of Kafka Apache Kafka components are provided for deployment to OpenShift with the AMQ Streams distribution. The Kafka components are generally run as clusters for availability. A typical deployment incorporating Kafka components might include: Kafka cluster of broker nodes ZooKeeper cluster of replicated ZooKeeper instances Kafka Connect cluster for external data connections Kafka MirrorMaker cluster to mirror the Kafka cluster in a secondary cluster Kafka Exporter to extract additional Kafka metrics data for monitoring Kafka Bridge to make HTTP-based requests to the Kafka cluster Not all of these components are mandatory, though you need Kafka and ZooKeeper as a minimum. Some components can be deployed without Kafka, such as MirrorMaker or Kafka Connect. 3.1. Kafka component architecture A cluster of Kafka brokers handles delivery of messages. A broker uses Apache ZooKeeper for storing configuration data and for cluster coordination. Before running Apache Kafka, an Apache ZooKeeper cluster has to be ready. Each of the other Kafka components interact with the Kafka cluster to perform specific roles. Kafka component interaction Apache ZooKeeper Apache ZooKeeper is a core dependency for Kafka as it provides a cluster coordination service, storing and tracking the status of brokers and consumers. ZooKeeper is also used for leader election of partitions. Kafka Connect Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems using Connector plugins. Kafka Connect provides a framework for integrating Kafka with an external data source or target, such as a database, for import or export of data using connectors. Connectors are plugins that provide the connection configuration needed. A source connector pushes external data into Kafka. A sink connector extracts data out of Kafka External data is translated and transformed into the appropriate format. You can deploy Kafka Connect with build configuration that automatically builds a container image with the connector plugins you require for your data connections. Kafka MirrorMaker Kafka MirrorMaker replicates data between two Kafka clusters, within or across data centers. MirrorMaker takes messages from a source Kafka cluster and writes them to a target Kafka cluster. Kafka Bridge Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster. Kafka Exporter Kafka Exporter extracts data for analysis as Prometheus metrics, primarily data relating to offsets, consumer groups, consumer lag and topics. Consumer lag is the delay between the last message written to a partition and the message currently being picked up from that partition by a consumer 3.2. Kafka Bridge interface The Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. It offers the advantages of a web API connection to AMQ Streams, without the need for client applications to interpret the Kafka protocol. The API has two main resources - consumers and topics - that are exposed and made accessible through endpoints to interact with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not the consumers and producers connected directly to Kafka. 3.2.1. HTTP requests The Kafka Bridge supports HTTP requests to a Kafka cluster, with methods to: Send messages to a topic. Retrieve messages from topics. Retrieve a list of partitions for a topic. 
Create and delete consumers. Subscribe consumers to topics, so that they start receiving messages from those topics. Retrieve a list of topics that a consumer is subscribed to. Unsubscribe consumers from topics. Assign partitions to consumers. Commit a list of consumer offsets. Seek on a partition, so that a consumer starts receiving messages from the first or last offset position, or a given offset position. The methods provide JSON responses and HTTP response code error handling. Messages can be sent in JSON or binary formats. Clients can produce and consume messages without the requirement to use the native Kafka protocol. Additional resources To view the API documentation, including example requests and responses, see the Kafka Bridge API reference . 3.2.2. Supported clients for the Kafka Bridge You can use the Kafka Bridge to integrate both internal and external HTTP client applications with your Kafka cluster. Internal clients Internal clients are container-based HTTP clients running in the same OpenShift cluster as the Kafka Bridge itself. Internal clients can access the Kafka Bridge on the host and port defined in the KafkaBridge custom resource. External clients External clients are HTTP clients running outside the OpenShift cluster in which the Kafka Bridge is deployed and running. External clients can access the Kafka Bridge through an OpenShift Route, a loadbalancer service, or using an Ingress. HTTP internal and external client integration | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/amq_streams_on_openshift_overview/kafka-components_str |
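As a concrete illustration of the endpoints listed above, the following curl sketch produces a record and then walks a consumer through creation, subscription, and polling. The Bridge address (my-bridge:8080), the topic name my-topic, and the consumer group and consumer names are placeholders for this sketch, not values defined in this document.
# Produce a JSON record to a topic
curl -X POST http://my-bridge:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"key":"key-1","value":"hello world"}]}'
# Create a consumer inside a consumer group
curl -X POST http://my-bridge:8080/consumers/my-group \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"name":"my-consumer","format":"json","auto.offset.reset":"earliest"}'
# Subscribe the consumer to the topic
curl -X POST http://my-bridge:8080/consumers/my-group/instances/my-consumer/subscription \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"topics":["my-topic"]}'
# Poll for records; the first poll can return an empty array while the subscription is still being established
curl -X GET http://my-bridge:8080/consumers/my-group/instances/my-consumer/records \
  -H 'Accept: application/vnd.kafka.json.v2+json'
Internal clients reach the Bridge on the host and port defined in the KafkaBridge resource; external clients substitute whatever Route, loadbalancer, or Ingress address exposes it, as described above.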
Chapter 13. PMML model execution | Chapter 13. PMML model execution You can import PMML files into your Red Hat Decision Manager project using Business Central ( Menu Design Projects Import Asset ) or package the PMML files as part of your project knowledge JAR (KJAR) file without Business Central. After you implement your PMML files in your Red Hat Decision Manager project, you can execute the PMML-based decision service by embedding PMML calls directly in your Java application or by sending an ApplyPmmlModelCommand command to a configured KIE Server. For more information about including PMML assets with your project packaging and deployment method, see Packaging and deploying an Red Hat Decision Manager project . Note You can also include a PMML model as part of a Decision Model and Notation (DMN) service in Business Central. When you include a PMML model within a DMN file, you can invoke that PMML model as a boxed function expression for a DMN decision node or business knowledge model node. For more information about including PMML models in a DMN service, see Designing a decision service using DMN models . 13.1. Embedding a PMML trusty call directly in a Java application A KIE container is local when the knowledge assets are either embedded directly into the calling program or are physically pulled in using Maven dependencies for the KJAR. You embed knowledge assets directly into a project if there is a tight relationship between the version of the code and the version of the PMML definition. Any changes to the decision take effect after you have intentionally updated and redeployed the application. A benefit of this approach is that proper operation does not rely on any external dependencies to the run time, which can be a limitation of locked-down environments. Prerequisites A KJAR containing the PMML model to execute has been created. For more information about project packaging, see Packaging and deploying an Red Hat Decision Manager project . Procedure In your client application, add the following dependencies to the relevant classpath of your Java project: <!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml-dependencies</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{rhpam.version}</version> </dependencies> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency> The <version> is the Maven artifact version for Red Hat Decision Manager currently used in your project (for example, 7.67.0.Final-redhat-00024). Note Instead of specifying a Red Hat Decision Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. 
Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between RHDM product and maven library version? . Create a KIE container from classpath or ReleaseId : KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "my-kjar", "1.0.0" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId ); Alternative option: KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer(); Create an instance of the PMMLRuntime that is used to execute the model: PMMLRuntime pmmlRuntime = KieRuntimeFactory.of(kieContainer.getKieBase()).get(PMMLRuntime.class); Create an instance of the PMMLRequestData class that applies your PMML model to a data set: PMMLRequestData pmmlRequestData = new PMMLRequestData({correlation_id}, {model_name}); pmmlRequestData.addRequestParam({parameter_name}, {parameter_value}) ... Create an instance of the PMMLContext class that contains the input data: PMMLContext pmmlContext = new PMMLContextImpl(pmmlRequestData); Retrieve the PMML4Result while executing the PMML model with the required PMML class instances that you created: PMML4Result pmml4Result = pmmlRuntime.evaluate({model_name}, pmmlContext); 13.2. Embedding a PMML legacy call directly in a Java application A KIE container is local when the knowledge assets are either embedded directly into the calling program or are physically pulled in using Maven dependencies for the KJAR. You embed knowledge assets directly into a project if there is a tight relationship between the version of the code and the version of the PMML definition. Any changes to the decision take effect after you have intentionally updated and redeployed the application. A benefit of this approach is that proper operation does not rely on any external dependencies to the run time, which can be a limitation of locked-down environments. Using Maven dependencies enables further flexibility because the specific version of the decision can dynamically change (for example, by using a system property), and it can be periodically scanned for updates and automatically updated. This introduces an external dependency on the deploy time of the service, but executes the decision locally, reducing reliance on an external service being available during run time. Prerequisites A KJAR containing the PMML model to execute has been created. For more information about project packaging, see Packaging and deploying an Red Hat Decision Manager project . Procedure In your client application, add the following dependencies to the relevant classpath of your Java project: <!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{rhpam.version}</version> </dependencies> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency> The <version> is the Maven artifact version for Red Hat Decision Manager currently used in your project (for example, 7.67.0.Final-redhat-00024). 
Note Instead of specifying a Red Hat Decision Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between RHDM product and maven library version? . Important To use the legacy implementation, ensure that the kie-pmml-implementation system property is set as legacy . Create a KIE container from classpath or ReleaseId : KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "my-kjar", "1.0.0" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId ); Alternative option: KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer(); Create an instance of the PMMLRequestData class, which applies your PMML model to a set of data: public class PMMLRequestData { private String correlationId; 1 private String modelName; 2 private String source; 3 private List<ParameterInfo<?>> requestParams; 4 ... } 1 Identifies data that is associated with a particular request or result 2 The name of the model that should be applied to the request data 3 Used by internally generated PMMLRequestData objects to identify the segment that generated the request 4 The default mechanism for sending input data points Create an instance of the PMML4Result class, which holds the output information that is the result of applying the PMML-based rules to the input data: public class PMML4Result { private String correlationId; private String segmentationId; 1 private String segmentId; 2 private int segmentIndex; 3 private String resultCode; 4 private Map<String, Object> resultVariables; 5 ... } 1 Used when the model type is MiningModel . The segmentationId is used to differentiate between multiple segmentations. 2 Used in conjunction with the segmentationId to identify which segment generated the results. 3 Used to maintain the order of segments. 4 Used to determine whether the model was successfully applied, where OK indicates success. 5 Contains the name of a resultant variable and its associated value. In addition to the normal getter methods, the PMML4Result class also supports the following methods for directly retrieving the values for result variables: public <T> Optional<T> getResultValue(String objName, String objField, Class<T> clazz, Object...params) public Object getResultValue(String objName, String objField, Object...params) Create an instance of the ParameterInfo class, which serves as a wrapper for basic data type objects used as part of the PMMLRequestData class: public class ParameterInfo<T> { 1 private String correlationId; private String name; 2 private String capitalizedName; private Class<T> type; 3 private T value; 4 ... 
} 1 The parameterized class to handle many different types 2 The name of the variable that is expected as input for the model 3 The class that is the actual type of the variable 4 The actual value of the variable Execute the PMML model based on the required PMML class instances that you have created: public void executeModel(KieBase kbase, Map<String,Object> variables, String modelName, String correlationId, String modelPkgName) { RuleUnitExecutor executor = RuleUnitExecutor.create().bind(kbase); PMMLRequestData request = new PMMLRequestData(correlationId, modelName); PMML4Result resultHolder = new PMML4Result(correlationId); variables.entrySet().forEach( es -> { request.addRequestParam(es.getKey(), es.getValue()); }); DataSource<PMMLRequestData> requestData = executor.newDataSource("request"); DataSource<PMML4Result> resultData = executor.newDataSource("results"); DataSource<PMMLData> internalData = executor.newDataSource("pmmlData"); requestData.insert(request); resultData.insert(resultHolder); List<String> possiblePackageNames = calculatePossiblePackageNames(modelName, modelPkgName); Class<? extends RuleUnit> ruleUnitClass = getStartingRuleUnit("RuleUnitIndicator", (InternalKnowledgeBase)kbase, possiblePackageNames); if (ruleUnitClass != null) { executor.run(ruleUnitClass); if ( "OK".equals(resultHolder.getResultCode()) ) { // extract result variables here } } } protected Class<? extends RuleUnit> getStartingRuleUnit(String startingRule, InternalKnowledgeBase ikb, List<String> possiblePackages) { RuleUnitRegistry unitRegistry = ikb.getRuleUnitRegistry(); Map<String,InternalKnowledgePackage> pkgs = ikb.getPackagesMap(); RuleImpl ruleImpl = null; for (String pkgName: possiblePackages) { if (pkgs.containsKey(pkgName)) { InternalKnowledgePackage pkg = pkgs.get(pkgName); ruleImpl = pkg.getRule(startingRule); if (ruleImpl != null) { RuleUnitDescr descr = unitRegistry.getRuleUnitFor(ruleImpl).orElse(null); if (descr != null) { return descr.getRuleUnitClass(); } } } } return null; } protected List<String> calculatePossiblePackageNames(String modelId, String...knownPackageNames) { List<String> packageNames = new ArrayList<>(); String javaModelId = modelId.replaceAll("\\s",""); if (knownPackageNames != null && knownPackageNames.length > 0) { for (String knownPkgName: knownPackageNames) { packageNames.add(knownPkgName + "." + javaModelId); } } String basePkgName = PMML4UnitImpl.DEFAULT_ROOT_PACKAGE+"."+javaModelId; packageNames.add(basePkgName); return packageNames; } Rules are executed by the RuleUnitExecutor class. The RuleUnitExecutor class creates KIE sessions and adds the required DataSource objects to those sessions, and then executes the rules based on the RuleUnit that is passed as a parameter to the run() method. The calculatePossiblePackageNames and the getStartingRuleUnit methods determine the fully qualified name of the RuleUnit class that is passed to the run() method. To facilitate your PMML model execution, you can also use a PMML4ExecutionHelper class supported in Red Hat Decision Manager. For more information about the PMML helper class, see Section 13.2.1, "PMML execution helper class" . 13.2.1. PMML execution helper class Red Hat Decision Manager provides a PMML4ExecutionHelper class that helps create the PMMLRequestData class required for PMML model execution and that helps execute rules using the RuleUnitExecutor class. 
The following are examples of a PMML model execution without and with the PMML4ExecutionHelper class, as a comparison: Example PMML model execution without using PMML4ExecutionHelper public void executeModel(KieBase kbase, Map<String,Object> variables, String modelName, String correlationId, String modelPkgName) { RuleUnitExecutor executor = RuleUnitExecutor.create().bind(kbase); PMMLRequestData request = new PMMLRequestData(correlationId, modelName); PMML4Result resultHolder = new PMML4Result(correlationId); variables.entrySet().forEach( es -> { request.addRequestParam(es.getKey(), es.getValue()); }); DataSource<PMMLRequestData> requestData = executor.newDataSource("request"); DataSource<PMML4Result> resultData = executor.newDataSource("results"); DataSource<PMMLData> internalData = executor.newDataSource("pmmlData"); requestData.insert(request); resultData.insert(resultHolder); List<String> possiblePackageNames = calculatePossiblePackageNames(modelName, modelPkgName); Class<? extends RuleUnit> ruleUnitClass = getStartingRuleUnit("RuleUnitIndicator", (InternalKnowledgeBase)kbase, possiblePackageNames); if (ruleUnitClass != null) { executor.run(ruleUnitClass); if ( "OK".equals(resultHolder.getResultCode()) ) { // extract result variables here } } } protected Class<? extends RuleUnit> getStartingRuleUnit(String startingRule, InternalKnowledgeBase ikb, List<String> possiblePackages) { RuleUnitRegistry unitRegistry = ikb.getRuleUnitRegistry(); Map<String,InternalKnowledgePackage> pkgs = ikb.getPackagesMap(); RuleImpl ruleImpl = null; for (String pkgName: possiblePackages) { if (pkgs.containsKey(pkgName)) { InternalKnowledgePackage pkg = pkgs.get(pkgName); ruleImpl = pkg.getRule(startingRule); if (ruleImpl != null) { RuleUnitDescr descr = unitRegistry.getRuleUnitFor(ruleImpl).orElse(null); if (descr != null) { return descr.getRuleUnitClass(); } } } } return null; } protected List<String> calculatePossiblePackageNames(String modelId, String...knownPackageNames) { List<String> packageNames = new ArrayList<>(); String javaModelId = modelId.replaceAll("\\s",""); if (knownPackageNames != null && knownPackageNames.length > 0) { for (String knownPkgName: knownPackageNames) { packageNames.add(knownPkgName + "." + javaModelId); } } String basePkgName = PMML4UnitImpl.DEFAULT_ROOT_PACKAGE+"."+javaModelId; packageNames.add(basePkgName); return packageNames; } Example PMML model execution using PMML4ExecutionHelper public void executeModel(KieBase kbase, Map<String,Object> variables, String modelName, String modelPkgName, String correlationId) { PMML4ExecutionHelper helper = PMML4ExecutionHelperFactory.getExecutionHelper(modelName, kbase); helper.addPossiblePackageName(modelPkgName); PMMLRequestData request = new PMMLRequestData(correlationId, modelName); variables.entrySet().forEach(entry -> { request.addRequestParam(entry.getKey(), entry.getValue); }); PMML4Result resultHolder = helper.submitRequest(request); if ("OK".equals(resultHolder.getResultCode)) { // extract result variables here } } When you use the PMML4ExecutionHelper , you do not need to specify the possible package names nor the RuleUnit class as you would in a typical PMML model execution. To construct a PMML4ExecutionHelper class, you use the PMML4ExecutionHelperFactory class to determine how instances of PMML4ExecutionHelper are retrieved. 
The following are the available PMML4ExecutionHelperFactory class methods for constructing a PMML4ExecutionHelper class: PMML4ExecutionHelperFactory methods for PMML assets in a KIE base Use these methods when PMML assets have already been compiled and are being used from an existing KIE base: public static PMML4ExecutionHelper getExecutionHelper(String modelName, KieBase kbase) public static PMML4ExecutionHelper getExecutionHelper(String modelName, KieBase kbase, boolean includeMiningDataSources) PMML4ExecutionHelperFactory methods for PMML assets on the project classpath Use these methods when PMML assets are on the project classpath. The classPath argument is the project classpath location of the PMML file: public static PMML4ExecutionHelper getExecutionHelper(String modelName, String classPath, KieBaseConfiguration kieBaseConf) public static PMML4ExecutionHelper getExecutionHelper(String modelName,String classPath, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources) PMML4ExecutionHelperFactory methods for PMML assets in a byte array Use these methods when PMML assets are in the form of a byte array: public static PMML4ExecutionHelper getExecutionHelper(String modelName, byte[] content, KieBaseConfiguration kieBaseConf) public static PMML4ExecutionHelper getExecutionHelper(String modelName, byte[] content, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources) PMML4ExecutionHelperFactory methods for PMML assets in a Resource Use these methods when PMML assets are in the form of an org.kie.api.io.Resource object: public static PMML4ExecutionHelper getExecutionHelper(String modelName, Resource resource, KieBaseConfiguration kieBaseConf) public static PMML4ExecutionHelper getExecutionHelper(String modelName, Resource resource, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources) Note The classpath, byte array, and resource PMML4ExecutionHelperFactory methods create a KIE container for the generated rules and Java classes. The container is used as the source of the KIE base that the RuleUnitExecutor uses. The container is not persisted. The PMML4ExecutionHelperFactory method for PMML assets that are already in a KIE base does not create a KIE container in this way. 13.3. Executing a PMML model using KIE Server You can execute PMML models that have been deployed to KIE Server by sending the ApplyPmmlModelCommand command to the configured KIE Server. When you use this command, a PMMLRequestData object is sent to KIE Server and a PMML4Result result object is received as a reply. You can send PMML requests to KIE Server through the KIE Server REST API from a configured Java class or directly from a REST client. Prerequisites KIE Server is installed and configured, including a known user name and credentials for a user with the kie-server role. For installation options, see Planning a Red Hat Decision Manager installation . A KIE container is deployed in KIE Server in the form of a KJAR that includes the PMML model. For more information about project packaging, see Packaging and deploying an Red Hat Decision Manager project . You have the container ID of the KIE container containing the PMML model. 
Procedure In your client application, add the following dependencies to the relevant classpath of your Java project: Example of legacy implementation <!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{rhpam.version}</version> </dependencies> <!-- Required for the KIE Server Java client API --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency> Important To use the legacy implementation, ensure that the kie-pmml-implementation system property is set as legacy . Example of trusty implementation <!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml-dependencies</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{rhpam.version}</version> </dependencies> <!-- Required for the KIE Server Java client API --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency> The <version> is the Maven artifact version for Red Hat Decision Manager currently used in your project (for example, 7.67.0.Final-redhat-00024). Note Instead of specifying a Red Hat Decision Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between RHDM product and maven library version? . 
Create a KIE container from classpath or ReleaseId : KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "my-kjar", "1.0.0" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId ); Alternative option: KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer(); Create a class for sending requests to KIE Server and receiving responses: public class ApplyScorecardModel { private static final ReleaseId releaseId = new ReleaseId("org.acme","my-kjar","1.0.0"); private static final String containerId = "SampleModelContainer"; private static KieCommands commandFactory; private static ClassLoader kjarClassLoader; 1 private RuleServicesClient serviceClient; 2 // Attributes specific to your class instance private String rankedFirstCode; private Double score; // Initialization of non-final static attributes static { commandFactory = KieServices.Factory.get().getCommands(); // Specifications for kjarClassLoader, if used KieMavenRepository kmp = KieMavenRepository.getMavenRepository(); File artifactFile = kmp.resolveArtifact(releaseId).getFile(); if (artifactFile != null) { URL urls[] = new URL[1]; try { urls[0] = artifactFile.toURI().toURL(); classLoader = new KieURLClassLoader(urls,PMML4Result.class.getClassLoader()); } catch (MalformedURLException e) { logger.error("Error getting classLoader for "+containerId); logger.error(e.getMessage()); } } else { logger.warn("Did not find the artifact file for "+releaseId.toString()); } } public ApplyScorecardModel(KieServicesConfiguration kieConfig) { KieServicesClient clientFactory = KieServicesFactory.newKieServicesClient(kieConfig); serviceClient = clientFactory.getServicesClient(RuleServicesClient.class); } ... // Getters and setters ... // Method for executing the PMML model on KIE Server public void applyModel(String occupation, int age) { PMMLRequestData input = new PMMLRequestData("1234","SampleModelName"); 3 input.addRequestParam(new ParameterInfo("1234","occupation",String.class,occupation)); input.addRequestParam(new ParameterInfo("1234","age",Integer.class,age)); CommandFactoryServiceImpl cf = (CommandFactoryServiceImpl)commandFactory; ApplyPmmlModelCommand command = (ApplyPmmlModelCommand) cf.newApplyPmmlModel(request); 4 ServiceResponse<ExecutionResults> results = ruleClient.executeCommandsWithResults(CONTAINER_ID, command); 5 if (results != null) { 6 PMML4Result resultHolder = (PMML4Result)results.getResult().getValue("results"); if (resultHolder != null && "OK".equals(resultHolder.getResultCode())) { this.score = resultHolder.getResultValue("ScoreCard","score",Double.class).get(); Map<String,Object> rankingMap = (Map<String,Object>)resultHolder.getResultValue("ScoreCard","ranking"); if (rankingMap != null && !rankingMap.isEmpty()) { this.rankedFirstCode = rankingMap.keySet().iterator().(); } } } } } 1 Defines the class loader if you did not include the KJAR in your client project dependencies 2 Identifies the service client as defined in the configuration settings, including KIE Server REST API access credentials 3 Initializes a PMMLRequestData object 4 Creates an instance of the ApplyPmmlModelCommand 5 Sends the command using the service client 6 Retrieves the results of the executed PMML model Execute the class instance to send the PMML invocation request to KIE Server. Alternatively, you can use JMS and REST interfaces to send the ApplyPmmlModelCommand command to KIE Server. 
For REST requests, you use the ApplyPmmlModelCommand command as a POST request to http://SERVER:PORT/kie-server/services/rest/server/containers/instances/{containerId} in JSON, JAXB, or XStream request format. Example POST endpoint Example JSON request body { "commands": [ { "apply-pmml-model-command": { "outIdentifier": null, "packageName": null, "hasMining": false, "requestData": { "correlationId": "123", "modelName": "SimpleScorecard", "source": null, "requestParams": [ { "correlationId": "123", "name": "param1", "type": "java.lang.Double", "value": "10.0" }, { "correlationId": "123", "name": "param2", "type": "java.lang.Double", "value": "15.0" } ] } } } ] } Example curl request with endpoint and body Example JSON response { "results" : [ { "value" : {"org.kie.api.pmml.DoubleFieldOutput":{ "value" : 40.8, "correlationId" : "123", "segmentationId" : null, "segmentId" : null, "name" : "OverallScore", "displayValue" : "OverallScore", "weight" : 1.0 }}, "key" : "OverallScore" }, { "value" : {"org.kie.api.pmml.PMML4Result":{ "resultVariables" : { "OverallScore" : { "value" : 40.8, "correlationId" : "123", "segmentationId" : null, "segmentId" : null, "name" : "OverallScore", "displayValue" : "OverallScore", "weight" : 1.0 }, "ScoreCard" : { "modelName" : "SimpleScorecard", "score" : 40.8, "holder" : { "modelName" : "SimpleScorecard", "correlationId" : "123", "voverallScore" : null, "moverallScore" : true, "vparam1" : 10.0, "mparam1" : false, "vparam2" : 15.0, "mparam2" : false }, "enableRC" : true, "pointsBelow" : true, "ranking" : { "reasonCh1" : 5.0, "reasonCh2" : -6.0 } } }, "correlationId" : "123", "segmentationId" : null, "segmentId" : null, "segmentIndex" : 0, "resultCode" : "OK", "resultObjectName" : null }}, "key" : "results" } ], "facts" : [ ] } | [
"<!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml-dependencies</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{rhpam.version}</version> </dependencies> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency>",
"<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>",
"KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( \"org.acme\", \"my-kjar\", \"1.0.0\" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId );",
"KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer();",
"PMMLRuntime pmmlRuntime = KieRuntimeFactory.of(kieContainer.getKieBase()).get(PMMLRuntime.class);",
"PMMLRequestData pmmlRequestData = new PMMLRequestData({correlation_id}, {model_name}); pmmlRequestData.addRequestParam({parameter_name}, {parameter_value})",
"PMMLContext pmmlContext = new PMMLContextImpl(pmmlRequestData);",
"PMML4Result pmml4Result = pmmlRuntime.evaluate({model_name}, pmmlContext);",
"<!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{rhpam.version}</version> </dependencies> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency>",
"<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>",
"KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( \"org.acme\", \"my-kjar\", \"1.0.0\" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId );",
"KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer();",
"public class PMMLRequestData { private String correlationId; 1 private String modelName; 2 private String source; 3 private List<ParameterInfo<?>> requestParams; 4 }",
"public class PMML4Result { private String correlationId; private String segmentationId; 1 private String segmentId; 2 private int segmentIndex; 3 private String resultCode; 4 private Map<String, Object> resultVariables; 5 }",
"public <T> Optional<T> getResultValue(String objName, String objField, Class<T> clazz, Object...params) public Object getResultValue(String objName, String objField, Object...params)",
"public class ParameterInfo<T> { 1 private String correlationId; private String name; 2 private String capitalizedName; private Class<T> type; 3 private T value; 4 }",
"public void executeModel(KieBase kbase, Map<String,Object> variables, String modelName, String correlationId, String modelPkgName) { RuleUnitExecutor executor = RuleUnitExecutor.create().bind(kbase); PMMLRequestData request = new PMMLRequestData(correlationId, modelName); PMML4Result resultHolder = new PMML4Result(correlationId); variables.entrySet().forEach( es -> { request.addRequestParam(es.getKey(), es.getValue()); }); DataSource<PMMLRequestData> requestData = executor.newDataSource(\"request\"); DataSource<PMML4Result> resultData = executor.newDataSource(\"results\"); DataSource<PMMLData> internalData = executor.newDataSource(\"pmmlData\"); requestData.insert(request); resultData.insert(resultHolder); List<String> possiblePackageNames = calculatePossiblePackageNames(modelName, modelPkgName); Class<? extends RuleUnit> ruleUnitClass = getStartingRuleUnit(\"RuleUnitIndicator\", (InternalKnowledgeBase)kbase, possiblePackageNames); if (ruleUnitClass != null) { executor.run(ruleUnitClass); if ( \"OK\".equals(resultHolder.getResultCode()) ) { // extract result variables here } } } protected Class<? extends RuleUnit> getStartingRuleUnit(String startingRule, InternalKnowledgeBase ikb, List<String> possiblePackages) { RuleUnitRegistry unitRegistry = ikb.getRuleUnitRegistry(); Map<String,InternalKnowledgePackage> pkgs = ikb.getPackagesMap(); RuleImpl ruleImpl = null; for (String pkgName: possiblePackages) { if (pkgs.containsKey(pkgName)) { InternalKnowledgePackage pkg = pkgs.get(pkgName); ruleImpl = pkg.getRule(startingRule); if (ruleImpl != null) { RuleUnitDescr descr = unitRegistry.getRuleUnitFor(ruleImpl).orElse(null); if (descr != null) { return descr.getRuleUnitClass(); } } } } return null; } protected List<String> calculatePossiblePackageNames(String modelId, String...knownPackageNames) { List<String> packageNames = new ArrayList<>(); String javaModelId = modelId.replaceAll(\"\\\\s\",\"\"); if (knownPackageNames != null && knownPackageNames.length > 0) { for (String knownPkgName: knownPackageNames) { packageNames.add(knownPkgName + \".\" + javaModelId); } } String basePkgName = PMML4UnitImpl.DEFAULT_ROOT_PACKAGE+\".\"+javaModelId; packageNames.add(basePkgName); return packageNames; }",
"public void executeModel(KieBase kbase, Map<String,Object> variables, String modelName, String correlationId, String modelPkgName) { RuleUnitExecutor executor = RuleUnitExecutor.create().bind(kbase); PMMLRequestData request = new PMMLRequestData(correlationId, modelName); PMML4Result resultHolder = new PMML4Result(correlationId); variables.entrySet().forEach( es -> { request.addRequestParam(es.getKey(), es.getValue()); }); DataSource<PMMLRequestData> requestData = executor.newDataSource(\"request\"); DataSource<PMML4Result> resultData = executor.newDataSource(\"results\"); DataSource<PMMLData> internalData = executor.newDataSource(\"pmmlData\"); requestData.insert(request); resultData.insert(resultHolder); List<String> possiblePackageNames = calculatePossiblePackageNames(modelName, modelPkgName); Class<? extends RuleUnit> ruleUnitClass = getStartingRuleUnit(\"RuleUnitIndicator\", (InternalKnowledgeBase)kbase, possiblePackageNames); if (ruleUnitClass != null) { executor.run(ruleUnitClass); if ( \"OK\".equals(resultHolder.getResultCode()) ) { // extract result variables here } } } protected Class<? extends RuleUnit> getStartingRuleUnit(String startingRule, InternalKnowledgeBase ikb, List<String> possiblePackages) { RuleUnitRegistry unitRegistry = ikb.getRuleUnitRegistry(); Map<String,InternalKnowledgePackage> pkgs = ikb.getPackagesMap(); RuleImpl ruleImpl = null; for (String pkgName: possiblePackages) { if (pkgs.containsKey(pkgName)) { InternalKnowledgePackage pkg = pkgs.get(pkgName); ruleImpl = pkg.getRule(startingRule); if (ruleImpl != null) { RuleUnitDescr descr = unitRegistry.getRuleUnitFor(ruleImpl).orElse(null); if (descr != null) { return descr.getRuleUnitClass(); } } } } return null; } protected List<String> calculatePossiblePackageNames(String modelId, String...knownPackageNames) { List<String> packageNames = new ArrayList<>(); String javaModelId = modelId.replaceAll(\"\\\\s\",\"\"); if (knownPackageNames != null && knownPackageNames.length > 0) { for (String knownPkgName: knownPackageNames) { packageNames.add(knownPkgName + \".\" + javaModelId); } } String basePkgName = PMML4UnitImpl.DEFAULT_ROOT_PACKAGE+\".\"+javaModelId; packageNames.add(basePkgName); return packageNames; }",
"public void executeModel(KieBase kbase, Map<String,Object> variables, String modelName, String modelPkgName, String correlationId) { PMML4ExecutionHelper helper = PMML4ExecutionHelperFactory.getExecutionHelper(modelName, kbase); helper.addPossiblePackageName(modelPkgName); PMMLRequestData request = new PMMLRequestData(correlationId, modelName); variables.entrySet().forEach(entry -> { request.addRequestParam(entry.getKey(), entry.getValue); }); PMML4Result resultHolder = helper.submitRequest(request); if (\"OK\".equals(resultHolder.getResultCode)) { // extract result variables here } }",
"public static PMML4ExecutionHelper getExecutionHelper(String modelName, KieBase kbase) public static PMML4ExecutionHelper getExecutionHelper(String modelName, KieBase kbase, boolean includeMiningDataSources)",
"public static PMML4ExecutionHelper getExecutionHelper(String modelName, String classPath, KieBaseConfiguration kieBaseConf) public static PMML4ExecutionHelper getExecutionHelper(String modelName,String classPath, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources)",
"public static PMML4ExecutionHelper getExecutionHelper(String modelName, byte[] content, KieBaseConfiguration kieBaseConf) public static PMML4ExecutionHelper getExecutionHelper(String modelName, byte[] content, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources)",
"public static PMML4ExecutionHelper getExecutionHelper(String modelName, Resource resource, KieBaseConfiguration kieBaseConf) public static PMML4ExecutionHelper getExecutionHelper(String modelName, Resource resource, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources)",
"<!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{rhpam.version}</version> </dependencies> <!-- Required for the KIE Server Java client API --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency>",
"<!-- Required for the PMML compiler --> <dependency> <groupId>org.drools</groupId> <artifactId>kie-pmml-dependencies</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required for the KIE public API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{rhpam.version}</version> </dependencies> <!-- Required for the KIE Server Java client API --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- Required if not using classpath KIE container --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>USD{rhpam.version}</version> </dependency>",
"<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>",
"KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices.newReleaseId( \"org.acme\", \"my-kjar\", \"1.0.0\" ); KieContainer kieContainer = kieServices.newKieContainer( releaseId );",
"KieServices kieServices = KieServices.Factory.get(); KieContainer kieContainer = kieServices.getKieClasspathContainer();",
"public class ApplyScorecardModel { private static final ReleaseId releaseId = new ReleaseId(\"org.acme\",\"my-kjar\",\"1.0.0\"); private static final String containerId = \"SampleModelContainer\"; private static KieCommands commandFactory; private static ClassLoader kjarClassLoader; 1 private RuleServicesClient serviceClient; 2 // Attributes specific to your class instance private String rankedFirstCode; private Double score; // Initialization of non-final static attributes static { commandFactory = KieServices.Factory.get().getCommands(); // Specifications for kjarClassLoader, if used KieMavenRepository kmp = KieMavenRepository.getMavenRepository(); File artifactFile = kmp.resolveArtifact(releaseId).getFile(); if (artifactFile != null) { URL urls[] = new URL[1]; try { urls[0] = artifactFile.toURI().toURL(); classLoader = new KieURLClassLoader(urls,PMML4Result.class.getClassLoader()); } catch (MalformedURLException e) { logger.error(\"Error getting classLoader for \"+containerId); logger.error(e.getMessage()); } } else { logger.warn(\"Did not find the artifact file for \"+releaseId.toString()); } } public ApplyScorecardModel(KieServicesConfiguration kieConfig) { KieServicesClient clientFactory = KieServicesFactory.newKieServicesClient(kieConfig); serviceClient = clientFactory.getServicesClient(RuleServicesClient.class); } // Getters and setters // Method for executing the PMML model on KIE Server public void applyModel(String occupation, int age) { PMMLRequestData input = new PMMLRequestData(\"1234\",\"SampleModelName\"); 3 input.addRequestParam(new ParameterInfo(\"1234\",\"occupation\",String.class,occupation)); input.addRequestParam(new ParameterInfo(\"1234\",\"age\",Integer.class,age)); CommandFactoryServiceImpl cf = (CommandFactoryServiceImpl)commandFactory; ApplyPmmlModelCommand command = (ApplyPmmlModelCommand) cf.newApplyPmmlModel(request); 4 ServiceResponse<ExecutionResults> results = ruleClient.executeCommandsWithResults(CONTAINER_ID, command); 5 if (results != null) { 6 PMML4Result resultHolder = (PMML4Result)results.getResult().getValue(\"results\"); if (resultHolder != null && \"OK\".equals(resultHolder.getResultCode())) { this.score = resultHolder.getResultValue(\"ScoreCard\",\"score\",Double.class).get(); Map<String,Object> rankingMap = (Map<String,Object>)resultHolder.getResultValue(\"ScoreCard\",\"ranking\"); if (rankingMap != null && !rankingMap.isEmpty()) { this.rankedFirstCode = rankingMap.keySet().iterator().next(); } } } } }",
"http://localhost:8080/kie-server/services/rest/server/containers/instances/SampleModelContainer",
"{ \"commands\": [ { \"apply-pmml-model-command\": { \"outIdentifier\": null, \"packageName\": null, \"hasMining\": false, \"requestData\": { \"correlationId\": \"123\", \"modelName\": \"SimpleScorecard\", \"source\": null, \"requestParams\": [ { \"correlationId\": \"123\", \"name\": \"param1\", \"type\": \"java.lang.Double\", \"value\": \"10.0\" }, { \"correlationId\": \"123\", \"name\": \"param2\", \"type\": \"java.lang.Double\", \"value\": \"15.0\" } ] } } } ] }",
"curl -X POST \"http://localhost:8080/kie-server/services/rest/server/containers/instances/SampleModelContainer\" -H \"accept: application/json\" -H \"content-type: application/json\" -d \"{ \\\"commands\\\": [ { \\\"apply-pmml-model-command\\\": { \\\"outIdentifier\\\": null, \\\"packageName\\\": null, \\\"hasMining\\\": false, \\\"requestData\\\": { \\\"correlationId\\\": \\\"123\\\", \\\"modelName\\\": \\\"SimpleScorecard\\\", \\\"source\\\": null, \\\"requestParams\\\": [ { \\\"correlationId\\\": \\\"123\\\", \\\"name\\\": \\\"param1\\\", \\\"type\\\": \\\"java.lang.Double\\\", \\\"value\\\": \\\"10.0\\\" }, { \\\"correlationId\\\": \\\"123\\\", \\\"name\\\": \\\"param2\\\", \\\"type\\\": \\\"java.lang.Double\\\", \\\"value\\\": \\\"15.0\\\" } ] } } } ]}\"",
"{ \"results\" : [ { \"value\" : {\"org.kie.api.pmml.DoubleFieldOutput\":{ \"value\" : 40.8, \"correlationId\" : \"123\", \"segmentationId\" : null, \"segmentId\" : null, \"name\" : \"OverallScore\", \"displayValue\" : \"OverallScore\", \"weight\" : 1.0 }}, \"key\" : \"OverallScore\" }, { \"value\" : {\"org.kie.api.pmml.PMML4Result\":{ \"resultVariables\" : { \"OverallScore\" : { \"value\" : 40.8, \"correlationId\" : \"123\", \"segmentationId\" : null, \"segmentId\" : null, \"name\" : \"OverallScore\", \"displayValue\" : \"OverallScore\", \"weight\" : 1.0 }, \"ScoreCard\" : { \"modelName\" : \"SimpleScorecard\", \"score\" : 40.8, \"holder\" : { \"modelName\" : \"SimpleScorecard\", \"correlationId\" : \"123\", \"voverallScore\" : null, \"moverallScore\" : true, \"vparam1\" : 10.0, \"mparam1\" : false, \"vparam2\" : 15.0, \"mparam2\" : false }, \"enableRC\" : true, \"pointsBelow\" : true, \"ranking\" : { \"reasonCh1\" : 5.0, \"reasonCh2\" : -6.0 } } }, \"correlationId\" : \"123\", \"segmentationId\" : null, \"segmentId\" : null, \"segmentIndex\" : 0, \"resultCode\" : \"OK\", \"resultObjectName\" : null }}, \"key\" : \"results\" } ], \"facts\" : [ ] }"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/pmml-invocation-options-con_pmml-models |
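The ApplyScorecardModel class in the procedure above takes a KieServicesConfiguration in its constructor, but the procedure does not show how that configuration is built. The following minimal Java sketch is one way to wire it up and invoke the model. The KIE Server URL, the credentials, the SKYDIVER/35 input values, and the getScore() accessor (implied by the "Getters and setters" placeholder in the class) are illustrative assumptions rather than values taken from this document.
import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;

public class ApplyScorecardModelRunner {

    public static void main(String[] args) {
        // KIE Server REST endpoint and credentials; replace with your own values
        String serverUrl = "http://localhost:8080/kie-server/services/rest/server";
        KieServicesConfiguration kieConfig =
                KieServicesFactory.newRestConfiguration(serverUrl, "kieserverUser", "password1!");

        // Use JSON marshalling for requests and responses
        kieConfig.setMarshallingFormat(MarshallingFormat.JSON);

        // Build the client wrapper defined in the procedure above and apply the model
        ApplyScorecardModel scorecard = new ApplyScorecardModel(kieConfig);
        scorecard.applyModel("SKYDIVER", 35);

        // Read back the score extracted from the PMML4Result
        System.out.println("Score: " + scorecard.getScore());
    }
}
Because ApplyScorecardModel resolves the KJAR through KieMavenRepository, run the sketch in an environment where the org.acme:my-kjar:1.0.0 artifact is available in the local Maven repository, and make sure the user you supply has the kie-server role.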
28.4. Configuring Persistent Memory for use in Device DAX mode | 28.4. Configuring Persistent Memory for use in Device DAX mode Device DAX ( devdax ) provides a means for applications to directly access storage, without the involvement of a file system. The benefit of device DAX is that it provides a guaranteed fault granularity, which can be configured using the --align option with the ndctl utility: The given command ensures that the operating system would fault in 2MiB pages at a time. For the Intel 64 and AMD64 architecture, the following fault granularities are supported: 4KiB 2MiB 1GiB Device DAX nodes ( /dev/dax N.M ) only supports the following system call: open() close() mmap() fallocate() read() and write() variants are not supported because the use case is tied to persistent memory programming. | [
"ndctl create-namespace --force --reconfig= namespace0.0 --mode=devdax --align= 2M"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/configuring-persistent-memory-for-use-in-device-dax-mode |
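The example above selects the 2MiB fault granularity; the other supported granularities are chosen the same way through the --align option. A brief sketch, reusing namespace0.0 from the example above as a stand-in for whichever namespace you are reconfiguring:
# Reconfigure the namespace in device DAX mode with a 1GiB fault granularity
ndctl create-namespace --force --reconfig=namespace0.0 --mode=devdax --align=1G
# List namespaces to confirm the new configuration
ndctl list --namespaces
An application then opens and mmap()s the resulting /dev/daxN.M node directly; because read() and write() are not supported, all data access goes through the mapping.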
5.4.11. Renaming Logical Volumes | 5.4.11. Renaming Logical Volumes To rename an existing logical volume, use the lvrename command. Either of the following commands renames logical volume lvold in volume group vg02 to lvnew . Renaming the root logical volume requires additional reconfiguration. For information on renaming a root volume, see How to rename root volume group or logical volume in Red Hat Enterprise Linux . For more information on activating logical volumes on individual nodes in a cluster, see Section 5.7, "Activating Logical Volumes on Individual Nodes in a Cluster" . | [
"lvrename /dev/vg02/lvold /dev/vg02/lvnew",
"lvrename vg02 lvold lvnew"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lv_rename |
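For completeness, a small sketch that combines either rename form with a quick check of the result; vg02, lvold, and lvnew are the same placeholder names used above:
# Rename using full device paths
lvrename /dev/vg02/lvold /dev/vg02/lvnew
# Equivalent short form (run one form or the other, not both):
#   lvrename vg02 lvold lvnew
# Confirm that the logical volume is now listed under its new name
lvs vg02
If the volume holds a file system that is referenced by its old device path, for example in /etc/fstab, update that reference to the new name after renaming.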
Chapter 2. Architectures | Chapter 2. Architectures
Red Hat Enterprise Linux 7.5 is distributed with the kernel version 3.10.0-862, which provides support for the following architectures: [1]
64-bit AMD
64-bit Intel
IBM POWER7+ and POWER8 (big endian) [2]
IBM POWER8 (little endian) [3]
IBM Z [4]
Support for Architectures in the kernel-alt Packages
Red Hat Enterprise Linux 7.5 is distributed with the kernel-alt packages, which include kernel version 4.14. This kernel version provides support for the following architectures:
64-bit ARM
IBM POWER9 (little endian) [5]
IBM Z
The following table provides an overview of architectures supported by the two kernel versions available in Red Hat Enterprise Linux 7.5:
Table 2.1. Architectures Supported in Red Hat Enterprise Linux 7.5
Architecture / Kernel version 3.10 / Kernel version 4.14
64-bit AMD and Intel / yes / no
64-bit ARM / no / yes
IBM POWER7 (big endian) / yes / no
IBM POWER8 (big endian) / yes / no
IBM POWER8 (little endian) / yes / no
IBM POWER9 (little endian) / no / yes
IBM z System / yes [a] / yes (Structure A)
[a] The 3.10 kernel version does not support KVM virtualization and containers on IBM Z. Both of these features are supported on the 4.14 kernel on IBM Z - this offering is also referred to as Structure A.
For more information, see Chapter 19, Red Hat Enterprise Linux 7.5 for ARM and Chapter 20, Red Hat Enterprise Linux 7.5 for IBM Power LE (POWER9).
[1] Note that the Red Hat Enterprise Linux 7.5 installation is supported only on 64-bit hardware. Red Hat Enterprise Linux 7.5 is able to run 32-bit operating systems, including previous versions of Red Hat Enterprise Linux, as virtual machines.
[2] Red Hat Enterprise Linux 7.5 POWER8 (big endian) are currently supported as KVM guests on Red Hat Enterprise Linux 7.5 POWER8 systems that run the KVM hypervisor, and on PowerVM.
[3] Red Hat Enterprise Linux 7.5 POWER8 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.5 POWER8 systems that run the KVM hypervisor, and on PowerVM. In addition, Red Hat Enterprise Linux 7.5 POWER8 (little endian) guests are supported on Red Hat Enterprise Linux 7.5 POWER9 systems that run the KVM hypervisor in POWER8-compatibility mode on version 4.14 kernel using the kernel-alt package.
[4] Red Hat Enterprise Linux 7.5 for IBM Z (both the 3.10 kernel version and the 4.14 kernel version) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.5 for IBM Z hosts that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package.
[5] Red Hat Enterprise Linux 7.5 POWER9 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.5 POWER9 systems that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package, and on PowerVM. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/chap-red_hat_enterprise_linux-7.5_release_notes-architectures
Chapter 9. Enabling and disabling automatic rule updates for Insights | Chapter 9. Enabling and disabling automatic rule updates for Insights By default, automatic collection rule updates are enabled for Insights. You can edit the client configuration file to disable them or re-enable them. 9.1. Disabling automatic rule updates for Insights You can disable the automatic collection rule updates for Red Hat Insights for Red Hat Enterprise Linux. If you do so, you risk using outdated rule definition files and not getting the most recent validation updates. Prerequisites Root-level access to your system. Automatic rule updates are enabled. Procedure Open the /etc/insights-client/insights-client.conf file with an editor. Locate the line that contains #auto_update=True. Remove the # and change True to False. Save and close the /etc/insights-client/insights-client.conf file. 9.2. Enabling automatic rule updates for Insights You can re-enable the automatic collection rule updates for Red Hat Insights for Red Hat Enterprise Linux, if you previously disabled updates. By default, automatic rule update is enabled. Prerequisites Root-level access to your system. Automatic rule collection is disabled. Procedure Open the /etc/insights-client/insights-client.conf file with an editor. Locate the line that contains auto_update=False. Change False to True. Save and close the /etc/insights-client/insights-client.conf file. | [
"#auto_update=True",
"auto_update=False",
"auto_update=False",
"auto_update=True"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/client_configuration_guide_for_red_hat_insights/assembly-client-data-auto-update-rules |
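If you prefer not to open an editor, the same change can be scripted. The sketch below assumes the auto_update line is present in /etc/insights-client/insights-client.conf as described above and uses sed, run as root, as one possible way to toggle it:
# Disable automatic collection rule updates (handles both the commented and uncommented forms of the line)
sed -i -E 's/^#?auto_update=.*/auto_update=False/' /etc/insights-client/insights-client.conf
# Re-enable them later
sed -i -E 's/^#?auto_update=.*/auto_update=True/' /etc/insights-client/insights-client.conf
# Confirm the current setting
grep -E '^#?auto_update' /etc/insights-client/insights-client.conf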
Chapter 6. Updating | Chapter 6. Updating 6.1. Updating OpenShift Virtualization Learn how to keep OpenShift Virtualization updated and compatible with OpenShift Container Platform. 6.1.1. About updating OpenShift Virtualization When you install OpenShift Virtualization, you select an update channel and an approval strategy. The update channel determines the versions that OpenShift Virtualization will be updated to. The approval strategy setting determines whether updates occur automatically or require manual approval. Both settings can impact supportability. 6.1.1.1. Recommended settings To maintain a supportable environment, use the following settings: Update channel: stable Approval strategy: Automatic With these settings, the update process automatically starts when a new version of the Operator is available in the stable channel. This ensures that your OpenShift Virtualization and OpenShift Container Platform versions remain compatible, and that your version of OpenShift Virtualization is suitable for production environments. Note Each minor version of OpenShift Virtualization is supported only if you run the corresponding OpenShift Container Platform version. For example, you must run OpenShift Virtualization 4.14 on OpenShift Container Platform 4.14. 6.1.1.2. What to expect The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes. Updating OpenShift Virtualization does not interrupt network connections. Data volumes and their associated persistent volume claims are preserved during an update. Important If you have virtual machines running that use hostpath provisioner storage, they cannot be live migrated and might block an OpenShift Container Platform cluster update. As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster update. Remove the evictionStrategy: LiveMigrate field and set the runStrategy field to Always . 6.1.1.3. How updates work Operator Lifecycle Manager (OLM) manages the lifecycle of the OpenShift Virtualization Operator. The Marketplace Operator, which is deployed during OpenShift Container Platform installation, makes external Operators available to your cluster. OLM provides z-stream and minor version updates for OpenShift Virtualization. Minor version updates become available when you update OpenShift Container Platform to the minor version. You cannot update OpenShift Virtualization to the minor version without first updating OpenShift Container Platform. 6.1.1.4. RHEL 9 compatibility OpenShift Virtualization 4.14 is based on Red Hat Enterprise Linux (RHEL) 9. You can update to OpenShift Virtualization 4.14 from a version that was based on RHEL 8 by following the standard OpenShift Virtualization update procedure. No additional steps are required. As in versions, you can perform the update without disrupting running workloads. OpenShift Virtualization 4.14 supports live migration from RHEL 8 nodes to RHEL 9 nodes. 6.1.1.4.1. RHEL 9 machine type All VM templates that are included with OpenShift Virtualization now use the RHEL 9 machine type by default: machineType: pc-q35-rhel9.<y>.0 , where <y> is a single digit corresponding to the latest minor version of RHEL 9. For example, the value pc-q35-rhel9.2.0 is used for RHEL 9.2. Updating OpenShift Virtualization does not change the machineType value of any existing VMs. These VMs continue to function as they did before the update. 
You can optionally change a VM's machine type so that it can benefit from RHEL 9 improvements. Important Before you change a VM's machineType value, you must shut down the VM. 6.1.2. Monitoring update status To monitor the status of a OpenShift Virtualization Operator update, watch the cluster service version (CSV) PHASE . You can also monitor the CSV conditions in the web console or by running the command provided here. Note The PHASE and conditions values are approximations that are based on available information. Prerequisites Log in to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Run the following command: USD oc get csv -n openshift-cnv Review the output, checking the PHASE field. For example: Example output VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command: USD oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \ -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}' A successful upgrade results in the following output: Example output ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully 6.1.3. VM workload updates When you update OpenShift Virtualization, virtual machine workloads, including libvirt , virt-launcher , and qemu , update automatically if they support live migration. Note Each virtual machine has a virt-launcher pod that runs the virtual machine instance (VMI). The virt-launcher pod runs an instance of libvirt , which is used to manage the virtual machine (VM) process. You can configure how workloads are updated by editing the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource (CR). There are two available workload update methods: LiveMigrate and Evict . Because the Evict method shuts down VMI pods, only the LiveMigrate update strategy is enabled by default. When LiveMigrate is the only update strategy enabled: VMIs that support live migration are migrated during the update process. The VM guest moves into a new pod with the updated components enabled. VMIs that do not support live migration are not disrupted or updated. If a VMI has the LiveMigrate eviction strategy but does not support live migration, it is not updated. If you enable both LiveMigrate and Evict : VMIs that support live migration use the LiveMigrate update strategy. VMIs that do not support live migration use the Evict update strategy. If a VMI is controlled by a VirtualMachine object that has runStrategy: Always set, a new VMI is created in a new pod with updated components. Migration attempts and timeouts When updating workloads, live migration fails if a pod is in the Pending state for the following periods: 5 minutes If the pod is pending because it is Unschedulable . 15 minutes If the pod is stuck in the pending state for any reason. When a VMI fails to migrate, the virt-controller tries to migrate it again. It repeats this process until all migratable VMIs are running on new virt-launcher pods. If a VMI is improperly configured, however, these attempts can repeat indefinitely. Note Each attempt corresponds to a migration object. 
Only the five most recent attempts are held in a buffer. This prevents migration objects from accumulating on the system while retaining information for debugging. 6.1.3.1. Configuring workload update methods You can configure workload update methods by editing the HyperConverged custom resource (CR). Prerequisites To use live migration as an update method, you must first enable live migration in the cluster. Note If a VirtualMachineInstance CR contains evictionStrategy: LiveMigrate and the virtual machine instance (VMI) does not support live migration, the VMI will not update. Procedure To open the HyperConverged CR in your default editor, run the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the workloadUpdateStrategy stanza of the HyperConverged CR. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: "1m0s" 5 # ... 1 The methods that can be used to perform automated workload updates. The available values are LiveMigrate and Evict . If you enable both options as shown in this example, updates use LiveMigrate for VMIs that support live migration and Evict for any VMIs that do not support live migration. To disable automatic workload updates, you can either remove the workloadUpdateStrategy stanza or set workloadUpdateMethods: [] to leave the array empty. 2 The least disruptive update method. VMIs that support live migration are updated by migrating the virtual machine (VM) guest into a new pod with the updated components enabled. If LiveMigrate is the only workload update method listed, VMIs that do not support live migration are not disrupted or updated. 3 A disruptive method that shuts down VMI pods during upgrade. Evict is the only update method available if live migration is not enabled in the cluster. If a VMI is controlled by a VirtualMachine object that has runStrategy: Always configured, a new VMI is created in a new pod with updated components. 4 The number of VMIs that can be forced to be updated at a time by using the Evict method. This does not apply to the LiveMigrate method. 5 The interval to wait before evicting the next batch of workloads. This does not apply to the LiveMigrate method. Note You can configure live migration limits and timeouts by editing the spec.liveMigrationConfig stanza of the HyperConverged CR. To apply your changes, save and exit the editor. 6.1.3.2. Viewing outdated VM workloads You can view a list of outdated virtual machine (VM) workloads by using the CLI. Note If there are outdated virtualization pods in your cluster, the OutdatedVirtualMachineInstanceWorkloads alert fires. Procedure To view a list of outdated virtual machine instances (VMIs), run the following command: USD oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces Note To ensure that VMIs update automatically, configure workload updates. 6.1.4. Control Plane Only updates Every even-numbered minor version of OpenShift Container Platform, including 4.10 and 4.12, is an Extended Update Support (EUS) version. However, because Kubernetes design mandates serial minor version updates, you cannot directly update from one EUS version to the next. After you update from the source EUS version to the next odd-numbered minor version, you must sequentially update OpenShift Virtualization to all z-stream releases of that minor version that are on your update path.
When you have upgraded to the latest applicable z-stream version, you can then update OpenShift Container Platform to the target EUS minor version. When the OpenShift Container Platform update succeeds, the corresponding update for OpenShift Virtualization becomes available. You can now update OpenShift Virtualization to the target EUS version. For more information about EUS versions, see the Red Hat OpenShift Container Platform Life Cycle Policy . 6.1.4.1. Prerequisites Before beginning a Control Plane Only update, you must: Pause worker nodes' machine config pools before you start a Control Plane Only update so that the workers are not rebooted twice. Disable automatic workload updates before you begin the update process. This is to prevent OpenShift Virtualization from migrating or evicting your virtual machines (VMs) until you update to your target EUS version. Note By default, OpenShift Virtualization automatically updates workloads, such as the virt-launcher pod, when you update the OpenShift Virtualization Operator. You can configure this behavior in the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource. Learn more about Performing a Control Plane Only update . 6.1.4.2. Preventing workload updates during a Control Plane Only update When you update from one Extended Update Support (EUS) version to the next, you must manually disable automatic workload updates to prevent OpenShift Virtualization from migrating or evicting workloads during the update process. Prerequisites You are running an EUS version of OpenShift Container Platform and want to update to the next EUS version. You have not yet updated to the odd-numbered version in between. You read "Preparing to perform a Control Plane Only update" and learned the caveats and requirements that pertain to your OpenShift Container Platform cluster. You paused the worker nodes' machine config pools as directed by the OpenShift Container Platform documentation. It is recommended that you use the default Automatic approval strategy. If you use the Manual approval strategy, you must approve all pending updates in the web console. For more details, refer to the "Manually approving a pending Operator update" section. Procedure Run the following command and record the workloadUpdateMethods configuration: USD oc get kv kubevirt-kubevirt-hyperconverged \ -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}' Turn off all workload update methods by running the following command: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op":"replace","path":"/spec/workloadUpdateStrategy/workloadUpdateMethods", "value":[]}]' Example output hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched Ensure that the HyperConverged Operator is Upgradeable before you continue. Enter the following command and monitor the output: USD oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions" Example 6.1.
Example output [ { "lastTransitionTime": "2022-12-09T16:29:11Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "True", "type": "ReconcileComplete" }, { "lastTransitionTime": "2022-12-09T20:30:10Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "True", "type": "Available" }, { "lastTransitionTime": "2022-12-09T20:30:10Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "False", "type": "Progressing" }, { "lastTransitionTime": "2022-12-09T16:39:11Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "False", "type": "Degraded" }, { "lastTransitionTime": "2022-12-09T20:30:10Z", "message": "Reconcile completed successfully", "observedGeneration": 3, "reason": "ReconcileCompleted", "status": "True", "type": "Upgradeable" 1 } ] 1 The OpenShift Virtualization Operator has the Upgradeable status. Manually update your cluster from the source EUS version to the next minor version of OpenShift Container Platform: USD oc adm upgrade Verification Check the current version by running the following command: USD oc get clusterversion Note Updating OpenShift Container Platform to the next version is a prerequisite for updating OpenShift Virtualization. For more details, refer to the "Updating clusters" section of the OpenShift Container Platform documentation. Update OpenShift Virtualization. With the default Automatic approval strategy, OpenShift Virtualization automatically updates to the corresponding version after you update OpenShift Container Platform. If you use the Manual approval strategy, approve the pending updates by using the web console. Monitor the OpenShift Virtualization update by running the following command: USD oc get csv -n openshift-cnv Update OpenShift Virtualization to every z-stream version that is available for the non-EUS minor version, monitoring each update by running the command shown in the previous step. Confirm that OpenShift Virtualization successfully updated to the latest z-stream release of the non-EUS version by running the following command: USD oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.versions" Example output [ { "name": "operator", "version": "4.14.11" } ] Wait until the HyperConverged Operator has the Upgradeable status before you perform the next update. Enter the following command and monitor the output: USD oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions" Update OpenShift Container Platform to the target EUS version. Confirm that the update succeeded by checking the cluster version: USD oc get clusterversion Update OpenShift Virtualization to the target EUS version. With the default Automatic approval strategy, OpenShift Virtualization automatically updates to the corresponding version after you update OpenShift Container Platform. If you use the Manual approval strategy, approve the pending updates by using the web console. Monitor the OpenShift Virtualization update by running the following command: USD oc get csv -n openshift-cnv The update completes when the VERSION field matches the target EUS version and the PHASE field reads Succeeded .
Restore the workloadUpdateMethods configuration that you recorded from step 1 with the following command: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \ "[{\"op\":\"add\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":{WorkloadUpdateMethodConfig}}]" Example output hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched Verification Check the status of VM migration by running the following command: USD oc get vmim -A Next steps You can now unpause the worker nodes' machine config pools. 6.1.5. Advanced options The stable release channel and the Automatic approval strategy are recommended for most OpenShift Virtualization installations. Use other settings only if you understand the risks. 6.1.5.1. Changing update settings You can change the update channel and approval strategy for your OpenShift Virtualization Operator subscription by using the web console. Prerequisites You have installed the OpenShift Virtualization Operator. You have administrator permissions. Procedure Click Operators Installed Operators . Select OpenShift Virtualization from the list. Click the Subscription tab. In the Subscription details section, click the setting that you want to change. For example, to change the approval strategy from Manual to Automatic , click Manual . In the window that opens, select the new update channel or approval strategy. Click Save . 6.1.5.2. Manual approval strategy If you use the Manual approval strategy, you must manually approve every pending update. If OpenShift Container Platform and OpenShift Virtualization updates are out of sync, your cluster becomes unsupported. To avoid risking the supportability and functionality of your cluster, use the Automatic approval strategy. If you must use the Manual approval strategy, maintain a supportable cluster by approving pending Operator updates as soon as they become available. 6.1.5.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 6.1.6. Additional resources Performing a Control Plane Only update What are Operators? Operator Lifecycle Manager concepts and resources Cluster service versions (CSVs) About live migration Configuring eviction strategies Configuring live migration limits and timeouts | [
"oc get csv -n openshift-cnv",
"VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing",
"oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'",
"ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: \"1m0s\" 5",
"oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces",
"oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}'",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":[]}]'",
"hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched",
"oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.conditions\"",
"[ { \"lastTransitionTime\": \"2022-12-09T16:29:11Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"ReconcileComplete\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"Available\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"False\", \"type\": \"Progressing\" }, { \"lastTransitionTime\": \"2022-12-09T16:39:11Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"False\", \"type\": \"Degraded\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"Upgradeable\" 1 } ]",
"oc adm upgrade",
"oc get clusterversion",
"oc get csv -n openshift-cnv",
"oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.versions\"",
"[ { \"name\": \"operator\", \"version\": \"4.14.11\" } ]",
"oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.conditions\"",
"oc get clusterversion",
"oc get csv -n openshift-cnv",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \"[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"/spec/workloadUpdateStrategy/workloadUpdateMethods\\\", \\\"value\\\":{WorkloadUpdateMethodConfig}}]\"",
"hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched",
"oc get vmim -A"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/virtualization/updating |
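The Control Plane Only procedure above records the workloadUpdateMethods value at the start and restores it at the end through the {WorkloadUpdateMethodConfig} placeholder. The following is a minimal, hedged bash sketch of that save-and-restore pattern, reusing only the oc commands already shown in the chapter; the temporary file name is an assumption, and the recorded value may need to be adjusted to valid JSON before it is patched back.
# Sketch only, not part of the official procedure: save the current value before disabling updates.
oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}' > /tmp/workload-update-methods.txt
# Disable automatic workload updates (same patch as in the chapter).
oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json \
  -p '[{"op":"replace","path":"/spec/workloadUpdateStrategy/workloadUpdateMethods","value":[]}]'
# ... perform the Control Plane Only update steps described above ...
# Restore the recorded value in place of the {WorkloadUpdateMethodConfig} placeholder.
methods="$(cat /tmp/workload-update-methods.txt)"   # for example: ["LiveMigrate"]
oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json \
  -p "[{\"op\":\"add\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\",\"value\":${methods}}]"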
7.46. elfutils | 7.46. elfutils 7.46.1. RHEA-2015:1302 - elfutils bug fix and enhancement update Updated elfutils packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The elfutils packages contain a number of utility programs and libraries related to the creation and maintenance of executable code. Note The elfutils packages have been upgraded to upstream version 0.161, which provides a number of bug fixes and enhancements over the version. (BZ# 1167724 ) Bug Fix BZ# 1167724 The eu-stack utility supports showing inlined frames and it is now able to produce backtraces even for processes that might have some of their on-disk libraries updated or deleted. Improved DWZ compressed DWARF multi-file support with new functions, "dwarf_getalt" and "dwarf_setalt", has been introduced. Support for ARM 64-bit architecture and Red Hat Enterprise Linux for POWER, little endian has been added. The libdw library now supports LZMA-compressed (.ko.xz) kernel modules. Support for ".debug_macro" has been added; new functions has been introduced: "dwarf_getmacros_off", "dwarf_macro_getsrcfiles", "dwarf_macro_getparamcnt", and "dwarf_macro_param". New GNU extensions to the DWARF format are now recognized. New functions have been added to the libdw library: "dwarf_peel_type", "dwarf_cu_getdwarf", "dwarf_cu_die", "dwelf_elf_gnu_debuglink", "dwelf_dwarf_gnu_debugaltlink", "dwelf_elf_gnu_build_id". Users of elfutils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-elfutils |
Chapter 8. Configuring Identity Management for smart card authentication | Chapter 8. Configuring Identity Management for smart card authentication Identity Management (IdM) supports smart card authentication with: User certificates issued by the IdM certificate authority User certificates issued by an external certificate authority You can configure smart card authentication in IdM for both types of certificates. In this scenario, the rootca.pem CA certificate is the file containing the certificate of a trusted external certificate authority. For information about smart card authentication in IdM, see Understanding smart card authentication . For more details on configuring smart card authentication: Configuring the IdM server for smart card authentication Configuring the IdM client for smart card authentication Adding a certificate to a user entry in the IdM Web UI Adding a certificate to a user entry in the IdM CLI Installing tools for managing and using smart cards Storing a certificate on a smart card Logging in to IdM with smart cards Configuring GDM access using smart card authentication Configuring su access using smart card authentication 8.1. Configuring the IdM server for smart card authentication If you want to enable smart card authentication for users whose certificates have been issued by the certificate authority (CA) of the <EXAMPLE.ORG> domain that your Identity Management (IdM) CA trusts, you must obtain the following certificates so that you can add them when running the ipa-advise script that configures the IdM server: The certificate of the root CA that has either issued the certificate for the <EXAMPLE.ORG> CA directly, or through one or more of its sub-CAs. You can download the certificate chain from a web page whose certificate has been issued by the authority. For details, see Steps 1 - 4a in Configuring a browser to enable certificate authentication . The IdM CA certificate. You can obtain the CA certificate from the /etc/ipa/ca.crt file on the IdM server on which an IdM CA instance is running. The certificates of all of the intermediate CAs; that is, intermediate between the <EXAMPLE.ORG> CA and the IdM CA. To configure an IdM server for smart card authentication: Obtain files with the CA certificates in the PEM format. Run the built-in ipa-advise script. Reload the system configuration. Prerequisites You have root access to the IdM server. You have the root CA certificate and all the intermediate CA certificates. Procedure Create a directory in which you will do the configuration: Navigate to the directory: Obtain the relevant CA certificates stored in files in PEM format. If your CA certificate is stored in a file of a different format, such as DER, convert it to PEM format. The IdM Certificate Authority certificate is in PEM format and is located in the /etc/ipa/ca.crt file. Convert a DER file to a PEM file: For convenience, copy the certificates to the directory in which you want to do the configuration: Optional: If you use certificates of external certificate authorities, use the openssl x509 utility to view the contents of the files in the PEM format to check that the Issuer and Subject values are correct: Generate a configuration script with the in-built ipa-advise utility, using the administrator's privileges: The config-server-for-smart-card-auth.sh script performs the following actions: It configures the IdM Apache HTTP Server. It enables Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) on the Key Distribution Center (KDC). 
It configures the IdM Web UI to accept smart card authorization requests. Execute the script, adding the PEM files containing the root CA and sub CA certificates as arguments: Note Ensure that you add the root CA's certificate as an argument before any sub CA certificates and that the CA or sub CA certificates have not expired. Optional: If the certificate authority that issued the user certificate does not provide any Online Certificate Status Protocol (OCSP) responder, you may need to disable OCSP check for authentication to the IdM Web UI: Set the SSLOCSPEnable parameter to off in the /etc/httpd/conf.d/ssl.conf file: Restart the Apache daemon (httpd) for the changes to take effect immediately: Warning Do not disable the OCSP check if you only use user certificates issued by the IdM CA. OCSP responders are part of IdM. For instructions on how to keep the OCSP check enabled, and yet prevent a user certificate from being rejected by the IdM server if it does not contain the information about the location at which the CA that issued the user certificate listens for OCSP service requests, see the SSLOCSPDefaultResponder directive in Apache mod_ssl configuration options . The server is now configured for smart card authentication. Note To enable smart card authentication in the whole topology, run the procedure on each IdM server. 8.2. Using Ansible to configure the IdM server for smart card authentication You can use Ansible to enable smart card authentication for users whose certificates have been issued by the certificate authority (CA) of the <EXAMPLE.ORG> domain that your Identity Management (IdM) CA trusts. To do that, you must obtain the following certificates so that you can use them when running an Ansible playbook with the ipasmartcard_server ansible-freeipa role script: The certificate of the root CA that has either issued the certificate for the <EXAMPLE.ORG> CA directly, or through one or more of its sub-CAs. You can download the certificate chain from a web page whose certificate has been issued by the authority. For details, see Step 4 in Configuring a browser to enable certificate authentication . The IdM CA certificate. You can obtain the CA certificate from the /etc/ipa/ca.crt file on any IdM CA server. The certificates of all of the CAs that are intermediate between the <EXAMPLE.ORG> CA and the IdM CA. Prerequisites You have root access to the IdM server. You know the IdM admin password. You have the root CA certificate, the IdM CA certificate, and all the intermediate CA certificates. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure If your CA certificates are stored in files of a different format, such as DER , convert them to PEM format: The IdM Certificate Authority certificate is in PEM format and is located in the /etc/ipa/ca.crt file. 
Optional: Use the openssl x509 utility to view the contents of the files in the PEM format to check that the Issuer and Subject values are correct: Navigate to your ~/ MyPlaybooks / directory: Create a subdirectory dedicated to the CA certificates: For convenience, copy all the required certificates to the ~/MyPlaybooks/SmartCard/ directory: In your Ansible inventory file, specify the following: The IdM servers that you want to configure for smart card authentication. The IdM administrator password. The paths to the certificates of the CAs in the following order: The root CA certificate file The intermediate CA certificates files The IdM CA certificate file The file can look as follows: Create an install-smartcard-server.yml playbook with the following content: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: The ipasmartcard_server Ansible role performs the following actions: It configures the IdM Apache HTTP Server. It enables Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) on the Key Distribution Center (KDC). It configures the IdM Web UI to accept smart card authorization requests. Optional: If the certificate authority that issued the user certificate does not provide any Online Certificate Status Protocol (OCSP) responder, you may need to disable OCSP check for authentication to the IdM Web UI: Connect to the IdM server as root : Set the SSLOCSPEnable parameter to off in the /etc/httpd/conf.d/ssl.conf file: Restart the Apache daemon (httpd) for the changes to take effect immediately: Warning Do not disable the OCSP check if you only use user certificates issued by the IdM CA. OCSP responders are part of IdM. For instructions on how to keep the OCSP check enabled, and yet prevent a user certificate from being rejected by the IdM server if it does not contain the information about the location at which the CA that issued the user certificate listens for OCSP service requests, see the SSLOCSPDefaultResponder directive in Apache mod_ssl configuration options . The server listed in the inventory file is now configured for smart card authentication. Note To enable smart card authentication in the whole topology, set the hosts variable in the Ansible playbook to ipacluster : Additional resources Sample playbooks using the ipasmartcard_server role in the /usr/share/doc/ansible-freeipa/playbooks/ directory 8.3. Configuring the IdM client for smart card authentication Follow this procedure to configure IdM clients for smart card authentication. The procedure needs to be run on each IdM system, a client or a server, to which you want to connect while using a smart card for authentication. For example, to enable an ssh connection from host A to host B, the script needs to be run on host B. As an administrator, run this procedure to enable smart card authentication using The ssh protocol For details see Configuring SSH access using smart card authentication . The console login The GNOME Display Manager (GDM) The su command This procedure is not required for authenticating to the IdM Web UI. Authenticating to the IdM Web UI involves two hosts, neither of which needs to be an IdM client: The machine on which the browser is running. The machine can be outside of the IdM domain. The IdM server on which httpd is running. The following procedure assumes that you are configuring smart card authentication on an IdM client, not an IdM server. 
For this reason you need two computers: an IdM server to generate the configuration script, and the IdM client on which to run the script. Prerequisites Your IdM server has been configured for smart card authentication, as described in Configuring the IdM server for smart card authentication . You have root access to the IdM server and the IdM client. You have the root CA certificate and all the intermediate CA certificates. You installed the IdM client with the --mkhomedir option to ensure remote users can log in successfully. If you do not create a home directory, the default login location is the root of the directory structure, / . Procedure On an IdM server, generate a configuration script with ipa-advise using the administrator's privileges: The config-client-for-smart-card-auth.sh script performs the following actions: It configures the smart card daemon. It sets the system-wide truststore. It configures the System Security Services Daemon (SSSD) to allow users to authenticate with either their user name and password or with their smart card. For more details on SSSD profile options for smart card authentication, see Smart card authentication options in RHEL . From the IdM server, copy the script to a directory of your choice on the IdM client machine: From the IdM server, copy the CA certificate files in PEM format for convenience to the same directory on the IdM client machine as used in the previous step: On the client machine, execute the script, adding the PEM files containing the CA certificates as arguments: Note Ensure that you add the root CA's certificate as an argument before any sub CA certificates and that the CA or sub CA certificates have not expired. The client is now configured for smart card authentication. 8.4. Using Ansible to configure IdM clients for smart card authentication Follow this procedure to use the ansible-freeipa ipasmartcard_client module to configure specific Identity Management (IdM) clients to permit IdM users to authenticate with a smart card. Run this procedure to enable smart card authentication for IdM users that use any of the following to access IdM: The ssh protocol For details see Configuring SSH access using smart card authentication . The console login The GNOME Display Manager (GDM) The su command Note This procedure is not required for authenticating to the IdM Web UI. Authenticating to the IdM Web UI involves two hosts, neither of which needs to be an IdM client: The machine on which the browser is running. The machine can be outside of the IdM domain. The IdM server on which httpd is running. Prerequisites Your IdM server has been configured for smart card authentication, as described in Using Ansible to configure the IdM server for smart card authentication . You have root access to the IdM server and the IdM client. You have the root CA certificate, the IdM CA certificate, and all the intermediate CA certificates. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica.
Procedure If your CA certificates are stored in files of a different format, such as DER , convert them to PEM format: The IdM CA certificate is in PEM format and is located in the /etc/ipa/ca.crt file. Optional: Use the openssl x509 utility to view the contents of the files in the PEM format to check that the Issuer and Subject values are correct: On your Ansible control node, navigate to your ~/ MyPlaybooks / directory: Create a subdirectory dedicated to the CA certificates: For convenience, copy all the required certificates to the ~/MyPlaybooks/SmartCard/ directory, for example: In your Ansible inventory file, specify the following: The IdM clients that you want to configure for smart card authentication. The IdM administrator password. The paths to the certificates of the CAs in the following order: The root CA certificate file The intermediate CA certificates files The IdM CA certificate file The file can look as follows: Create an install-smartcard-clients.yml playbook with the following content: Save the file. Run the Ansible playbook. Specify the playbook and inventory files: The ipasmartcard_client Ansible role performs the following actions: It configures the smart card daemon. It sets the system-wide truststore. It configures the System Security Services Daemon (SSSD) to allow users to authenticate with either their user name and password or their smart card. For more details on SSSD profile options for smart card authentication, see Smart card authentication options in RHEL . The clients listed in the ipaclients section of the inventory file are now configured for smart card authentication. Note If you have installed the IdM clients with the --mkhomedir option, remote users will be able to log in to their home directories. Otherwise, the default login location is the root of the directory structure, / . Additional resources Sample playbooks using the ipasmartcard_server role in the /usr/share/doc/ansible-freeipa/playbooks/ directory 8.5. Adding a certificate to a user entry in the IdM Web UI Follow this procedure to add an external certificate to a user entry in IdM Web UI. Note Instead of uploading the whole certificate, it is also possible to upload certificate mapping data to a user entry in IdM. User entries containing either full certificates or certificate mapping data can be used in conjunction with corresponding certificate mapping rules to facilitate the configuration of smart card authentication for system administrators. For details, see Certificate mapping rules for configuring authentication . Note If the user's certificate has been issued by the IdM Certificate Authority, the certificate is already stored in the user entry, and you do not need to follow this procedure. Prerequisites You have the certificate that you want to add to the user entry at your disposal. Procedure Log into the IdM Web UI as an administrator if you want to add a certificate to another user. For adding a certificate to your own profile, you do not need the administrator's credentials. Navigate to Users Active users sc_user . Find the Certificate option and click Add . On the command line, display the certificate in the PEM format using the cat utility or a text editor: Copy and paste the certificate from the CLI into the window that has opened in the Web UI. Click Add . Figure 8.1. Adding a new certificate in the IdM Web UI The sc_user entry now contains an external certificate. 8.6. 
Adding a certificate to a user entry in the IdM CLI Follow this procedure to add an external certificate to a user entry in IdM CLI. Note Instead of uploading the whole certificate, it is also possible to upload certificate mapping data to a user entry in IdM. User entries containing either full certificates or certificate mapping data can be used in conjunction with corresponding certificate mapping rules to facilitate the configuration of smart card authentication for system administrators. For details, see Certificate mapping rules for configuring authentication . Note If the user's certificate has been issued by the IdM Certificate Authority, the certificate is already stored in the user entry, and you do not need to follow this procedure. Prerequisites You have the certificate that you want to add to the user entry at your disposal. Procedure Log into the IdM CLI as an administrator if you want to add a certificate to another user: For adding a certificate to your own profile, you do not need the administrator's credentials: Create an environment variable containing the certificate with the header and footer removed and concatenated into a single line, which is the format expected by the ipa user-add-cert command: Note that the certificate in the testuser.crt file must be in the PEM format. Add the certificate to the profile of sc_user using the ipa user-add-cert command: The sc_user entry now contains an external certificate. 8.7. Installing tools for managing and using smart cards Prerequisites The gnutls-utils package is installed. The opensc package is installed. The pcscd service is running. Before you can configure your smart card, you must install the corresponding tools, which can generate certificates and start the pcscd service. Procedure Install the opensc and gnutls-utils packages: Start the pcscd service. Verification Verify that the pcscd service is up and running 8.8. Preparing your smart card and uploading your certificates and keys to your smart card Follow this procedure to configure your smart card with the pkcs15-init tool, which helps you to configure: Erasing your smart card Setting new PINs and optional PIN Unblocking Keys (PUKs) Creating a new slot on the smart card Storing the certificate, private key, and public key in the slot If required, locking the smart card settings as certain smart cards require this type of finalization Note The pkcs15-init tool may not work with all smart cards. You must use the tools that work with the smart card you are using. Prerequisites The opensc package, which includes the pkcs15-init tool, is installed. For more details, see Installing tools for managing and using smart cards . The card is inserted in the reader and connected to the computer. You have a private key, a public key, and a certificate to store on the smart card. In this procedure, testuser.key , testuserpublic.key , and testuser.crt are the names used for the private key, public key, and the certificate. You have your current smart card user PIN and Security Officer PIN (SO-PIN). Procedure Erase your smart card and authenticate yourself with your PIN: The card has been erased. Initialize your smart card, set your user PIN and PUK, and your Security Officer PIN and PUK: The pkcs15-init tool creates a new slot on the smart card. Set a label and the authentication ID for the slot: The label is set to a human-readable value, in this case, testuser . The auth-id must be two hexadecimal values, in this case it is set to 01 .
Store and label the private key in the new slot on the smart card: Note The value you specify for --id must be the same when storing your private key and storing your certificate in the next step. Specifying your own value for --id is recommended as otherwise a more complicated value is calculated by the tool. Store and label the certificate in the new slot on the smart card: Optional: Store and label the public key in the new slot on the smart card: Note If the public key corresponds to a private key or certificate, specify the same ID as the ID of the private key or certificate. Optional: Certain smart cards require you to finalize the card by locking the settings: At this stage, your smart card includes the certificate, private key, and public key in the newly created slot. You have also created your user PIN and PUK and the Security Officer PIN and PUK. 8.9. Logging in to IdM with smart cards Follow this procedure to use smart cards for logging in to the IdM Web UI. Prerequisites The web browser is configured for using smart card authentication. The IdM server is configured for smart card authentication. The certificate installed on your smart card is either issued by the IdM server or has been added to the user entry in IdM. You know the PIN required to unlock the smart card. The smart card has been inserted into the reader. Procedure Open the IdM Web UI in the browser. Click Log In Using Certificate . If the Password Required dialog box opens, add the PIN to unlock the smart card and click the OK button. The User Identification Request dialog box opens. If the smart card contains more than one certificate, select the certificate you want to use for authentication in the drop down list below Choose a certificate to present as identification . Click the OK button. Now you are successfully logged in to the IdM Web UI. 8.10. Logging in to GDM using smart card authentication on an IdM client The GNOME Display Manager (GDM) requires authentication. You can use your password; however, you can also use a smart card for authentication. Follow this procedure to use smart card authentication to access GDM. Prerequisites The system has been configured for smart card authentication. For details, see Configuring the IdM client for smart card authentication . The smart card contains your certificate and private key. The user account is a member of the IdM domain. The certificate on the smart card maps to the user entry through: Assigning the certificate to a particular user entry. For details, see Adding a certificate to a user entry in the IdM Web UI or Adding a certificate to a user entry in the IdM CLI . The certificate mapping data being applied to the account. For details, see Certificate mapping rules for configuring authentication on smart cards . Procedure Insert the smart card in the reader. Enter the smart card PIN. Click Sign In . You are successfully logged in to the RHEL system and you have a TGT provided by the IdM server. Verification In the Terminal window, enter klist and check the result: 8.11. Using smart card authentication with the su command Changing to a different user requires authentication. You can use a password or a certificate. Follow this procedure to use your smart card with the su command. It means that after entering the su command, you are prompted for the smart card PIN. Prerequisites Your IdM server and client have been configured for smart card authentication.
See Configuring the IdM server for smart card authentication See Configuring the IdM client for smart card authentication The smart card contains your certificate and private key. See Storing a certificate on a smart card The card is inserted in the reader and connected to the computer. Procedure In a terminal window, change to a different user with the su command: If the configuration is correct, you are prompted to enter the smart card PIN. | [
"mkdir ~/SmartCard/",
"cd ~/SmartCard/",
"openssl x509 -in <filename>.der -inform DER -out <filename>.pem -outform PEM",
"cp /tmp/rootca.pem ~/SmartCard/ cp /tmp/subca.pem ~/SmartCard/ cp /tmp/issuingca.pem ~/SmartCard/",
"openssl x509 -noout -text -in rootca.pem | more",
"kinit admin ipa-advise config-server-for-smart-card-auth > config-server-for-smart-card-auth.sh",
"chmod +x config-server-for-smart-card-auth.sh ./config-server-for-smart-card-auth.sh rootca.pem subca.pem issuingca.pem Ticket cache:KEYRING:persistent:0:0 Default principal: [email protected] [...] Systemwide CA database updated. The ipa-certupdate command was successful",
"SSLOCSPEnable off",
"systemctl restart httpd",
"openssl x509 -in <filename>.der -inform DER -out <filename>.pem -outform PEM",
"openssl x509 -noout -text -in root-ca.pem | more",
"cd ~/ MyPlaybooks /",
"mkdir SmartCard/",
"cp /tmp/root-ca.pem ~/MyPlaybooks/SmartCard/ cp /tmp/intermediate-ca.pem ~/MyPlaybooks/SmartCard/ cp /etc/ipa/ca.crt ~/MyPlaybooks/SmartCard/ipa-ca.crt",
"[ipaserver] ipaserver.idm.example.com [ipareplicas] ipareplica1.idm.example.com ipareplica2.idm.example.com [ipacluster:children] ipaserver ipareplicas [ipacluster:vars] ipaadmin_password= \"{{ ipaadmin_password }}\" ipasmartcard_server_ca_certs=/home/<user_name>/MyPlaybooks/SmartCard/root-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/intermediate-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/ipa-ca.crt",
"--- - name: Playbook to set up smart card authentication for an IdM server hosts: ipaserver become: true roles: - role: ipasmartcard_server state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory install-smartcard-server.yml",
"ssh [email protected]",
"SSLOCSPEnable off",
"systemctl restart httpd",
"--- - name: Playbook to setup smartcard for IPA server and replicas hosts: ipacluster [...]",
"kinit admin ipa-advise config-client-for-smart-card-auth > config-client-for-smart-card-auth.sh",
"scp config-client-for-smart-card-auth.sh root @ client.idm.example.com:/root/SmartCard/ Password: config-client-for-smart-card-auth.sh 100% 2419 3.5MB/s 00:00",
"scp {rootca.pem,subca.pem,issuingca.pem} root @ client.idm.example.com:/root/SmartCard/ Password: rootca.pem 100% 1237 9.6KB/s 00:00 subca.pem 100% 2514 19.6KB/s 00:00 issuingca.pem 100% 2514 19.6KB/s 00:00",
"kinit admin chmod +x config-client-for-smart-card-auth.sh ./config-client-for-smart-card-auth.sh rootca.pem subca.pem issuingca.pem Ticket cache:KEYRING:persistent:0:0 Default principal: [email protected] [...] Systemwide CA database updated. The ipa-certupdate command was successful",
"openssl x509 -in <filename>.der -inform DER -out <filename>.pem -outform PEM",
"openssl x509 -noout -text -in root-ca.pem | more",
"cd ~/ MyPlaybooks /",
"mkdir SmartCard/",
"cp /tmp/root-ca.pem ~/MyPlaybooks/SmartCard/ cp /tmp/intermediate-ca.pem ~/MyPlaybooks/SmartCard/ cp /etc/ipa/ca.crt ~/MyPlaybooks/SmartCard/ipa-ca.crt",
"[ipaclients] ipaclient1.example.com ipaclient2.example.com [ipaclients:vars] ipaadmin_password=SomeADMINpassword ipasmartcard_client_ca_certs=/home/<user_name>/MyPlaybooks/SmartCard/root-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/intermediate-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/ipa-ca.crt",
"--- - name: Playbook to set up smart card authentication for an IdM client hosts: ipaclients become: true roles: - role: ipasmartcard_client state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory install-smartcard-clients.yml",
"[user@client SmartCard]USD cat testuser.crt",
"[user@client SmartCard]USD kinit admin",
"[user@client SmartCard]USD kinit sc_user",
"[user@client SmartCard]USD export CERT=`openssl x509 -outform der -in testuser.crt | base64 -w0 -`",
"[user@client SmartCard]USD ipa user-add-cert sc_user --certificate=USDCERT",
"dnf -y install opensc gnutls-utils",
"systemctl start pcscd",
"systemctl status pcscd",
"pkcs15-init --erase-card --use-default-transport-keys Using reader with a card: Reader name PIN [Security Officer PIN] required. Please enter PIN [Security Officer PIN]:",
"pkcs15-init --create-pkcs15 --use-default-transport-keys --pin 963214 --puk 321478 --so-pin 65498714 --so-puk 784123 Using reader with a card: Reader name",
"pkcs15-init --store-pin --label testuser --auth-id 01 --so-pin 65498714 --pin 963214 --puk 321478 Using reader with a card: Reader name",
"pkcs15-init --store-private-key testuser.key --label testuser_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name",
"pkcs15-init --store-certificate testuser.crt --label testuser_crt --auth-id 01 --id 01 --format pem --pin 963214 Using reader with a card: Reader name",
"pkcs15-init --store-public-key testuserpublic.key --label testuserpublic_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name",
"pkcs15-init -F",
"klist Ticket cache: KEYRING:persistent:1358900015:krb_cache_TObtNMd Default principal: [email protected] Valid starting Expires Service principal 04/20/2020 13:58:24 04/20/2020 23:58:24 krbtgt/[email protected] renew until 04/27/2020 08:58:15",
"su - example.user PIN for smart_card"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_certificates_in_idm/configuring-idm-for-smart-card-auth_managing-certificates-in-idm |
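Section 8.6 above shows the individual commands for adding an external certificate to an IdM user entry from the CLI. The following hedged bash sketch simply strings those documented commands together; sc_user and testuser.crt are the same example names used in the chapter, and the final ipa user-show call is an added assumption for verification.
# Sketch combining the documented steps from section 8.6.
kinit admin
CERT="$(openssl x509 -outform der -in testuser.crt | base64 -w0 -)"
ipa user-add-cert sc_user --certificate="$CERT"
ipa user-show sc_user   # assumed verification step: confirm the certificate appears in the entry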
Appendix A. Revision History | Appendix A. Revision History Note that revision numbers relate to the edition of this manual, not to version numbers of Red Hat Enterprise Linux. Revision History Revision 7.0-51 Thu Mar 4 2021 Florian Delehaye 7.9 GA version of the guide. Added a new section about adjusting DNA ID ranges manually. Revision 7.0-50 Wed May 27 2020 Florian Delehaye Several fixes and updates. Revision 7.0-49 Tue Aug 06 2019 Marc Muehlfeld Document version for 7.7 GA publication. Revision 7.0-48 Wed Jun 05 2019 Marc Muehlfeld Updated Configuring Trust Agents , added How the AD Provider Handles Trusted Domains and Changing the Format of User Names Displayed by SSSD . Revision 7.0-47 Tue Apr 08 2019 Marc Muehlfeld Several minor fixes and updates. Revision 7.0-46 Mon Oct 29 2018 Filip Hanzelka Preparing document for 7.6 GA publication. Revision 7.0-45 Mon Jun 25 2018 Filip Hanzelka Added Switching Between SSSD and Winbind for SMB Share Access . Revision 7.0-44 Thu Apr 5 2018 Filip Hanzelka Preparing document for 7.5 GA publication. Revision 7.0-43 Wed Feb 28 2018 Filip Hanzelka Updated GPO Settings Supported by SSSD. Revision 7.0-42 Mon Feb 12 2018 Aneta Steflova Petrova Updated Creating a Two-Way Trust with a Shared Secret . Revision 7.0-41 Mon Jan 29 2018 Aneta Steflova Petrova Minor fixes. Revision 7.0-40 Fri Dec 15 2017 Aneta Steflova Petrova Minor fixes. Revision 7.0-39 Mon Dec 6 2017 Aneta Steflova Petrova Updated Using Samba for Active Directory Integration . Revision 7.0-38 Mon Dec 4 2017 Aneta Steflova Petrova Updated DNS and Realm Settings for trusts. Revision 7.0-37 Mon Nov 20 2017 Aneta Steflova Petrova Updated Creating a Two-Way Trust with a Shared Secret . Revision 7.0-36 Mon Nov 6 2017 Aneta Steflova Petrova Minor fixes. Revision 7.0-35 Mon Oct 23 2017 Aneta Steflova Petrova Updated Active Directory Entries and POSIX Attributes and Configuring an AD Domain with ID Mapping as a Provider for SSSD . Revision 7.0-34 Mon Oct 9 2017 Aneta Steflova Petrova Added Configuration Options for Using Short Names . Updated Trust Controllers and Trust Agents . Revision 7.0-33 Tue Sep 26 2017 Aneta Steflova Petrova Updated the autodiscovery section in the SSSD chapter. Added two sections on configuring trusted domains. Revision 7.0-32 Tue Jul 18 2017 Aneta Steflova Petrova Document version for 7.4 GA publication. Revision 7.0-31 Tue May 23 2017 Aneta Steflova Petrova A minor fix for About Security ID Mapping. Revision 7.0-30 Mon Apr 24 2017 Aneta Steflova Petrova Minor fixes for Defining Windows Integration. Revision 7.0-29 Mon Apr 10 2017 Aneta Steflova Petrova Updated Direct Integration. Revision 7.0-28 Mon Mar 27 2017 Aneta Steflova Petrova Moved Allowing Users to Change Other Users' Passwords Cleanly to the Linux Domain Identity guide as Enabling Password Reset. Updated Supported Windows Platforms for trusts. Fixed broken links. Other minor updates. Revision 7.0-27 Mon Feb 27 2017 Aneta Steflova Petrova Updated port requirements for trusts. Minor restructuring for trust and sync. Other minor updates. Revision 7.0-26 Wed Nov 23 2016 Aneta Steflova Petrova Added ipa-winsync-migrate. Minor fixes for the trust, SSSD, and synchronization chapters. Revision 7.0-25 Tue Oct 18 2016 Aneta Steflova Petrova Version for 7.3 GA publication. Revision 7.0-24 Thu Jul 28 2016 Marc Muehlfeld Updated diagrams, added Kerberos flags for services and hosts, other minor fixes. Revision 7.0-23 Thu Jun 09 2016 Marc Muehlfeld Updated the synchronization chapter. Removed the Kerberos chapter. 
Other minor fixes. Revision 7.0-22 Tue Feb 09 2016 Aneta Petrova Updated realmd, removed index, moved a part of ID views to the Linux Domain Identity guide, other minor updates. Revision 7.0-21 Fri Nov 13 2015 Aneta Petrova Version for 7.2 GA release with minor updates. Revision 7.0-20 Thu Nov 12 2015 Aneta Petrova Version for 7.2 GA release. Revision 7.0-19 Fri Sep 18 2015 Tomas Capek Updated the splash page sort order. Revision 7.0-18 Thu Sep 10 2015 Aneta Petrova Updated the output format. Revision 7.0-17 Mon Jul 27 2015 Aneta Petrova Added GPO-based access control, a number of other minor changes. Revision 7.0-16 Thu Apr 02 2015 Tomas Capek Added ipa-advise, extended CIFS share with SSSD, admonition for the Identity Management for UNIX extension. Revision 7.0-15 Fri Mar 13 2015 Tomas Capek Async update with last-minute edits for 7.1. Revision 7.0-13 Wed Feb 25 2015 Tomas Capek Version for 7.1 GA release. Revision 7.0-11 Fri Dec 05 2014 Tomas Capek Rebuild to update the sort order on the splash page. Revision 7.0-7 Mon Sep 15 2014 Tomas Capek Section 5.3 Creating Trusts temporarily removed for content updates. Revision 7.0-5 June 27, 2014 Ella Deon Ballard Improving Samba+Kerberos+Winbind chapters. Revision 7.0-4 June 13, 2014 Ella Deon Ballard Adding Kerberos realm chapter. Revision 7.0-3 June 11, 2014 Ella Deon Ballard Initial release. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/doc-history |
Chapter 1. Extension APIs | Chapter 1. Extension APIs 1.1. APIService [apiregistration.k8s.io/v1] Description APIService represents a server for a particular GroupVersion. Name must be "version.group". Type object 1.2. CustomResourceDefinition [apiextensions.k8s.io/v1] Description CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format <.spec.name>.<.spec.group>. Type object 1.3. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description MutatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and may change the object. Type object 1.4. ValidatingAdmissionPolicy [admissionregistration.k8s.io/v1] Description ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it. Type object 1.5. ValidatingAdmissionPolicyBinding [admissionregistration.k8s.io/v1] Description ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with paramerized resources. ValidatingAdmissionPolicyBinding and parameter CRDs together define how cluster administrators configure policies for clusters. For a given admission request, each binding will cause its policy to be evaluated N times, where N is 1 for policies/bindings that don't use params, otherwise N is the number of parameters selected by the binding. The CEL expressions of a policy must have a computed CEL cost below the maximum CEL budget. Each evaluation of the policy is given an independent CEL cost budget. Adding/removing policies, bindings, or params can not affect whether a given (policy, binding, param) combination is within its own CEL budget. Type object 1.6. ValidatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description ValidatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and object without changing it. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/extension_apis/extension-apis |
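The reference entries above can be inspected on a running cluster with the OpenShift CLI. The following commands are an illustrative sketch rather than part of the reference; the APIService name in the last line is an example and may differ on your cluster.
# Illustrative oc commands for the extension API resources described above.
oc get apiservices                          # APIService objects (one per GroupVersion server)
oc get customresourcedefinitions            # CustomResourceDefinition objects
oc get validatingwebhookconfigurations      # admission webhooks that validate objects
oc describe apiservice v1.packages.operators.coreos.com   # example name, verify on your cluster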
Appendix A. Using your subscription | Appendix A. Using your subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ Streams entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. Revised on 2021-04-19 16:15:39 UTC | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/amq_streams_on_openshift_overview/using_your_subscription |
Chapter 94. XML Tokenize | Chapter 94. XML Tokenize The XML Tokenize language is a built-in language in camel-xml-jaxp , which is a truly XML-aware tokenizer that can be used with the Split EIP as the conventional Tokenize to efficiently and effectively tokenize XML documents.. XML Tokenize is capable of not only recognizing XML namespaces and hierarchical structures of the document but also more efficiently tokenizing XML documents than the conventional Tokenize language. Additional dependency In order to use this component, an additional dependency is required as follows: <dependency> <groupId>org.codehaus.woodstox</groupId> <artifactId>woodstox-core-asl</artifactId> <version>4.4.1</version> </dependency> or <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-stax-starter</artifactId> </dependency> 94.1. XML Tokenizer Options The XML Tokenize language supports 4 options, which are listed below. Name Default Java Type Description headerName String Name of header to tokenize instead of using the message body. mode Enum The extraction mode. The available extraction modes are: i - injecting the contextual namespace bindings into the extracted token (default) w - wrapping the extracted token in its ancestor context u - unwrapping the extracted token to its child content t - extracting the text content of the specified element. Enum values: i w u t group Integer To group N parts together. trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 94.2. Example See Split EIP which has examples using the XML Tokenize language. 94.3. Spring Boot Auto-Configuration When using xtokenize with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xml-jaxp-starter</artifactId> </dependency> The component supports 3 options, which are listed below. Name Description Default Type camel.language.xtokenize.enabled Whether to enable auto configuration of the xtokenize language. This is enabled by default. Boolean camel.language.xtokenize.mode The extraction mode. The available extraction modes are: i - injecting the contextual namespace bindings into the extracted token (default) w - wrapping the extracted token in its ancestor context u - unwrapping the extracted token to its child content t - extracting the text content of the specified element. String camel.language.xtokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean | [
"<dependency> <groupId>org.codehaus.woodstox</groupId> <artifactId>woodstox-core-asl</artifactId> <version>4.4.1</version> </dependency>",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-stax-starter</artifactId> </dependency>",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xml-jaxp-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-xml-tokenize-language-starter |
Chapter 9. PackageManifest [packages.operators.coreos.com/v1] | Chapter 9. PackageManifest [packages.operators.coreos.com/v1] Description PackageManifest holds information about a package, which is a reference to one (or more) channels under a single package. Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object PackageManifestSpec defines the desired state of PackageManifest status object PackageManifestStatus represents the current status of the PackageManifest 9.1.1. .spec Description PackageManifestSpec defines the desired state of PackageManifest Type object 9.1.2. .status Description PackageManifestStatus represents the current status of the PackageManifest Type object Required catalogSource catalogSourceDisplayName catalogSourcePublisher catalogSourceNamespace packageName channels defaultChannel Property Type Description catalogSource string CatalogSource is the name of the CatalogSource this package belongs to catalogSourceDisplayName string catalogSourceNamespace string CatalogSourceNamespace is the namespace of the owning CatalogSource catalogSourcePublisher string channels array Channels are the declared channels for the package, ala stable or alpha . channels[] object PackageChannel defines a single channel under a package, pointing to a version of that package. defaultChannel string DefaultChannel is, if specified, the name of the default channel for the package. The default channel will be installed if no other channel is explicitly given. If the package has a single channel, then that channel is implicitly the default. deprecation object Deprecation conveys information regarding a deprecated resource. packageName string PackageName is the name of the overall package, ala etcd . provider object AppLink defines a link to an application 9.1.3. .status.channels Description Channels are the declared channels for the package, ala stable or alpha . Type array 9.1.4. .status.channels[] Description PackageChannel defines a single channel under a package, pointing to a version of that package. Type object Required name currentCSV entries Property Type Description currentCSV string CurrentCSV defines a reference to the CSV holding the version of this package currently for the channel. currentCSVDesc object CSVDescription defines a description of a CSV deprecation object Deprecation conveys information regarding a deprecated resource. entries array Entries lists all CSVs in the channel, with their upgrade edges. entries[] object ChannelEntry defines a member of a package channel. name string Name is the name of the channel, e.g. alpha or stable 9.1.5. 
.status.channels[].currentCSVDesc Description CSVDescription defines a description of a CSV Type object Property Type Description annotations object (string) apiservicedefinitions APIServiceDefinitions customresourcedefinitions CustomResourceDefinitions description string LongDescription is the CSV's description displayName string DisplayName is the CSV's display name icon array Icon is the CSV's base64 encoded icon icon[] object Icon defines a base64 encoded icon and media type installModes array (InstallMode) InstallModes specify supported installation types keywords array (string) links array links[] object AppLink defines a link to an application maintainers array maintainers[] object Maintainer defines a project maintainer maturity string minKubeVersion string Minimum Kubernetes version for operator installation nativeApis array (GroupVersionKind) provider object AppLink defines a link to an application relatedImages array (string) List of related images version OperatorVersion Version is the CSV's semantic version 9.1.6. .status.channels[].currentCSVDesc.icon Description Icon is the CSV's base64 encoded icon Type array 9.1.7. .status.channels[].currentCSVDesc.icon[] Description Icon defines a base64 encoded icon and media type Type object Property Type Description base64data string mediatype string 9.1.8. .status.channels[].currentCSVDesc.links Description Type array 9.1.9. .status.channels[].currentCSVDesc.links[] Description AppLink defines a link to an application Type object Property Type Description name string url string 9.1.10. .status.channels[].currentCSVDesc.maintainers Description Type array 9.1.11. .status.channels[].currentCSVDesc.maintainers[] Description Maintainer defines a project maintainer Type object Property Type Description email string name string 9.1.12. .status.channels[].currentCSVDesc.provider Description AppLink defines a link to an application Type object Property Type Description name string url string 9.1.13. .status.channels[].deprecation Description Deprecation conveys information regarding a deprecated resource. Type object Required message Property Type Description message string Message is a human readable message describing the deprecation. 9.1.14. .status.channels[].entries Description Entries lists all CSVs in the channel, with their upgrade edges. Type array 9.1.15. .status.channels[].entries[] Description ChannelEntry defines a member of a package channel. Type object Required name Property Type Description deprecation object Deprecation conveys information regarding a deprecated resource. name string Name is the name of the bundle for this entry. version string Version is the version of the bundle for this entry. 9.1.16. .status.channels[].entries[].deprecation Description Deprecation conveys information regarding a deprecated resource. Type object Required message Property Type Description message string Message is a human readable message describing the deprecation. 9.1.17. .status.deprecation Description Deprecation conveys information regarding a deprecated resource. Type object Required message Property Type Description message string Message is a human readable message describing the deprecation. 9.1.18. .status.provider Description AppLink defines a link to an application Type object Property Type Description name string url string 9.2. 
API endpoints The following API endpoints are available: /apis/packages.operators.coreos.com/v1/packagemanifests GET : list objects of kind PackageManifest /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests GET : list objects of kind PackageManifest /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests/{name} GET : read the specified PackageManifest /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests/{name}/icon GET : connect GET requests to icon of PackageManifest 9.2.1. /apis/packages.operators.coreos.com/v1/packagemanifests HTTP method GET Description list objects of kind PackageManifest Table 9.1. HTTP responses HTTP code Reponse body 200 - OK PackageManifestList schema 9.2.2. /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests HTTP method GET Description list objects of kind PackageManifest Table 9.2. HTTP responses HTTP code Reponse body 200 - OK PackageManifestList schema 9.2.3. /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests/{name} Table 9.3. Global path parameters Parameter Type Description name string name of the PackageManifest HTTP method GET Description read the specified PackageManifest Table 9.4. HTTP responses HTTP code Reponse body 200 - OK PackageManifest schema 9.2.4. /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests/{name}/icon Table 9.5. Global path parameters Parameter Type Description name string name of the PackageManifest HTTP method GET Description connect GET requests to icon of PackageManifest Table 9.6. HTTP responses HTTP code Reponse body 200 - OK string | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operatorhub_apis/packagemanifest-packages-operators-coreos-com-v1 |
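As a quick illustration of how these endpoints are typically consumed, the following sketch assumes a logged-in oc client; the openshift-marketplace namespace and the package name etcd are examples only:
# List the packages advertised by the catalogs visible to the namespace.
oc get packagemanifests -n openshift-marketplace
# Read a single field defined by the schema above (.status.defaultChannel).
oc get packagemanifest etcd -n openshift-marketplace -o jsonpath='{.status.defaultChannel}{"\n"}'
# Call the documented GET endpoint directly through the API server.
oc get --raw /apis/packages.operators.coreos.com/v1/namespaces/openshift-marketplace/packagemanifests/etcd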
Chapter 3. Automation controller | Chapter 3. Automation controller Automation controller helps teams manage complex multitiered deployments by adding control, knowledge, and delegation to Ansible-powered environments. See Automation Controller Release Notes for 4.x for a full list of new features and enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_release_notes/controller-440-intro |
Chapter 2. Understanding build configurations | Chapter 2. Understanding build configurations The following sections define the concept of a build, build configuration, and outline the primary build strategies available. 2.1. BuildConfigs A build configuration describes a single build definition and a set of triggers for when a new build is created. Build configurations are defined by a BuildConfig , which is a REST object that can be used in a POST to the API server to create a new instance. A build configuration, or BuildConfig , is characterized by a build strategy and one or more sources. The strategy determines the process, while the sources provide its input. Depending on how you choose to create your application using OpenShift Container Platform, a BuildConfig is typically generated automatically for you if you use the web console or CLI, and it can be edited at any time. Understanding the parts that make up a BuildConfig and their available options can help if you choose to manually change your configuration later. The following example BuildConfig results in a new build every time a container image tag or the source code changes: BuildConfig object definition kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: "ruby-sample-build" 1 spec: runPolicy: "Serial" 2 triggers: 3 - type: "GitHub" github: secret: "secret101" - type: "Generic" generic: secret: "secret101" - type: "ImageChange" source: 4 git: uri: "https://github.com/openshift/ruby-hello-world" strategy: 5 sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" output: 6 to: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" postCommit: 7 script: "bundle exec rake test" 1 This specification creates a new BuildConfig named ruby-sample-build . 2 The runPolicy field controls whether builds created from this build configuration can be run simultaneously. The default value is Serial , which means new builds run sequentially, not simultaneously. 3 You can specify a list of triggers, which cause a new build to be created. 4 The source section defines the source of the build. The source type determines the primary source of input, and can be either Git , to point to a code repository location, Dockerfile , to build from an inline Dockerfile, or Binary , to accept binary payloads. It is possible to have multiple sources at once. See the documentation for each source type for details. 5 The strategy section describes the build strategy used to execute the build. You can specify a Source , Docker , or Custom strategy here. This example uses the ruby-20-centos7 container image that Source-to-image (S2I) uses for the application build. 6 After the container image is successfully built, it is pushed into the repository described in the output section. 7 The postCommit section defines an optional build hook. | [
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/builds_using_buildconfig/understanding-buildconfigs |
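As a usage sketch (the manifest file name is hypothetical; the BuildConfig name matches the example above), the object can be created and exercised with the oc CLI:
# Create the BuildConfig from a saved manifest, then start a build and stream its logs.
oc apply -f ruby-sample-build.yaml
oc start-build ruby-sample-build --follow
# Review the configured triggers, strategy, and recent builds.
oc describe buildconfig ruby-sample-build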
Chapter 28. Use Red Hat JBoss Data Grid with Google Compute Engine | Chapter 28. Use Red Hat JBoss Data Grid with Google Compute Engine 28.1. The GOOGLE_PING Protocol GOOGLE_PING is a discovery protocol used by JGroups during cluster formation. It is ideal to use with Google Compute Engine (GCE) and uses Google Cloud Storage to store information about individual cluster members. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-use_red_hat_jboss_data_grid_with_google_compute_engine
Power Monitoring | Power Monitoring OpenShift Container Platform 4.18 Configuring and using power monitoring for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/power_monitoring/index |
Chapter 3. Setting up the environment for an OpenShift installation | Chapter 3. Setting up the environment for an OpenShift installation 3.1. Installing RHEL on the provisioner node With the configuration of the prerequisites complete, the next step is to install RHEL 9.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media. 3.2. Preparing the provisioner node for OpenShift Container Platform installation Perform the following steps to prepare the environment. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Add the newly created user to the libvirt group: USD sudo usermod --append --groups libvirt <user> Start firewalld and enable the http service: USD sudo systemctl start firewalld USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Create the default storage pool and start it: USD sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images USD sudo virsh pool-start default USD sudo virsh pool-autostart default Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure . Click Copy pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 3.3. Checking NTP server synchronization The OpenShift Container Platform installation program installs the chrony Network Time Protocol (NTP) service on the cluster nodes. To complete installation, each node must have access to an NTP time server. You can verify NTP server synchronization by using the chrony service. For disconnected clusters, you must configure the NTP servers on the control plane nodes. For more information, see the Additional resources section. Prerequisites You installed the chrony package on the target node. Procedure Log in to the node by using the ssh command.
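For example, assuming the default core user on a Red Hat Enterprise Linux CoreOS node (substitute a host name or IP address from your environment):
ssh core@<node_name>.<cluster_name>.<domain>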
View the NTP servers available to the node by running the following command: USD chronyc sources Example output MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms Use the ping command to ensure that the node can access an NTP server, for example: USD ping time.cloudflare.com Example output PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms ... Additional resources Optional: Configuring NTP for disconnected clusters Network Time Protocol (NTP) 3.4. Configuring networking Before installation, you must configure the networking on the provisioner node. Installer-provisioned clusters deploy with a bare-metal bridge and network, and an optional provisioning bridge and network. Note You can also configure networking from the web console. Procedure Export the bare-metal network NIC name by running the following command: USD export PUB_CONN=<baremetal_nic_name> Configure the bare-metal network: Note The SSH connection might disconnect after executing these steps. For a network using DHCP, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal pkill dhclient;dhclient baremetal " 1 Replace <con_name> with the connection name. For a network using static IP addressing and no DHCP network, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr "x.x.x.x/yy" ipv4.gateway "a.a.a.a" ipv4.dns "b.b.b.b" 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal nmcli con up baremetal " 1 Replace <con_name> with the connection name. Replace x.x.x.x/yy with the IP address and CIDR for the network. Replace a.a.a.a with the network gateway. Replace b.b.b.b with the IP address of the DNS server. 
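Example, with illustrative values only (the baremetal bridge at 10.0.0.10/24, the gateway and DNS server at 10.0.0.1, and the external NIC name already exported in PUB_CONN); every address is an assumption to adapt, and the SSH session may drop while the uplink moves onto the bridge:
# Static variant of the step above with concrete sample values, trimmed to the essential commands.
sudo nohup bash -c "
nmcli con down \"$PUB_CONN\"
nmcli con delete \"$PUB_CONN\"
nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addresses 10.0.0.10/24 ipv4.gateway 10.0.0.1 ipv4.dns 10.0.0.1
nmcli con add type bridge-slave ifname \"$PUB_CONN\" master baremetal
nmcli con up baremetal
"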
Optional: If you are deploying with a provisioning network, export the provisioning network NIC name by running the following command: USD export PROV_CONN=<prov_nic_name> Optional: If you are deploying with a provisioning network, configure the provisioning network by running the following command: USD sudo nohup bash -c " nmcli con down \"USDPROV_CONN\" nmcli con delete \"USDPROV_CONN\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \"USDPROV_CONN\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning " Note The SSH connection might disconnect after executing these steps. The IPv6 address can be any address that is not routable through the bare-metal network. Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing. Optional: If you are deploying with a provisioning network, configure the IPv4 address on the provisioning network connection by running the following command: USD nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual SSH back into the provisioner node (if required) by running the following command: # ssh kni@provisioner.<cluster-name>.<domain> Verify that the connection bridges have been properly created by running the following command: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2 3.5. Establishing communication between subnets In a typical OpenShift Container Platform cluster setup, all nodes, including the control plane and worker nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate worker nodes closer to the edge. This often involves using different network segments or subnets for the remote worker nodes than the subnet used by the control plane and local worker nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability. However, the network must be configured properly before installing OpenShift Container Platform to ensure that the edge subnets containing the remote worker nodes can reach the subnet containing the control plane nodes and receive traffic from the control plane too. Important All control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. Deploying a cluster with multiple subnets requires using virtual media. This procedure details the network configuration required to allow the remote worker nodes in the second subnet to communicate effectively with the control plane nodes in the first subnet and to allow the control plane nodes in the first subnet to communicate effectively with the remote worker nodes in the second subnet. In this procedure, the cluster spans two subnets: The first subnet ( 10.0.0.0 ) contains the control plane and local worker nodes. The second subnet ( 192.168.0.0 ) contains the edge worker nodes. 
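After completing the procedure below, you can spot-check both directions from a host that can reach both networks with a small loop; the two node addresses here are placeholders:
# Ping one control plane node and one edge worker; both must answer for the
# cross-subnet routes to be considered working.
for ip in 10.0.0.11 192.168.0.21; do
  if ping -c 2 -W 2 "$ip" > /dev/null; then
    echo "reachable: $ip"
  else
    echo "NOT reachable: $ip"
  fi
done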
Procedure Configure the first subnet to communicate with the second subnet: Log in as root to a control plane node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the second subnet ( 192.168.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "192.168.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "192.168.0.0/24 via 192.168.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully: # ip route Repeat the steps for each control plane node in the first subnet. Note Adjust the commands to match your actual interface names and gateway. Configure the second subnet to communicate with the first subnet: Log in as root to a remote worker node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the first subnet ( 10.0.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "10.0.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "10.0.0.0/24 via 10.0.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully by running the following command: # ip route Repeat the steps for each worker node in the second subnet. Note Adjust the commands to match your actual interface names and gateway. Once you have configured the networks, test the connectivity to ensure the remote worker nodes can reach the control plane nodes and the control plane nodes can reach the remote worker nodes. From the control plane nodes in the first subnet, ping a remote worker node in the second subnet by running the following command: USD ping <remote_worker_node_ip_address> If the ping is successful, it means the control plane nodes in the first subnet can reach the remote worker nodes in the second subnet. If you don't receive a response, review the network configurations and repeat the procedure for the node. From the remote worker nodes in the second subnet, ping a control plane node in the first subnet by running the following command: USD ping <control_plane_node_ip_address> If the ping is successful, it means the remote worker nodes in the second subnet can reach the control plane in the first subnet. If you don't receive a response, review the network configurations and repeat the procedure for the node. 3.6. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.14 USD export RELEASE_ARCH=<architecture> USD export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 3.7. 
Extracting the OpenShift Container Platform installer After retrieving the installer, the next step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 3.8. Optional: Creating an RHCOS images cache To employ image caching, you must download the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM to provision the cluster nodes. Image caching is optional, but it is especially useful when running the installation program on a network with limited bandwidth. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. If you are running the installation program on a network with limited bandwidth and the RHCOS images download takes more than 15 to 20 minutes, the installation program will time out. Caching images on a web server will help in such scenarios. Warning If you enable TLS for the HTTPD server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and spoke clusters and the HTTPD server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. Install and run a container that serves the images. Procedure Install podman : USD sudo dnf install -y podman Open firewall port 8080 to be used for RHCOS image caching: USD sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent USD sudo firewall-cmd --reload Create a directory to store the bootstrapOSImage : USD mkdir /home/kni/rhcos_image_cache Set the appropriate SELinux context for the newly created directory: USD sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?"
USD sudo restorecon -Rv /home/kni/rhcos_image_cache/ Get the URI for the RHCOS image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk.location') Get the name of the image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/} Get the SHA hash for the RHCOS image that will be deployed on the bootstrap VM: USD export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk["uncompressed-sha256"]') Download the image and place it in the /home/kni/rhcos_image_cache directory: USD curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME} Confirm SELinux type is of httpd_sys_content_t for the new file: USD ls -Z /home/kni/rhcos_image_cache Create the pod: USD podman run -d --name rhcos_image_cache \ 1 -v /home/kni/rhcos_image_cache:/var/www/html \ -p 8080:8080/tcp \ registry.access.redhat.com/ubi9/httpd-24 1 Creates a caching webserver with the name rhcos_image_cache . This pod serves the bootstrapOSImage image in the install-config.yaml file for deployment. Generate the bootstrapOSImage configuration: USD export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d"/" -f1) USD export BOOTSTRAP_OS_IMAGE="http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}" USD echo " bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}" Add the required configuration to the install-config.yaml file under platform.baremetal : platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1 1 Replace <bootstrap_os_image> with the value of USDBOOTSTRAP_OS_IMAGE . See the "Configuring the install-config.yaml file" section for additional details. 3.9. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, NetworkManager sets the hostnames. By default, DHCP provides the hostnames to NetworkManager , which is the recommended method. NetworkManager gets the hostnames through a reverse DNS lookup in the following cases: If DHCP does not provide the hostnames If you use kernel arguments to set the hostnames If you use another method to set the hostnames Reverse DNS lookup occurs after the network has been initialized on a node, and can increase the time it takes NetworkManager to set the hostname. Other system services can start prior to NetworkManager setting the hostname, which can cause those services to use a default hostname such as localhost . Tip You can avoid the delay in setting hostnames by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.10. Configuring the install-config.yaml file 3.10.1. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. Most of the information teaches the installation program and the resulting cluster enough about the available hardware that it is able to fully manage it. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. Configure install-config.yaml . 
Change the appropriate variables to match the environment, including pullSecret and sshKey : apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 Scale the worker machines based on the number of worker nodes that are part of the OpenShift Container Platform cluster. Valid options for the replicas value are 0 and integers greater than or equal to 2 . Set the number of replicas to 0 to deploy a three-node cluster, which contains only three control plane machines. A three-node cluster is a smaller, more resource-efficient cluster that can be used for testing, development, and production. You cannot install the cluster with only one worker. 2 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticIP configuration setting to specify the static IP address of the bootstrap VM when there is no DHCP server on the bare-metal network. 3 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticGateway configuration setting to specify the gateway IP address for the bootstrap VM when there is no DHCP server on the bare-metal network. 4 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticDNS configuration setting to specify the DNS address for the bootstrap VM when there is no DHCP server on the bare-metal network. 5 See the BMC addressing sections for more options. 6 To set the path to the installation disk drive, enter the kernel name of the disk. For example, /dev/sda . Important Because the disk discovery order is not guaranteed, the kernel name of the disk can change across booting options for machines with multiple disks. For example, /dev/sda becomes /dev/sdb and vice versa. To avoid this issue, you must use persistent disk attributes, such as the disk World Wide Name (WWN) or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. To use the disk WWN, replace the deviceName parameter with the wwnWithExtension parameter. 
Depending on the parameter that you use, enter either of the following values: The disk name. For example, /dev/sda , or /dev/disk/by-path/ . The disk WWN. For example, "0x64cd98f04fde100024684cf3034da5c2" . Ensure that you enter the disk WWN value within quotes so that it is used as a string value and not a hexadecimal value. Failure to meet these requirements for the rootDeviceHints parameter might result in the following error: ironic-inspector inspection failed: No disks satisfied root device hints Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP and ingressVIP configuration settings. In OpenShift Container Platform 4.12 and later, these configuration settings are deprecated. Instead, use a list format in the apiVIPs and ingressVIPs configuration settings to specify IPv4 addresses, IPv6 addresses, or both IP address formats. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file to the new directory: USD cp install-config.yaml ~/clusterconfigs Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 3.10.2. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 3.1. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. bootstrapExternalStaticDNS The static network DNS of the bootstrap node. You must set this value when deploying a cluster with static IP addresses when there is no Dynamic Host Configuration Protocol (DHCP) server on the bare-metal network. If you do not set this value, the installation program will use the value from bootstrapExternalStaticGateway , which causes problems when the IP address values of the gateway and DNS are different. bootstrapExternalStaticIP The static IP address for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. bootstrapExternalStaticGateway The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. 
For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane (master) nodes. Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIPs (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP configuration setting. From OpenShift Container Platform 4.12 or later, the apiVIP configuration setting is deprecated. Instead, use a list format for the apiVIPs configuration setting to specify an IPv4 address, an IPv6 address or both IP address formats. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIPs (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the ingressVIP configuration setting. In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a list format for the ingressVIPs configuration setting to specify an IPv4 addresses, an IPv6 addresses or both IP address formats. Table 3.2. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. 
clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. architecture Defines the host architecture for your cluster. Valid values are amd64 or arm64 . defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled , you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 3.3. Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master or worker . bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. networkConfig Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. 3.10.3. 
BMC addressing Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI. IPMI Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password> Important The provisioning network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a provisioning network. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. Redfish network boot To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Redfish APIs Several redfish API endpoints are called on your BMC when using the bare-metal installer-provisioned infrastructure. Important You need to ensure that your BMC supports all of the redfish APIs before installation.
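One way to confirm this ahead of time is to query the Redfish service root and the Systems collection with the same credentials the installer will use, before relying on the individual calls listed below. In this sketch the address and credentials are placeholders, and the -k flag is an assumption for a BMC with a self-signed certificate:
# A valid JSON response from these standard Redfish paths indicates the BMC answers Redfish requests.
USER=<user>
PASS=<password>
SERVER=<out-of-band-ip>
curl -k -u "$USER:$PASS" "https://$SERVER/redfish/v1/"
curl -k -u "$USER:$PASS" "https://$SERVER/redfish/v1/Systems/"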
List of redfish APIs Power on curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "On"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Power off curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "ForceOff"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Temporary boot using pxe curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "pxe", "BootSourceOverrideEnabled": "Once"}} Set BIOS boot mode using Legacy or UEFI curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideMode":"UEFI"}} List of redfish-virtualmedia APIs Set temporary boot device using cd or dvd curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "cd", "BootSourceOverrideEnabled": "Once"}}' Mount virtual media curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: *" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}' Note The PowerOn and PowerOff commands for redfish APIs are the same for the redfish-virtualmedia APIs. Important HTTPS and HTTP are the only supported parameter types for TransferProtocolTypes . 3.10.4. BMC addressing for Dell iDRAC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI. BMC address formats for Dell iDRAC Protocol Address Format iDRAC virtual media idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 IPMI ipmi://<out-of-band-ip> Important Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell's idrac-virtualmedia uses the Redfish standard with Dell's OEM extensions. See the following sections for additional details. Redfish virtual media for Dell iDRAC For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work. Note Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell's idrac-virtualmedia:// protocol uses the Redfish standard with Dell's OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware. 
The following example demonstrates using iDRAC virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. Note Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Redfish network boot for iDRAC To enable Redfish, use redfish:// or redfish+http:// to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Note There is a known issue on Dell iDRAC 9 with firmware version 04.40.00.00 and all releases up to including the 5.xx series for installer-provisioned installations on bare metal deployments. The virtual console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is Configuration Virtual console Plug-in Type HTML5 . Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . 3.10.5. BMC addressing for HPE iLO The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI. Table 3.4. 
BMC address formats for HPE iLO Protocol Address Format Redfish virtual media redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/1 IPMI ipmi://<out-of-band-ip> See the following sections for additional details. Redfish virtual media for HPE iLO To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Note Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media. Redfish network boot for HPE iLO To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True 3.10.6. BMC addressing for Fujitsu iRMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI. Table 3.5. BMC address formats for Fujitsu iRMC Protocol Address Format iRMC irmc://<out-of-band-ip> IPMI ipmi://<out-of-band-ip> iRMC Fujitsu nodes can use irmc://<out-of-band-ip> and defaults to port 443 . The following example demonstrates an iRMC configuration within the install-config.yaml file. 
platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password> Note Currently Fujitsu supports iRMC S5 firmware version 3.05P and above for installer-provisioned installation on bare metal. 3.10.7. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 3.6. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 3.10.8. Optional: Setting proxy settings To deploy an OpenShift Container Platform cluster using a proxy, make the following changes to the install-config.yaml file. apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR> The following is an example of noProxy with values. noProxy: .example.com,172.22.0.0/24,10.10.0.0/24 With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair. Key considerations: If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http:// . If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail. Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . Note When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately. 3.10.9. Optional: Deploying with no provisioning network To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file. 
platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: "Disabled" 1 1 Add the provisioningNetwork configuration setting, if needed, and set it to Disabled . Important The provisioning network is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. 3.10.10. Optional: Deploying with dual-stack networking For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the machineNetwork , clusterNetwork , and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries each. For a cluster with the IPv4 family as the primary address family, specify the IPv4 setting first. For a cluster with the IPv6 family as the primary address family, specify the IPv6 setting first. machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112 Important On a bare-metal platform, if you specified an NMState configuration in the networkConfig section of your install-config.yaml file, add interfaces.wait-ip: ipv4+ipv6 to the NMState YAML file to resolve an issue that prevents your cluster from deploying on a dual-stack network. Example NMState YAML configuration file that includes the wait-ip parameter networkConfig: nmstate: interfaces: - name: <interface_name> # ... wait-ip: ipv4+ipv6 # ... To provide an interface to the cluster for applications that use IPv4 and IPv6 addresses, configure IPv4 and IPv6 virtual IP (VIP) address endpoints for the Ingress VIP and API VIP services. To configure IPv4 and IPv6 address endpoints, edit the apiVIPs and ingressVIPs configuration settings in the install-config.yaml file . The apiVIPs and ingressVIPs configuration settings use a list format. The order of the list indicates the primary and secondary VIP address for each service. platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6> Note For a cluster with dual-stack networking configuration, you must assign both IPv4 and IPv6 addresses to the same interface. 3.10.11. Optional: Configuring host network interfaces Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces using NMState. The most common use case for this functionality is to specify a static IP address on the bare-metal network, but you can also configure other networks such as a storage network. This functionality supports other NMState features such as VLAN, VXLAN, bridges, bonds, routes, MTU, and DNS resolver settings. Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState syntax with nmstatectl gc before including it in the install-config.yaml file, because the installer will not check the NMState YAML syntax. Note Errors in the YAML syntax might result in a failure to apply the network configuration. 
Additionally, maintaining the validated YAML syntax is useful when applying changes using Kubernetes NMState after deployment or when expanding the cluster. Create an NMState YAML file: interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5 1 2 3 4 5 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> Replace <nmstate_yaml_file> with the configuration file name. Use the networkConfig configuration setting by adding the NMState configuration to hosts within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6 1 Add the NMState YAML syntax to configure the host interfaces. 2 3 4 5 6 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Important After deploying the cluster, you cannot modify the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. 3.10.12. Configuring host network interfaces for subnets For edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. To locate remote nodes in subnets, you might use different network segments or subnets for the remote nodes than you used for the control plane subnet and local compute nodes. You can reduce latency for the edge and allow for enhanced scalability by setting up subnets for edge computing scenarios. Important When using the default load balancer, OpenShiftManagedDefault , and adding remote nodes to your OpenShift Container Platform cluster, all control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. If you have established different network segments or subnets for remote nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the machineNetwork configuration setting if the workers are using static IP addresses, bonds or other advanced networking. When setting the node IP address in the networkConfig parameter for each remote node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures that the remote nodes can reach the subnet containing the control plane and that they can receive network traffic from the control plane.
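As an optional check before you edit the install-config.yaml file, you can confirm from a host on the remote worker subnet that traffic toward the control plane subnet uses the gateway you configured when establishing communication between subnets. The following commands are a sketch only, not part of the documented procedure; replace <control_plane_node_ip_address> with the address of one of your control plane nodes.
USD ip route get <control_plane_node_ip_address>
USD ping -c 3 <control_plane_node_ip_address>
If the reported route does not use the expected gateway, or the ping fails, revisit the section on "Establishing communication between subnets" before continuing.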
Note Deploying a cluster with multiple subnets requires using virtual media, such as redfish-virtualmedia or idrac-virtualmedia , because remote nodes cannot access the local provisioning network. Procedure Add the subnets to the machineNetwork in the install-config.yaml file when using static IP addresses: networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes Add the gateway and DNS configuration to the networkConfig parameter of each edge compute node using NMState syntax when using a static IP address or advanced networking such as bonds: networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4 1 Replace <interface_name> with the interface name. 2 Replace <node_ip> with the IP address of the node. 3 Replace <gateway_ip> with the IP address of the gateway. 4 Replace <dns_ip> with the IP address of the DNS server. 3.10.13. Optional: Configuring address generation modes for SLAAC in dual-stack networks For dual-stack clusters that use Stateless Address AutoConfiguration (SLAAC), you must specify a global value for the ipv6.addr-gen-mode network setting. You can set this value using NMState to configure the ramdisk and the cluster configuration files. If you don't configure a consistent ipv6.addr-gen-mode in these locations, IPv6 address mismatches can occur between CSR resources and BareMetalHost resources in the cluster. Prerequisites Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState YAML syntax with the nmstatectl gc command before including it in the install-config.yaml file because the installation program will not check the NMState YAML syntax. Create an NMState YAML file: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> 1 1 Replace <nmstate_yaml_file> with the name of the test configuration file. Add the NMState configuration to the hosts.networkConfig section within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 ... 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . 3.10.14. Optional: Configuring host network interfaces for dual port NIC Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces by using NMState to support dual port NIC. Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Virtualization only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Note Errors in the YAML syntax might result in a failure to apply the network configuration. Additionally, maintaining the validated YAML syntax is useful when applying changes by using Kubernetes NMState after deployment or when expanding the cluster. Procedure Add the NMState configuration to the networkConfig field of hosts within the install-config.yaml file: hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field has information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The bond uses the primary device as the first device of the bonding interfaces. The bond does not abandon the primary device interface unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load.
This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Important After deploying the cluster, you cannot change the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. Additional resources Configuring network bonding 3.10.15. Configuring multiple cluster nodes You can simultaneously configure OpenShift Container Platform cluster nodes with identical settings. Configuring multiple cluster nodes avoids adding redundant information for each node to the install-config.yaml file. This file contains specific parameters to apply an identical configuration to multiple nodes in the cluster. Compute nodes are configured separately from the controller node. However, configurations for both node types use the highlighted parameters in the install-config.yaml file to enable multi-node configuration. Set the networkConfig parameters to BOND , as shown in the following example: hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND Note Configuration of multiple cluster nodes is only available for initial deployments on installer-provisioned infrastructure. 3.10.16. Optional: Configuring managed Secure Boot You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish , redfish-virtualmedia , or idrac-virtualmedia . To enable managed Secure Boot, add the bootMode configuration setting to each node: Example hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "/dev/sda" bootMode: UEFISecureBoot 2 1 Ensure the bmc.address setting uses redfish , redfish-virtualmedia , or idrac-virtualmedia as the protocol. See "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC" for additional details. 2 The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot. Note See "Configuring nodes" in the "Prerequisites" to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media. Note Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities. 3.11. Manifest configuration files 3.11.1. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 3.11.2. 
Optional: Configuring NTP for disconnected clusters OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Install Butane on your installation host by using the following command: USD sudo dnf -y install butane Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.14.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.14.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml 3.11.3. 
Configuring network components to run on the control plane You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes, which requires configuring the ingressVIP virtual IP address to run on the control plane nodes. Important When deploying remote workers in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes. Procedure Change to the directory storing the install-config.yaml file: USD cd ~/clusterconfigs Switch to the manifests subdirectory: USD cd manifests Create a file named cluster-network-avoid-workers-99-config.yaml : USD touch cluster-network-avoid-workers-99-config.yaml Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:, This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only: openshift-ingress-operator keepalived Save the cluster-network-avoid-workers-99-config.yaml file. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: "" Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true . Control plane nodes are not schedulable by default. For example, change mastersSchedulable: false to mastersSchedulable: true in that manifest. Note If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail. 3.11.4. Optional: Deploying routers on worker nodes During installation, the installer deploys router pods on worker nodes. By default, the installer installs two router pods. If a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a YAML file to set an appropriate number of router replicas. Important Deploying a cluster with only one worker node is not supported. While modifying the router replicas will address issues with the degraded state when deploying with one worker, the cluster loses high availability for the ingress API, which is not suitable for production environments. Note By default, the installer deploys two routers. If the cluster has no worker nodes, the installer deploys the two routers on the control plane nodes.
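After the cluster is deployed, you can check where the router pods are running and how many replicas the default IngressController defines before deciding whether to override the replica count. These commands are an optional check rather than part of the installation procedure, and they assume that oc is configured with access to the cluster.
USD oc get pods -n openshift-ingress -o wide
USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml
Review the spec.replicas field in the output; if it is unset, the Ingress Operator applies its default.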
Procedure Create a router-replicas.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" Note Replace <num-of-router-pods> with an appropriate value. If working with just one worker node, set replicas: to 1 . If working with more than 3 worker nodes, you can increase replicas: from the default value 2 as appropriate. Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory: USD cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml 3.11.5. Optional: Configuring the BIOS The following procedure configures the BIOS during the installation process. Procedure Create the manifests. Modify the BareMetalHost resource file corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Add the BIOS configuration to the spec section of the BareMetalHost resource: spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true Note Red Hat supports three BIOS configurations. Only servers with BMC type irmc are supported. Other types of servers are currently not supported. Create the cluster. Additional resources Bare metal configuration 3.11.6. Optional: Configuring the RAID The following procedure configures a redundant array of independent disks (RAID) during the installation process. Note OpenShift Container Platform supports hardware RAID for baseboard management controllers (BMCs) using the iRMC protocol only. OpenShift Container Platform 4.14 does not support software RAID. If you want to configure a hardware RAID for the node, verify that the node has a RAID controller. Procedure Create the manifests. Modify the BareMetalHost resource corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Note The following example uses a hardware RAID configuration because OpenShift Container Platform 4.14 does not support software RAID. If you added a specific RAID configuration to the spec section, this causes the node to delete the original RAID configuration in the preparing phase and perform a specified configuration on the RAID. For example: spec: raid: hardwareRAIDVolumes: - level: "0" 1 name: "sda" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0 1 level is a required field, and the others are optional fields. If you added an empty RAID configuration to the spec section, the empty configuration causes the node to delete the original RAID configuration during the preparing phase, but does not perform a new configuration. For example: spec: raid: hardwareRAIDVolumes: [] If you do not add a raid field in the spec section, the original RAID configuration is not deleted, and no new configuration will be performed. Create the cluster. 3.11.7. Optional: Configuring storage on nodes You can make changes to operating systems on OpenShift Container Platform nodes by creating MachineConfig objects that are managed by the Machine Config Operator (MCO). The MachineConfig specification includes an ignition config for configuring the machines at first boot. This config object can be used to modify files, systemd services, and other operating system features running on OpenShift Container Platform machines. Procedure Use the ignition config to configure storage on nodes. 
The following MachineConfig manifest example demonstrates how to add a partition to a device on a primary node. In this example, apply the manifest before installation to have a partition named recovery with a size of 16 GiB on the primary node. Create a custom-partitions.yaml file and include a MachineConfig object that contains your partition layout: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs Save and copy the custom-partitions.yaml file to the clusterconfigs/openshift directory: USD cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift Additional resources Bare metal configuration Partition naming scheme 3.12. Creating a disconnected registry In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. This can improve network efficiency and is necessary when the cluster nodes are on a network that does not have access to the internet. A local, or mirrored, copy of the registry requires the following: A certificate for the registry node. This can be a self-signed certificate. A web server, served by a container on the system. An updated pull secret that contains the certificate and local repository information. Note Creating a disconnected registry on a registry node is optional. If you need to create a disconnected registry on a registry node, you must complete all of the following sub-sections. Prerequisites If you have already prepared a mirror registry for Mirroring images for a disconnected installation , you can skip directly to Modify the install-config.yaml file to use the disconnected registry . 3.12.1. Preparing the registry node to host the mirrored registry The following steps must be completed prior to hosting a mirrored registry on bare metal. Procedure Open the firewall port on the registry node: USD sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent USD sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent USD sudo firewall-cmd --reload Install the required packages for the registry node: USD sudo yum -y install python3 podman httpd httpd-tools jq Create the directory structure where the repository information will be held: USD sudo mkdir -p /opt/registry/{auth,certs,data} 3.12.2. Mirroring the OpenShift Container Platform image repository for a disconnected registry Complete the following steps to mirror the OpenShift Container Platform image repository for a disconnected registry. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. Procedure Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page.
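Optionally, before setting the environment variables in the next step, you can confirm that the mirror registry is reachable and that your credentials are accepted. This is a sketch only; it uses the standard registry v2 API and the example registry host name used elsewhere in this document, so substitute your own host, port, and credentials, and add --cacert <path_to_ca> or -k if the registry uses a self-signed certificate that is not in the system trust store.
USD curl -u <user>:<password> https://registry.example.com:5000/v2/_catalog
A JSON list of repositories, possibly empty, indicates that the registry is serving requests and that the credentials are valid.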
Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. 
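After uploading the images from the removable media, you can optionally verify that the release content is present in the local registry before continuing. This is a sketch only; it reuses the environment variables set earlier in this procedure and assumes the registry permits v2 API queries with your credentials, so add --cacert <path_to_ca> or -k for a self-signed certificate if needed.
USD curl -u <user>:<password> https://USD{LOCAL_REGISTRY}/v2/USD{LOCAL_REPOSITORY}/tags/list
The returned tag list should include tags for the release you mirrored.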
If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-baremetal-install 3.12.3. Modify the install-config.yaml file to use the disconnected registry On the provisioner node, the install-config.yaml file should use the newly created pull-secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node's certificate and registry information. Procedure Add the disconnected registry node's certificate to the install-config.yaml file: USD echo "additionalTrustBundle: |" >> install-config.yaml The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces. USD sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml Add the mirror information for the registry to the install-config.yaml file: USD echo "imageContentSources:" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. USD echo " source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. USD echo " source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml 3.13. 
Validation checklist for installation ❏ OpenShift Container Platform installer has been retrieved. ❏ OpenShift Container Platform installer has been extracted. ❏ Required parameters for the install-config.yaml have been configured. ❏ The hosts parameter for the install-config.yaml has been configured. ❏ The bmc parameter for the install-config.yaml has been configured. ❏ Conventions for the values configured in the bmc address field have been applied. ❏ Created the OpenShift Container Platform manifests. ❏ (Optional) Deployed routers on worker nodes. ❏ (Optional) Created a disconnected registry. ❏ (Optional) Validate disconnected registry settings if in use. | [
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt <user>",
"sudo systemctl start firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images",
"sudo virsh pool-start default",
"sudo virsh pool-autostart default",
"vim pull-secret.txt",
"chronyc sources",
"MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms",
"ping time.cloudflare.com",
"PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms",
"export PUB_CONN=<baremetal_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal pkill dhclient;dhclient baremetal \"",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr \"x.x.x.x/yy\" ipv4.gateway \"a.a.a.a\" ipv4.dns \"b.b.b.b\" 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal nmcli con up baremetal \"",
"export PROV_CONN=<prov_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPROV_CONN\\\" nmcli con delete \\\"USDPROV_CONN\\\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \\\"USDPROV_CONN\\\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning \"",
"nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"192.168.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"192.168.0.0/24 via 192.168.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"10.0.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"10.0.0.0/24 via 10.0.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"ping <remote_worker_node_ip_address>",
"ping <control_plane_node_ip_address>",
"export VERSION=stable-4.14",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"sudo dnf install -y podman",
"sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"mkdir /home/kni/rhcos_image_cache",
"sudo semanage fcontext -a -t httpd_sys_content_t \"/home/kni/rhcos_image_cache(/.*)?\"",
"sudo restorecon -Rv /home/kni/rhcos_image_cache/",
"export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk.location')",
"export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/}",
"export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk[\"uncompressed-sha256\"]')",
"curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME}",
"ls -Z /home/kni/rhcos_image_cache",
"podman run -d --name rhcos_image_cache \\ 1 -v /home/kni/rhcos_image_cache:/var/www/html -p 8080:8080/tcp registry.access.redhat.com/ubi9/httpd-24",
"export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d\"/\" -f1)",
"export BOOTSTRAP_OS_IMAGE=\"http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}\"",
"echo \" bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}\"",
"platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ironic-inspector inspection failed: No disks satisfied root device hints",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"On\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"ForceOff\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"pxe\", \"BootSourceOverrideEnabled\": \"Once\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideMode\":\"UEFI\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"cd\", \"BootSourceOverrideEnabled\": \"Once\"}}'",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: *\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password>",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>",
"noProxy: .example.com,172.22.0.0/24,10.10.0.0/24",
"platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: \"Disabled\" 1",
"machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112",
"networkConfig: nmstate: interfaces: - name: <interface_name> wait-ip: ipv4+ipv6",
"platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>",
"interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5",
"nmstatectl gc <nmstate_yaml_file>",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6",
"networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes",
"networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4",
"interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"nmstatectl gc <nmstate_yaml_file> 1",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"/dev/sda\" bootMode: UEFISecureBoot 2",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"sudo dnf -y install butane",
"variant: openshift version: 4.14.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan",
"butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml",
"variant: openshift version: 4.14.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml",
"cd ~/clusterconfigs",
"cd manifests",
"touch cluster-network-avoid-workers-99-config.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"",
"sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"",
"cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: raid: hardwareRAIDVolumes: - level: \"0\" 1 name: \"sda\" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0",
"spec: raid: hardwareRAIDVolumes: []",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs",
"cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift",
"sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent",
"sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"sudo yum -y install python3 podman httpd httpd-tools jq",
"sudo mkdir -p /opt/registry/{auth,certs,data}",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-baremetal-install",
"echo \"additionalTrustBundle: |\" >> install-config.yaml",
"sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml",
"echo \"imageContentSources:\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-release\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-v4.0-art-dev\" >> install-config.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installation-workflow |
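The mirror registry variables and the oc adm release mirror invocations listed above are normally set together in one shell session before mirroring. The following consolidated sketch shows that flow for a connected mirror host; the registry host name, repository name, architecture, and pull-secret path are placeholder assumptions for illustration, not values taken from this document.

    # Sketch: export the mirroring variables in one place, then do a dry run first.
    # Host name, repository, and pull-secret path below are assumed example values.
    export OCP_RELEASE='4.14.0'
    export LOCAL_REGISTRY='mirror.example.com:5000'
    export LOCAL_REPOSITORY='ocp4/openshift4'
    export PRODUCT_REPO='openshift-release-dev'
    export LOCAL_SECRET_JSON="${HOME}/pull-secret.json"
    export RELEASE_NAME='ocp-release'
    export ARCHITECTURE='x86_64'

    # Confirm that the source and destination references resolve before mirroring for real.
    oc adm release mirror -a "${LOCAL_SECRET_JSON}" \
      --from="quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE}" \
      --to="${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}" \
      --to-release-image="${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}" \
      --dry-run

Dropping --dry-run then performs the actual mirror, as shown in the commands above.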
Chapter 6. Booting from cinder volumes | Chapter 6. Booting from cinder volumes You can create volumes in the Block Storage service (cinder) and connect these volumes to bare metal instances that you create with the Bare Metal Provisioning service (ironic). 6.1. Cinder volume boot for bare metal nodes You can boot bare metal nodes from a block storage device that is stored in OpenStack Block Storage (cinder). OpenStack Bare Metal (ironic) connects bare metal nodes to volumes through an iSCSI interface. Ironic enables this feature during the overcloud deployment. However, consider the following conditions before you deploy the overcloud: The overcloud requires the cinder iSCSI backend to be enabled. Set the CinderEnableIscsiBackend heat parameter to true during overcloud deployment. You cannot use the cinder volume boot feature with a Red Hat Ceph Storage backend. You must set the rd.iscsi.firmware=1 kernel parameter on the boot disk. 6.2. Configuring nodes for cinder volume boot You must configure certain options for each bare metal node to successfully boot from a cinder volume. Procedure Log in to the undercloud as the stack user. Source the overcloud credentials: Set the iscsi_boot capability to true and the storage-interface to cinder for the selected node: Replace <NODEID> with the ID of the chosen node. Create an iSCSI connector for the node: The connector ID for each node must be unique. In this example, the connector is iqn.2010-10.org.openstack.node<NUM> where <NUM> is an incremented number for each node. 6.3. Configuring iSCSI kernel parameters on the boot disk You must enable the iSCSI booting in the kernel on the image. To accomplish this, mount the QCOW2 image and enable iSCSI components on the image. Prerequisites Download a Red Hat Enterprise Linux QCOW2 image and copy it to the /home/stack/ directory on the undercloud. You can download Red Hat Enterprise Linux KVM images in QCOW2 format from the following pages: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Procedure Log in to the undercloud as the stack user. Mount the QCOW2 image and access it as the root user: Load the nbd kernel module: Connect the QCOW image as /dev/nbd0 : Check the partitions on the NBD: New Red Hat Enterprise Linux QCOW2 images contain only one partition, which is usually named /dev/nbd0p1 on the NBD. Create a mount point for the image: Mount the image: Mount your dev directory so that the image has access to device information on the host: Change the root directory to the mount point: Configure iSCSI on the image: Note Some commands in this step might report the following error: This error is not critical and you can ignore the error. Move the resolv.conf file to a temporary location: Create a temporary resolv.conf file to resolve DNS requests for the Red Hat Content Delivery Network. This example uses 8.8.8.8 for the nameserver: Register the mounted image to the Red Hat Content Delivery Network: Enter your user name and password when the command prompts you. Attach a subscription that contains Red Hat Enterprise Linux: Substitute <POOLID> with the pool ID of the subscription. Disable the default repositories: Enable the Red Hat Enterprise Linux repository: Red Hat Enterprise Linux 7: Red Hat Enterprise Linux 8: Install the iscsi-initiator-utils package: Unregister the mounted image: Restore the original resolv.conf file: Check the kernel version on the mounted image: For example, if the output is kernel-3.10.0-1062.el7.x86_64 , the kernel version is 3.10.0-1062.el7.x86_64 . 
Note this kernel version for the next step. Note New Red Hat Enterprise Linux QCOW2 images have only one kernel version installed. If more than one kernel version is installed, use the latest one. Add the network and iscsi dracut modules to the initramfs image: Replace <KERNELVERSION> with the version number that you obtained from rpm -qa kernel . The following example uses 3.10.0-1062.el7.x86_64 as the kernel version: Exit from the mounted image back to your host operating system: Unmount the image: Unmount the dev directory from the temporary mount point: Unmount the image from the mount point: Disconnect the QCOW2 image from /dev/nbd0/ : Rebuild the grub menu configuration on the image: Install the libguestfs-tools package: Important If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud: Set the libguestfs backend to use QEMU directly: Update the grub configuration on the image: 6.4. Creating and using a boot volume in cinder You must upload the iSCSI-enabled image to OpenStack Image Storage (glance) and create the boot volume in OpenStack Block Storage (cinder). Procedure Log in to the undercloud as the stack user. Upload the iSCSI-enabled image to glance: Create a volume from the image: Create a bare metal instance that uses the boot volume in cinder: | [
"source ~/overcloudrc",
"openstack baremetal node set --property capabilities=iscsi_boot:true --storage-interface cinder <NODEID>",
"openstack baremetal volume connector create --node <NODEID> --type iqn --connector-id iqn.2010-10.org.openstack.node<NUM>",
"sudo modprobe nbd",
"sudo qemu-nbd --connect=/dev/nbd0 <IMAGE>",
"sudo fdisk /dev/nbd0 -l",
"mkdir /tmp/mountpoint",
"sudo mount /dev/nbd0p1 /tmp/mountpoint/",
"sudo mount -o bind /dev /tmp/mountpoint/dev",
"sudo chroot /tmp/mountpoint /bin/bash",
"lscpu: cannot open /proc/cpuinfo: No such file or directory",
"mv /etc/resolv.conf /etc/resolv.conf.bak",
"echo \"nameserver 8.8.8.8\" > /etc/resolv.conf",
"subscription-manager register",
"subscription-manager list --all --available subscription-manager attach --pool <POOLID>",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable \"rhel-7-server-rpms\"",
"subscription-manager repos --enable \"rhel-8-for-x86_64-baseos-eus-rpms\"",
"yum install -y iscsi-initiator-utils",
"subscription-manager unregister",
"mv /etc/resolv.conf.bak /etc/resolv.conf",
"rpm -qa kernel",
"dracut --force --add \"network iscsi\" /boot/initramfs-<KERNELVERSION>.img <KERNELVERSION>",
"dracut --force --add \"network iscsi\" /boot/initramfs-3.10.0-1062.el7.x86_64.img 3.10.0-1062.el7.x86_64",
"exit",
"sudo umount /tmp/mountpoint/dev",
"sudo umount /tmp/mountpoint",
"sudo qemu-nbd --disconnect /dev/nbd0",
"sudo yum -y install libguestfs-tools",
"sudo systemctl disable --now iscsid.socket",
"export LIBGUESTFS_BACKEND=direct",
"guestfish -a /tmp/images/{{ dib_image }} -m /dev/sda3 sh \"mount /dev/sda2 /boot/efi && rm /boot/grub2/grubenv && /sbin/grub2-mkconfig -o /boot/grub2/grub.cfg && cp /boot/grub2/grub.cfg /boot/efi/EFI/redhat/grub.cfg && grubby --update-kernel=ALL --args=\\\"rd.iscsi.firmware=1\\\" && cp /boot/grub2/grubenv /boot/efi/EFI/redhat/grubenv && echo Success\"",
"openstack image create --disk-format qcow2 --container-format bare --file rhel-server-7.7-x86_64-kvm.qcow2 rhel-server-7.7-iscsi",
"openstack volume create --size 10 --image rhel-server-7.7-iscsi --bootable rhel-test-volume",
"openstack server create --flavor baremetal --volume rhel-test-volume --key default rhel-test"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/bare_metal_provisioning/booting-from-cinder-volumes |
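The chapter above configures each bare metal node individually and notes that the connector ID must be unique per node, with an incrementing <NUM>. As a rough illustrative sketch only, the loop below applies the iscsi_boot capability, the cinder storage interface, and an incrementing connector ID to a list of nodes; the node names and the starting number are assumptions, so substitute your own node names or UUIDs.

    # Sketch: configure several bare metal nodes for cinder volume boot in one pass.
    # The node names are assumed example values.
    source ~/overcloudrc
    NUM=1
    for NODE in openshift-master-0 openshift-master-1 openshift-master-2; do
      openstack baremetal node set \
        --property capabilities=iscsi_boot:true \
        --storage-interface cinder "${NODE}"
      openstack baremetal volume connector create \
        --node "${NODE}" --type iqn \
        --connector-id "iqn.2010-10.org.openstack.node${NUM}"
      NUM=$((NUM + 1))
    done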
3.6. Additional Resources | 3.6. Additional Resources See the following resources for more information about managing users and groups. 3.6.1. Installed Documentation For information about various utilities for managing users and groups, see the following manual pages: chage (1) - A command to modify password aging policies and account expiration. gpasswd (1) - A command to administer the /etc/group file. groupadd (8) - A command to add groups. grpck (8) - A command to verify the /etc/group file. groupdel (8) - A command to remove groups. groupmod (8) - A command to modify group membership. pwck (8) - A command to verify the /etc/passwd and /etc/shadow files. pwconv (8) - A tool to convert standard passwords to shadow passwords. pwunconv (8) - A tool to convert shadow passwords to standard passwords. useradd (8) - A command to add users. userdel (8) - A command to remove users. usermod (8) - A command to modify users. For information about related configuration files, see: group (5) - The file containing group information for the system. passwd (5) - The file containing user information for the system. shadow (5) - The file containing passwords and account expiration information for the system. login.defs (5) - The file containing shadow password suite configuration. useradd (8) - For /etc/default/useradd , see the section "Changing the default values" in the manual page. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-users-groups-additional-resources
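As a brief orientation to the utilities referenced above, the following session sketches a typical sequence; the user and group names are invented for the example.

    # Sketch: common user and group management tasks (example names only).
    groupadd developers                # add a group; see groupadd(8)
    useradd -m -G developers jdoe      # add a user with a home directory and a supplementary group
    passwd jdoe                        # set the user's password
    chage -M 90 jdoe                   # require a password change every 90 days; see chage(1)
    usermod -aG wheel jdoe             # append the user to another group; see usermod(8)
    pwck                               # verify /etc/passwd and /etc/shadow; see pwck(8)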
11.2. Configuring the Kerberos KDC | 11.2. Configuring the Kerberos KDC Install the master KDC first and then install any necessary secondary servers after the master is set up. Important Setting up Kerberos KDC manually is not recommended. The recommended way to introduce Kerberos into Red Hat Enterprise Linux environments is to use the Identity Management feature. 11.2.1. Configuring the Master KDC Server Important The KDC system should be a dedicated machine. This machine needs to be very secure - if possible, it should not run any services other than the KDC. Install the required packages for the KDC: Edit the /etc/krb5.conf and /var/kerberos/krb5kdc/kdc.conf configuration files to reflect the realm name and domain-to-realm mappings. For example: A simple realm can be constructed by replacing instances of EXAMPLE.COM and example.com with the correct domain name - being certain to keep uppercase and lowercase names in the correct format - and by changing the KDC from kerberos.example.com to the name of the Kerberos server. By convention, all realm names are uppercase and all DNS host names and domain names are lowercase. The man pages of these configuration files have full details about the file formats. Create the database using the kdb5_util utility. The create command creates the database that stores keys for the Kerberos realm. The -s argument creates a stash file in which the master server key is stored. If no stash file is present from which to read the key, the Kerberos server ( krb5kdc ) prompts the user for the master server password (which can be used to regenerate the key) every time it starts. Edit the /var/kerberos/krb5kdc/kadm5.acl file. This file is used by kadmind to determine which principals have administrative access to the Kerberos database and their level of access. For example: Most users are represented in the database by a single principal (with a NULL , or empty, instance, such as [email protected] ). In this configuration, users with a second principal with an instance of admin (for example, joe/[email protected] ) are able to exert full administrative control over the realm's Kerberos database. After kadmind has been started on the server, any user can access its services by running kadmin on any of the clients or servers in the realm. However, only users listed in the kadm5.acl file can modify the database in any way, except for changing their own passwords. Note The kadmin utility communicates with the kadmind server over the network, and uses Kerberos to handle authentication. Consequently, the first principal must already exist before connecting to the server over the network to administer it. Create the first principal with the kadmin.local command, which is specifically designed to be used on the same host as the KDC and does not use Kerberos for authentication. Create the first principal using kadmin.local at the KDC terminal: Start Kerberos using the following commands: Add principals for the users using the addprinc command within kadmin . kadmin and kadmin.local are command line interfaces to the KDC. As such, many commands - such as addprinc - are available after launching the kadmin program. Refer to the kadmin man page for more information. Verify that the KDC is issuing tickets. First, run kinit to obtain a ticket and store it in a credential cache file. Next, use klist to view the list of credentials in the cache and use kdestroy to destroy the cache and the credentials it contains.
Note By default, kinit attempts to authenticate using the same system login user name (not the Kerberos server). If that user name does not correspond to a principal in the Kerberos database, kinit issues an error message. If that happens, supply kinit with the name of the correct principal as an argument on the command line: 11.2.2. Setting up Secondary KDCs When there are multiple KDCs for a given realm, one KDC (the master KDC ) keeps a writable copy of the realm database and runs kadmind . The master KDC is also the realm's admin server . Additional secondary KDCs keep read-only copies of the database and run kpropd . The master and slave propagation procedure entails the master KDC dumping its database to a temporary dump file and then transmitting that file to each of its slaves, which then overwrite their previously received read-only copies of the database with the contents of the dump file. To set up a secondary KDC: Install the required packages for the KDC: Copy the master KDC's krb5.conf and kdc.conf files to the secondary KDC. Start kadmin.local from a root shell on the master KDC. Use the kadmin.local add_principal command to create a new entry for the master KDC's host service. [root@masterkdc ~]# kadmin.local -r EXAMPLE.COM Authenticating as principal root/[email protected] with password. kadmin: add_principal -randkey host/masterkdc.example.com Principal "host/[email protected]" created. kadmin: ktadd host/masterkdc.example.com Entry for principal host/masterkdc.example.com with kvno 3, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/masterkdc.example.com with kvno 3, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/masterkdc.example.com with kvno 3, encryption type DES with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/masterkdc.example.com with kvno 3, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:/etc/krb5.keytab. kadmin: quit Use the kadmin.local ktadd command to set a random key for the service and store the random key in the master's default keytab file. Note This key is used by the kprop command to authenticate to the secondary servers. You will only need to do this once, regardless of how many secondary KDC servers you install. Start kadmin from a root shell on the secondary KDC. Use the kadmin.local add_principal command to create a new entry for the secondary KDC's host service. [root@slavekdc ~]# kadmin -p jsmith/[email protected] -r EXAMPLE.COM Authenticating as principal jsmith/[email protected] with password. Password for jsmith/[email protected]: kadmin: add_principal -randkey host/slavekdc.example.com Principal "host/[email protected]" created. kadmin: ktadd host/[email protected] Entry for principal host/slavekdc.example.com with kvno 3, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/slavekdc.example.com with kvno 3, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/slavekdc.example.com with kvno 3, encryption type DES with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/slavekdc.example.com with kvno 3, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:/etc/krb5.keytab. 
kadmin: quit Use the kadmin.local ktadd command to set a random key for the service and store the random key in the secondary KDC server's default keytab file. This key is used by the kpropd service when authenticating clients. With its service key, the secondary KDC could authenticate any client which would connect to it. Obviously, not all potential clients should be allowed to provide the kprop service with a new realm database. To restrict access, the kprop service on the secondary KDC will only accept updates from clients whose principal names are listed in /var/kerberos/krb5kdc/kpropd.acl . Add the master KDC's host service's name to that file. Once the secondary KDC has obtained a copy of the database, it will also need the master key which was used to encrypt it. If the KDC database's master key is stored in a stash file on the master KDC (typically named /var/kerberos/krb5kdc/.k5.REALM ), either copy it to the secondary KDC using any available secure method, or create a dummy database and identical stash file on the secondary KDC by running kdb5_util create -s and supplying the same password. The dummy database will be overwritten by the first successful database propagation. Ensure that the secondary KDC's firewall allows the master KDC to contact it using TCP on port 754 ( krb5_prop ), and start the kprop service. Verify that the kadmin service is disabled . Perform a manual database propagation test by dumping the realm database on the master KDC to the default data file which the kprop command will read ( /var/kerberos/krb5kdc/slave_datatrans ). [root@masterkdc ~]# kdb5_util dump /var/kerberos/krb5kdc/slave_datatrans Use the kprop command to transmit its contents to the secondary KDC. [root@masterkdc ~]# kprop slavekdc.example.com Using kinit , verify that the client system is able to correctly obtain the initial credentials from the KDC. The /etc/krb5.conf for the client should list only the secondary KDC in its list of KDCs. Create a script which dumps the realm database and runs the kprop command to transmit the database to each secondary KDC in turn, and configure the cron service to run the script periodically. 11.2.3. Kerberos Key Distribution Center Proxy Some administrators might choose to make the default Kerberos ports inaccessible in their deployment. To allow users, hosts, and services to obtain Kerberos credentials, you can use the HTTPS service as a proxy that communicates with Kerberos via the HTTPS port 443. In Identity Management (IdM), the Kerberos Key Distribution Center Proxy (KKDCP) provides this functionality. KKDCP Server On an IdM server, KKDCP is enabled by default. The KKDCP is automatically enabled each time the Apache web server starts, if the attribute and value pair ipaConfigString=kdcProxyEnabled exists in the directory. In this situation, the symbolic link /etc/httpd/conf.d/ipa-kdc-proxy.conf is created. Thus, you can verify that KKDCP is enabled on an IdM Server by checking that the symbolic link exists. See the example server configurations below for more details. Example 11.1. Configuring the KKDCP server I Using the following example configuration, you can enable TCP to be used as the transport protocol between the IdM KKDCP and the Active Directory realm, where multiple Kerberos servers are used: In the /etc/ipa/kdcproxy/kdcproxy.conf file, set the use_dns parameter in the [global] section to false : Put the proxied realm information into the /etc/ipa/kdcproxy/kdcproxy.conf file. For the [AD. 
EXAMPLE.COM ] realm with proxy, for example, list the realm configuration parameters as follows: Important The realm configuration parameters must list multiple servers separated by a space, as opposed to /etc/krb5.conf and kdc.conf , in which certain options may be specified multiple times. Restart IdM services: Example 11.2. Configuring the KKDCP server II This example server configuration relies on the DNS service records to find AD servers to communicate with. In the /etc/ipa/kdcproxy/kdcproxy.conf file, the [global] section, set the use_dns parameter to true : The configs parameter allows you to load other configuration modules. In this case, the configuration is read from the MIT libkrb5 library. Optional: In case you do not want to use DNS service records, add explicit AD servers to the [realms] section of the /etc/krb5.conf file. If the realm with proxy is, for example, AD. EXAMPLE.COM , you add: Restart IdM services: KKDCP Client Client systems point to the KDC proxies through their /etc/krb5.conf files. Follow this procedure to reach the AD server. On the client, open the /etc/krb5.conf file, and add the name of the AD realm to the [realms] section: Open the /etc/sssd/sssd.conf file, and add the krb5_use_kdcinfo = False line to your IdM domain section: Restart the SSSD service: Additional Resources For details on configuring KKDCP for an Active Directory realm, see Configure IPA server as a KDC Proxy for AD Kerberos communication in Red Hat Knowledgebase. | [
"yum install krb5-server krb5-libs krb5-workstation",
"[logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = EXAMPLE.COM dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true allow_weak_crypto = true [realms] EXAMPLE.COM = { kdc = kdc.example.com.:88 admin_server = kdc.example.com default_domain = example.com } [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM",
"kdb5_util create -s",
"*/[email protected] *",
"kadmin.local -q \"addprinc username /admin\"",
"systemctl start krb5kdc.service systemctl start kadmin.service",
"kinit principal",
"yum install krb5-server krb5-libs krb5-workstation",
"kadmin.local -r EXAMPLE.COM Authenticating as principal root/[email protected] with password. kadmin: add_principal -randkey host/masterkdc.example.com Principal \"host/[email protected]\" created. kadmin: ktadd host/masterkdc.example.com Entry for principal host/masterkdc.example.com with kvno 3, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/masterkdc.example.com with kvno 3, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/masterkdc.example.com with kvno 3, encryption type DES with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/masterkdc.example.com with kvno 3, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:/etc/krb5.keytab. kadmin: quit",
"kadmin -p jsmith/[email protected] -r EXAMPLE.COM Authenticating as principal jsmith/[email protected] with password. Password for jsmith/[email protected]: kadmin: add_principal -randkey host/slavekdc.example.com Principal \"host/[email protected]\" created. kadmin: ktadd host/[email protected] Entry for principal host/slavekdc.example.com with kvno 3, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/slavekdc.example.com with kvno 3, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/slavekdc.example.com with kvno 3, encryption type DES with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab. Entry for principal host/slavekdc.example.com with kvno 3, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:/etc/krb5.keytab. kadmin: quit",
"echo host/[email protected] > /var/kerberos/krb5kdc/kpropd.acl",
"kdb5_util dump /var/kerberos/krb5kdc/slave_datatrans",
"kprop slavekdc.example.com",
"[realms] EXAMPLE.COM = { kdc = slavekdc.example.com.:88 admin_server = kdc.example.com default_domain = example.com }",
"ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf lrwxrwxrwx. 1 root root 36 Aug 15 09:37 /etc/httpd/conf.d/ipa-kdc-proxy.conf -> /etc/ipa/kdcproxy/ipa-kdc-proxy.conf",
"[global] use_dns = false",
"[AD. EXAMPLE.COM ] kerberos = kerberos+tcp://1.2.3.4:88 kerberos+tcp://5.6.7.8:88 kpasswd = kpasswd+tcp://1.2.3.4:464 kpasswd+tcp://5.6.7.8:464",
"ipactl restart",
"[global] configs = mit use_dns = true",
"[realms] AD. EXAMPLE.COM = { kdc = ad-server.ad.example.com kpasswd_server = ad-server.ad.example.com }",
"ipactl restart",
"[realms] AD. EXAMPLE.COM { kdc = https://ipa-server.example.com/KdcProxy kdc = https://ipa-server2.example.com/KdcProxy kpasswd_server = https://ipa-server.example.com/KdcProxy kpasswd_server = https://ipa-server2.example.com/KdcProxy }",
"[domain/ example.com ] krb5_use_kdcinfo = False",
"systemctl restart sssd.service"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/configuring_a_kerberos_5_server |
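The secondary KDC procedure above ends by asking for a script that dumps the realm database, pushes it to each secondary KDC with kprop, and runs from cron, but it does not show one. A minimal sketch follows; the secondary KDC host names, the script path, and the five-minute schedule are assumptions for illustration.

    #!/bin/bash
    # Sketch: dump the Kerberos realm database and propagate it to each secondary KDC.
    # The host names below are assumed example values.
    SLAVE_KDCS="slavekdc.example.com slavekdc2.example.com"
    kdb5_util dump /var/kerberos/krb5kdc/slave_datatrans
    for KDC in ${SLAVE_KDCS}; do
        kprop "${KDC}"
    done

A crontab entry such as */5 * * * * /usr/local/sbin/krb5-prop.sh (an assumed path) would then run the propagation every five minutes.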
Assisted Installer for OpenShift Container Platform | Assisted Installer for OpenShift Container Platform Assisted Installer for OpenShift Container Platform 2023 Assisted Installer User Guide Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/assisted_installer_for_openshift_container_platform/index |
Chapter 2. Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Agent-based Installer | Chapter 2. Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Agent-based Installer In OpenShift Container Platform 4.14, you can use the Agent-based Installer to install a cluster on Oracle(R) Cloud Infrastructure (OCI), so that you can run cluster workloads on infrastructure that supports dedicated, hybrid, public, and multiple cloud environments. 2.1. The Agent-based Installer and OCI overview You can install an OpenShift Container Platform cluster on Oracle(R) Cloud Infrastructure (OCI) by using the Agent-based Installer. Both Red Hat and Oracle test, validate, and support running OCI and Oracle(R) Cloud VMware Solution (OCVS) workloads in an OpenShift Container Platform cluster on OCI. The Agent-based installer provides the ease of use of the Assisted Installation service, but with the capability to install a cluster in either a connected or disconnected environment. The following diagrams show workflows for connected and disconnected environments: Figure 2.1. Workflow for using the Agent-based installer in a connected environment to install a cluster on OCI Figure 2.2. Workflow for using the Agent-based installer in a disconnected environment to install a cluster on OCI OCI provides services that can meet your regulatory compliance, performance, and cost-effectiveness needs. OCI supports 64-bit x86 instances and 64-bit ARM instances. Additionally, OCI provides an OCVS service where you can move VMware workloads to OCI with minimal application re-architecture. Note Consider selecting a nonvolatile memory express (NVMe) drive or a solid-state drive (SSD) for your boot disk, because these drives offer low latency and high throughput capabilities for your boot disk. By running your OpenShift Container Platform cluster on OCI, you can access the following capabilities: Compute flexible shapes, where you can customize the number of Oracle(R) CPUs (OCPUs) and memory resources for your VM. With access to this capability, a cluster's workload can perform operations in a resource-balanced environment. You can find all RHEL-certified OCI shapes by going to the Oracle page on the Red Hat Ecosystem Catalog portal. Block Volume storage, where you can configure scaling and auto-tuning settings for your storage volume, so that the Block Volume service automatically adjusts the performance level to optimize performance. OCVS, where you can deploy a cluster in a public-cloud environment that operates on a VMware(R) vSphere software-defined data center (SDDC). You continue to retain full-administrative control over your VMware vSphere environment, but you can use OCI services to improve your applications on flexible, scalable, and secure infrastructure. Important To ensure the best performance conditions for your cluster workloads that operate on OCI and on the OCVS service, ensure volume performance units (VPUs) for your block volume is sized for your workloads. The following list provides some guidance in selecting the VPUs needed for specific performance needs: Test or proof of concept environment: 100 GB, and 20 to 30 VPUs. Basic environment: 500 GB, and 60 VPUs. Heavy production environment: More than 500 GB, and 100 or more VPUs. Consider reserving additional VPUs to provide sufficient capacity for updates and scaling activities. For more information about VPUs, see Volume Performance Units (Oracle documentation). 
Additional resources Installation process Internet access for OpenShift Container Platform Understanding the Agent-based Installer Overview of the Compute Service (Oracle documentation) Volume Performance Units (Oracle documentation) Instance Sizing Recommendations for OpenShift Container Platform on OCI Nodes (Oracle documentation) 2.2. Creating OCI infrastructure resources and services You must create an OCI environment on your virtual machine (VM) shape. By creating this environment, you can install OpenShift Container Platform and deploy a cluster on an infrastructure that supports a wide range of cloud options and strong security policies. Having prior knowledge of OCI components can help you with understanding the concept of OCI resources and how you can configure them to meet your organizational needs. The Agent-based installer method for installing an OpenShift Container Platform cluster on OCI requires that you manually create OCI resources and services. Important To ensure compatibility with OpenShift Container Platform, you must set A as the record type for each DNS record and name records as follows: api.<cluster_name>.<base_domain> , which targets the apiVIP parameter of the API load balancer. api-int.<cluster_name>.<base_domain> , which targets the apiVIP parameter of the API load balancer. *.apps.<cluster_name>.<base_domain> , which targets the ingressVIP parameter of the Ingress load balancer. The api.* and api-int.* DNS records relate to control plane machines, so you must ensure that all nodes in your installed OpenShift Container Platform cluster can access these DNS records. Prerequisites You configured an OCI account to host the OpenShift Container Platform cluster. See Prerequisites (Oracle documentation) . Procedure Create the required OCI resources and services. See OCI Resources Needed for Using the Agent-based Installer (Oracle documentation) . Additional resources Learn About Oracle Cloud Basics (Oracle documentation) 2.3. Creating configuration files for installing a cluster on OCI You need to create the install-config.yaml and the agent-config.yaml configuration files so that you can use the Agent-based Installer to generate a bootable ISO image. The Agent-based installation comprises a bootable ISO that has the Assisted discovery agent and the Assisted Service. Both of these components are required to perform the cluster installation, but the latter component runs on only one of the hosts. At a later stage, you must follow the steps in the Oracle documentation for uploading your generated agent ISO image to Oracle's default Object Storage bucket, which is the initial step for integrating your OpenShift Container Platform cluster on Oracle(R) Cloud Infrastructure (OCI). Note You can also use the Agent-based Installer to generate or accept Zero Touch Provisioning (ZTP) custom resources. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing the method for users. You have read the "Preparing to install with the Agent-based Installer" documentation. You downloaded the Agent-Based Installer and the command-line interface (CLI) from the Red Hat Hybrid Cloud Console. You have logged in to the OpenShift Container Platform with administrator privileges. Procedure For a disconnected environment, mirror the Mirror registry for Red Hat OpenShift to your local container image registry. 
Important Check that your openshift-install binary version relates to your local image container registry and not a shared registry, such as Red Hat Quay. $ ./openshift-install version Example output for a shared registry binary ./openshift-install 4.15.0 built from commit ae7977b7d1ca908674a0d45c5c243c766fa4b2ca release image registry.ci.openshift.org/origin/release:4.15ocp-release@sha256:0da6316466d60a3a4535d5fed3589feb0391989982fba59d47d4c729912d6363 release architecture amd64 Configure the install-config.yaml configuration file to meet the needs of your organization. Example install-config.yaml configuration file that demonstrates setting an external platform # install-config.yaml apiVersion: v1 baseDomain: <base_domain> 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networkType: OVNKubernetes machineNetwork: - cidr: <ip_address_from_cidr> 2 serviceNetwork: - 172.30.0.0/16 compute: - architecture: amd64 3 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 4 hyperthreading: Enabled name: master replicas: 3 platform: external: platformName: oci 5 cloudControllerManager: External sshKey: <public_ssh_key> 6 pullSecret: '<pull_secret>' 7 # ... 1 The base domain of your cloud provider. 2 The IP address from the virtual cloud network (VCN) that the CIDR allocates to resources and components that operate on your network. 3 4 Depending on your infrastructure, you can select either x86_64 , or amd64 . 5 Set OCI as the external platform, so that OpenShift Container Platform can integrate with OCI. 6 Specify your SSH public key. 7 The pull secret that you need for authentication purposes when downloading container images for OpenShift Container Platform components and services, such as Quay.io. See Install OpenShift Container Platform 4 from the Red Hat Hybrid Cloud Console. Create a directory on your local system named openshift . Important Do not move the install-config.yaml and agent-config.yaml configuration files to the openshift directory. Complete the steps in the "Configuration Files" section of the Oracle documentation to download Oracle Cloud Controller Manager (CCM) and Oracle Container Storage Interface (CSI) manifests as an archive file and save the archive file in your openshift directory. You need the Oracle CCM manifests for deploying the Oracle CCM during cluster installation so that OpenShift Container Platform can connect to the external OCI platform. You need the Oracle CSI custom manifests for deploying the Oracle CSI driver during cluster installation so that OpenShift Container Platform can claim required objects from OCI. Access the custom manifest files that are provided in the "Configuration Files" section of the Oracle documentation. Change the oci-cloud-controller-manager secret that is defined in the oci-ccm.yml configuration file to match your organization's region, compartment OCID, VCN OCID, and the subnet OCID from the load balancer. Use the Agent-based Installer to generate a minimal ISO image, which excludes the rootfs image, by entering the following command in your OpenShift Container Platform CLI. You can use this image later in the process to boot all your cluster's nodes. $ ./openshift-install agent create image --log-level debug The command also completes the following actions: Creates a subdirectory, ./<installation_directory>/auth , and places the kubeadmin-password and kubeconfig files in the subdirectory.
Creates a rendezvousIP file based on the IP address that you specified in the agent-config.yaml configuration file. Optional: Any modifications you made to agent-config.yaml and install-config.yaml configuration files get imported to the Zero Touch Provisioning (ZTP) custom resources. Important The Agent-based Installer uses Red Hat Enterprise Linux CoreOS (RHCOS). The rootfs image, which is mentioned in a later listed item, is required for booting, recovering, and repairing your operating system. Configure the agent-config.yaml configuration file to meet your organization's requirements. Example agent-config.yaml configuration file that sets values for an IPv4 formatted network. apiVersion: v1alpha1 metadata: name: <cluster_name> 1 namespace: <cluster_namespace> 2 rendezvousIP: <ip_address_from_CIDR> 3 bootArtifactsBaseURL: <server_URL> 4 # ... 1 The cluster name that you specified in your DNS record. 2 The namespace of your cluster on OpenShift Container Platform. 3 If you use IPv4 as the network IP address format, ensure that you set the rendezvousIP parameter to an IPv4 address that the VCN's Classless Inter-Domain Routing (CIDR) method allocates on your network. Also ensure that at least one instance from the pool of instances that you booted with the ISO matches the IP address value you set for rendezvousIP . 4 The URL of the server where you want to upload the rootfs image. Apply one of the following two updates to your agent-config.yaml configuration file: For a disconnected network: After you run the command to generate a minimal ISO Image, the Agent-based installer saves the rootfs image into the ./<installation_directory>/boot-artifacts directory on your local system. Use your preferred web server, such as any Hypertext Transfer Protocol daemon ( httpd ), to upload rootfs to the location stated in the bootArtifactsBaseURL parameter in the agent-config.yaml configuration file. For example, if the bootArtifactsBaseURL parameter states http://192.168.122.20 , you would upload the generated rootfs image to this location, so that the Agent-based installer can access the image from http://192.168.122.20/agent.x86_64-rootfs.img . After the Agent-based installer boots the minimal ISO for the external platform, the Agent-based Installer downloads the rootfs image from the http://192.168.122.20/agent.x86_64-rootfs.img location into the system memory. Note The Agent-based Installer also adds the value of the bootArtifactsBaseURL to the minimal ISO Image's configuration, so that when the Operator boots a cluster's node, the Agent-based Installer downloads the rootfs image into system memory. For a connected network: You do not need to specify the bootArtifactsBaseURL parameter in the agent-config.yaml configuration file. The default behavior of the Agent-based Installer reads the rootfs URL location from https://rhcos.mirror.openshift.com . After the Agent-based Installer boots the minimal ISO for the external platform, the Agent-based Installer then downloads the rootfs file into your system's memory from the default RHCOS URL. Important Consider that the full ISO image, which is in excess of 1 GB, includes the rootfs image. The image is larger than the minimal ISO Image, which is typically less than 150 MB. 
Additional resources About OpenShift Container Platform installation Selecting a cluster installation type Preparing to install with the Agent-based Installer Downloading the Agent-based Installer Mirroring the OpenShift Container Platform image repository Optional: Using ZTP manifests 2.4. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. When using a firewall, make additional configurations to the firewall so that OpenShift Container Platform can access the sites that it requires to function. For a disconnected environment, you must mirror content from both Red Hat and Oracle. This environment requires that you create firewall rules to expose your firewall to specific ports and registries. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Set the following registry URLs for your firewall's allowlist: URL Port Function registry.redhat.io 443 Provides core container images access.redhat.com 443 Hosts a signature store that a container client requires for verifying images pulled from registry.access.redhat.com . In a firewall environment, ensure that this resource is on the allowlist. registry.access.redhat.com 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443 Provides core container images cdn.quay.io 443 Provides core container images cdn01.quay.io 443 Provides core container images cdn02.quay.io 443 Provides core container images cdn03.quay.io 443 Provides core container images cdn04.quay.io 443 Provides core container images cdn05.quay.io 443 Provides core container images cdn06.quay.io 443 Provides core container images sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-6].quay.io in your allowlist. You can use the wildcard *.access.redhat.com to simplify the configuration and ensure that all subdomains, including registry.access.redhat.com , are allowed. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Set your firewall's allowlist to include any site that provides resources for a language or framework that your builds require. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443 Required for Telemetry api.access.redhat.com 443 Required for Telemetry infogw.api.openshift.com 443 Required for Telemetry console.redhat.com 443 Required for Telemetry and for insights-operator Set your firewall's allowlist to include the following registry URLs: URL Port Function api.openshift.com 443 Required both for your cluster token and to check if updates are available for the cluster. rhcos.mirror.openshift.com 443 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. 
Set your firewall's allowlist to include the following external URLs. Each repository URL hosts OCI containers. Consider mirroring images to as few repositories as possible to reduce any performance issues. URL Port Function k8s.gcr.io port A Kubernetes registry that hosts container images for a community-based image registry. This image registry is hosted on a custom Google Container Registry (GCR) domain. ghcr.io port A GitHub image registry where you can store and manage Open Container Initiative images. Requires an access token to publish, install, and delete private, internal, and public packages. storage.googleapis.com 443 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. registry.k8s.io port Replaces the k8s.gcr.io image registry because the k8s.gcr.io image registry does not support other platforms and vendors. 2.5. Running a cluster on OCI To run a cluster on Oracle(R) Cloud Infrastructure (OCI), you must upload the generated agent ISO image to the default Object Storage bucket on OCI. Additionally, you must create a compute instance from the supplied base image, so that your OpenShift Container Platform and OCI can communicate with each other for the purposes of running the cluster on OCI. Note OCI supports the following OpenShift Container Platform cluster topologies: Installing an OpenShift Container Platform cluster on a single node. A highly available cluster that has a minimum of three control plane instances and two compute instances. A compact three-node cluster that has a minimum of three control plane instances. Prerequisites You generated an agent ISO image. See the "Creating configuration files for installing a cluster on OCI" section. Procedure Upload the agent ISO image to Oracle's default Object Storage bucket and import the agent ISO image as a custom image to this bucket. Ensure you that you configure the custom image to boot in Unified Extensible Firmware Interface (UEFI) mode. For more information, see Creating the OpenShift Container Platform ISO Image (Oracle documentation) . Create a compute instance from the supplied base image for your cluster topology. See Creating the OpenShift Container Platform cluster on OCI (Oracle documentation) . Important Before you create the compute instance, check that you have enough memory and disk resources for your cluster. Additionally, ensure that at least one compute instance has the same IP address as the address stated under rendezvousIP in the agent-config.yaml file. Additional resources Recommended resources for topologies Instance Sizing Recommendations for OpenShift Container Platform on OCI Nodes (Oracle documentation) Troubleshooting OpenShift Container Platform on OCI (Oracle documentation) 2.6. Verifying that your Agent-based cluster installation runs on OCI Verify that your cluster was installed and is running effectively on Oracle(R) Cloud Infrastructure (OCI). Prerequisites You created all the required OCI resources and services. See the "Creating OCI infrastructure resources and services" section. You created install-config.yaml and agent-config.yaml configuration files. See the "Creating configuration files for installing a cluster on OCI" section. You uploaded the agent ISO image to Oracle's default Object Storage bucket, and you created a compute instance on OCI. For more information, see "Running a cluster on OCI". 
Procedure After you deploy the compute instance on a self-managed node in your OpenShift Container Platform cluster, you can monitor the cluster's status by choosing one of the following options: From the OpenShift Container Platform CLI, enter the following command: $ ./openshift-install agent wait-for install-complete --log-level debug Check the status of the rendezvous host node that runs the bootstrap node. After the host reboots, the host forms part of the cluster. Use the kubeconfig API to check the status of various OpenShift Container Platform components. For the KUBECONFIG environment variable, set the relative path of the cluster's kubeconfig configuration file: $ export KUBECONFIG=~/auth/kubeconfig Check the status of each of the cluster's self-managed nodes. CCM applies a label to each node to designate the node as running in a cluster on OCI. $ oc get nodes -A Output example NAME STATUS ROLES AGE VERSION main-0.private.agenttest.oraclevcn.com Ready control-plane, master 7m v1.27.4+6eeca63 main-1.private.agenttest.oraclevcn.com Ready control-plane, master 15m v1.27.4+d7fa83f main-2.private.agenttest.oraclevcn.com Ready control-plane, master 15m v1.27.4+d7fa83f Check the status of each of the cluster's Operators, with the CCM Operator status being a good indicator that your cluster is running. $ oc get co Truncated output example NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.15.0-0 True False False 6m18s baremetal 4.15.0-0 True False False 2m42s network 4.15.0-0 True True False 5m58s Progressing: ... ... Additional resources Gathering log data from a failed Agent-based installation | [
"./openshift-install version",
"./openshift-install 4.15.0 built from commit ae7977b7d1ca908674a0d45c5c243c766fa4b2ca release image registry.ci.openshift.org/origin/release:4.15ocp-release@sha256:0da6316466d60a3a4535d5fed3589feb0391989982fba59d47d4c729912d6363 release architecture amd64",
"install-config.yaml apiVersion: v1 baseDomain: <base_domain> 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 network type: OVNKubernetes machineNetwork: - cidr: <ip_address_from_cidr> 2 serviceNetwork: - 172.30.0.0/16 compute: - architecture: amd64 3 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 4 hyperthreading: Enabled name: master replicas: 3 platform: external: platformName: oci 5 cloudControllerManager: External sshKey: <public_ssh_key> 6 pullSecret: '<pull_secret>' 7",
"./openshift-install agent create image --log-level debug",
"apiVersion: v1alpha1 metadata: name: <cluster_name> 1 namespace: <cluster_namespace> 2 rendezvousIP: <ip_address_from_CIDR> 3 bootArtifactsBaseURL: <server_URL> 4",
"./openshift-install agent wait-for install-complete --log-level debug",
"export KUBECONFIG=~/auth/kubeconfig",
"oc get nodes -A",
"NAME STATUS ROLES AGE VERSION main-0.private.agenttest.oraclevcn.com Ready control-plane, master 7m v1.27.4+6eeca63 main-1.private.agenttest.oraclevcn.com Ready control-plane, master 15m v1.27.4+d7fa83f main-2.private.agenttest.oraclevcn.com Ready control-plane, master 15m v1.27.4+d7fa83f",
"oc get co",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.15.0-0 True False False 6m18s baremetal 4.15.0-0 True False False 2m42s network 4.15.0-0 True True False 5m58s Progressing: ... ..."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_oci/installing-oci-agent-based-installer |
15.6. Exposing GNOME Virtual File Systems to All Other Applications | 15.6. Exposing GNOME Virtual File Systems to All Other Applications In addition to applications built with the GIO library being able to access GVFS mounts, GVFS also provides a FUSE daemon which exposes active GVFS mounts. This means that any application can access active GVFS mounts using the standard POSIX APIs as though they were regular filesystems. Nevertheless, for some applications the additional library dependency and the specifics of the new VFS subsystem may be unsuitable or too complex. For such reasons and to boost compatibility, GVFS provides a FUSE ( Filesystem in Userspace ) daemon, which exposes active mounts through its mount point for standard POSIX (Portable Operating System Interface) access. This daemon transparently translates incoming requests to imitate a local file system for applications. Important Because of the differences in design, the translation is not 100% feature-compatible and you may experience difficulties with certain combinations of applications and GVFS back ends. The FUSE daemon starts automatically with the GVFS master daemon and places its mount in /run/user/ UID /gvfs , or in ~/.gvfs as a fallback. Manual browsing shows that there are individual directories for each GVFS mount. When you open documents from GVFS locations with non-native applications, a transformed path is passed as an argument. Note that native GIO applications automatically translate this path back to a native URI . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/exposing-gvfs
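A quick way to see this FUSE view in practice is to browse the per-user mount directory described above and hand one of its paths to a non-GIO application. This is a hedged sketch: the sftp mount shown is only an illustration, and the exact directory name depends on the back end and the connection details of the active mount.
# List the FUSE-exposed GVFS mounts for the current user
ls /run/user/$(id -u)/gvfs/
# Open a file on an active mount with a non-GIO application (the mount directory name is illustrative)
less "/run/user/$(id -u)/gvfs/sftp:host=server.example.com,user=jdoe/home/jdoe/notes.txt"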
Chapter 1. About | Chapter 1. About 1.1. About OpenShift Virtualization Learn about OpenShift Virtualization's capabilities and support scope. 1.1.1. What you can do with OpenShift Virtualization OpenShift Virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads. OpenShift Virtualization adds new objects into your OpenShift Container Platform cluster by using Kubernetes custom resources to enable virtualization tasks. These tasks include: Creating and managing Linux and Windows virtual machines (VMs) Running pod and VM workloads alongside each other in a cluster Connecting to virtual machines through a variety of consoles and CLI tools Importing and cloning existing virtual machines Managing network interface controllers and storage disks attached to virtual machines Live migrating virtual machines between nodes An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure. OpenShift Virtualization is designed and tested to work well with Red Hat OpenShift Data Foundation features. Important When you deploy OpenShift Virtualization with OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details. You can use OpenShift Virtualization with OVN-Kubernetes , OpenShift SDN , or one of the other certified network plugins listed in Certified OpenShift CNI Plug-ins . You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles . The Compliance Operator uses OpenSCAP, a NIST-certified tool , to scan and enforce security policies. 1.1.1.1. OpenShift Virtualization supported cluster version OpenShift Virtualization 4.16 is supported for use on OpenShift Container Platform 4.16 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform. 1.1.2. About volume and access modes for virtual machine disks If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: ReadWriteMany (RWX) access mode is required for live migration. The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage. For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes. Important You cannot live migrate virtual machines with the following configurations: Storage volume with ReadWriteOnce (RWO) access mode Passthrough features such as GPUs Set the evictionStrategy field to None for these virtual machines. The None strategy powers down VMs during node reboots. 1.1.3. Single-node OpenShift differences You can install OpenShift Virtualization on single-node OpenShift. 
However, you should be aware that Single-node OpenShift does not support the following features: High availability Pod disruption Live migration Virtual machines or templates that have an eviction strategy configured 1.1.4. Additional resources Glossary of common terms for OpenShift Container Platform storage About single-node OpenShift Assisted installer Pod disruption budgets About live migration Eviction strategies Tuning & Scaling Guide Supported limits for OpenShift Virtualization 4.x 1.2. Security policies Learn about OpenShift Virtualization security and authorization. Key points OpenShift Virtualization adheres to the restricted Kubernetes pod security standards profile, which aims to enforce the current best practices for pod security. Virtual machine (VM) workloads run as unprivileged pods. Security context constraints (SCCs) are defined for the kubevirt-controller service account. TLS certificates for OpenShift Virtualization components are renewed and rotated automatically. 1.2.1. About workload security By default, virtual machine (VM) workloads do not run with root privileges in OpenShift Virtualization, and there are no supported OpenShift Virtualization features that require root privileges. For each VM, a virt-launcher pod runs an instance of libvirt in session mode to manage the VM process. In session mode, the libvirt daemon runs as a non-root user account and only permits connections from clients that are running under the same user identifier (UID). Therefore, VMs run as unprivileged pods, adhering to the security principle of least privilege. 1.2.2. TLS certificates TLS certificates for OpenShift Virtualization components are renewed and rotated automatically. You are not required to refresh them manually. Automatic renewal schedules TLS certificates are automatically deleted and replaced according to the following schedule: KubeVirt certificates are renewed daily. Containerized Data Importer controller (CDI) certificates are renewed every 15 days. MAC pool certificates are renewed every year. Automatic TLS certificate rotation does not disrupt any operations. For example, the following operations continue to function without any disruption: Migrations Image uploads VNC and console connections 1.2.3. Authorization OpenShift Virtualization uses role-based access control (RBAC) to define permissions for human users and service accounts. The permissions defined for service accounts control the actions that OpenShift Virtualization components can perform. You can also use RBAC roles to manage user access to virtualization features. For example, an administrator can create an RBAC role that provides the permissions required to launch a virtual machine. The administrator can then restrict access by binding the role to specific users. 1.2.3.1. Default cluster roles for OpenShift Virtualization By using cluster role aggregation, OpenShift Virtualization extends the default OpenShift Container Platform cluster roles to include permissions for accessing virtualization objects. Table 1.1. OpenShift Virtualization cluster roles Default cluster role OpenShift Virtualization cluster role OpenShift Virtualization cluster role description view kubevirt.io:view A user that can view all OpenShift Virtualization resources in the cluster but cannot create, delete, modify, or access them. For example, the user can see that a virtual machine (VM) is running but cannot shut it down or gain access to its console. 
edit kubevirt.io:edit A user that can modify all OpenShift Virtualization resources in the cluster. For example, the user can create VMs, access VM consoles, and delete VMs. admin kubevirt.io:admin A user that has full permissions to all OpenShift Virtualization resources, including the ability to delete collections of resources. The user can also view and modify the OpenShift Virtualization runtime configuration, which is located in the HyperConverged custom resource in the openshift-cnv namespace. 1.2.3.2. RBAC roles for storage features in OpenShift Virtualization The following permissions are granted to the Containerized Data Importer (CDI), including the cdi-operator and cdi-controller service accounts. 1.2.3.2.1. Cluster-wide RBAC roles Table 1.2. Aggregated cluster roles for the cdi.kubevirt.io API group CDI cluster role Resources Verbs cdi.kubevirt.io:admin datavolumes , uploadtokenrequests * (all) datavolumes/source create cdi.kubevirt.io:edit datavolumes , uploadtokenrequests * datavolumes/source create cdi.kubevirt.io:view cdiconfigs , dataimportcrons , datasources , datavolumes , objecttransfers , storageprofiles , volumeimportsources , volumeuploadsources , volumeclonesources get , list , watch datavolumes/source create cdi.kubevirt.io:config-reader cdiconfigs , storageprofiles get , list , watch Table 1.3. Cluster-wide roles for the cdi-operator service account API group Resources Verbs rbac.authorization.k8s.io clusterrolebindings , clusterroles get , list , watch , create , update , delete security.openshift.io securitycontextconstraints get , list , watch , update , create apiextensions.k8s.io customresourcedefinitions , customresourcedefinitions/status get , list , watch , create , update , delete cdi.kubevirt.io * * upload.cdi.kubevirt.io * * admissionregistration.k8s.io validatingwebhookconfigurations , mutatingwebhookconfigurations create , list , watch admissionregistration.k8s.io validatingwebhookconfigurations Allow list: cdi-api-dataimportcron-validate, cdi-api-populator-validate, cdi-api-datavolume-validate, cdi-api-validate, objecttransfer-api-validate get , update , delete admissionregistration.k8s.io mutatingwebhookconfigurations Allow list: cdi-api-datavolume-mutate get , update , delete apiregistration.k8s.io apiservices get , list , watch , create , update , delete Table 1.4. Cluster-wide roles for the cdi-controller service account API group Resources Verbs "" (core) events create , patch "" (core) persistentvolumeclaims get , list , watch , create , update , delete , deletecollection , patch "" (core) persistentvolumes get , list , watch , update "" (core) persistentvolumeclaims/finalizers , pods/finalizers update "" (core) pods , services get , list , watch , create , delete "" (core) configmaps get , create storage.k8s.io storageclasses , csidrivers get , list , watch config.openshift.io proxies get , list , watch cdi.kubevirt.io * * snapshot.storage.k8s.io volumesnapshots , volumesnapshotclasses , volumesnapshotcontents get , list , watch , create , delete snapshot.storage.k8s.io volumesnapshots update , deletecollection apiextensions.k8s.io customresourcedefinitions get , list , watch scheduling.k8s.io priorityclasses get , list , watch image.openshift.io imagestreams get , list , watch "" (core) secrets create kubevirt.io virtualmachines/finalizers update 1.2.3.2.2. Namespaced RBAC roles Table 1.5. 
Namespaced roles for the cdi-operator service account API group Resources Verbs rbac.authorization.k8s.io rolebindings , roles get , list , watch , create , update , delete "" (core) serviceaccounts , configmaps , events , secrets , services get , list , watch , create , update , patch , delete apps deployments , deployments/finalizers get , list , watch , create , update , delete route.openshift.io routes , routes/custom-host get , list , watch , create , update config.openshift.io proxies get , list , watch monitoring.coreos.com servicemonitors , prometheusrules get , list , watch , create , delete , update , patch coordination.k8s.io leases get , create , update Table 1.6. Namespaced roles for the cdi-controller service account API group Resources Verbs "" (core) configmaps get , list , watch , create , update , delete "" (core) secrets get , list , watch batch cronjobs get , list , watch , create , update , delete batch jobs create , delete , list , watch coordination.k8s.io leases get , create , update networking.k8s.io ingresses get , list , watch route.openshift.io routes get , list , watch 1.2.3.3. Additional SCCs and permissions for the kubevirt-controller service account Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system. The virt-controller is a cluster controller that creates the virt-launcher pods for virtual machines in the cluster. These pods are granted permissions by the kubevirt-controller service account. The kubevirt-controller service account is granted additional SCCs and Linux capabilities so that it can create virt-launcher pods with the appropriate permissions. These extended permissions allow virtual machines to use OpenShift Virtualization features that are beyond the scope of typical pods. The kubevirt-controller service account is granted the following SCCs: scc.AllowHostDirVolumePlugin = true This allows virtual machines to use the hostpath volume plugin. scc.AllowPrivilegedContainer = false This ensures the virt-launcher pod is not run as a privileged container. scc.AllowedCapabilities = []corev1.Capability{"SYS_NICE", "NET_BIND_SERVICE"} SYS_NICE allows setting the CPU affinity. NET_BIND_SERVICE allows DHCP and Slirp operations. Viewing the SCC and RBAC definitions for the kubevirt-controller You can view the SecurityContextConstraints definition for the kubevirt-controller by using the oc tool: USD oc get scc kubevirt-controller -o yaml You can view the RBAC definition for the kubevirt-controller clusterrole by using the oc tool: USD oc get clusterrole kubevirt-controller -o yaml 1.2.4. Additional resources Managing security context constraints Using RBAC to define and apply permissions Creating a cluster role Cluster role binding commands Enabling user permissions to clone data volumes across namespaces 1.3. OpenShift Virtualization Architecture The Operator Lifecycle Manager (OLM) deploys operator pods for each component of OpenShift Virtualization: Compute: virt-operator Storage: cdi-operator Network: cluster-network-addons-operator Scaling: ssp-operator OLM also deploys the hyperconverged-cluster-operator pod, which is responsible for the deployment, configuration, and life cycle of other components, and several helper pods: hco-webhook , and hyperconverged-cluster-cli-download . 
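As a quick way to see these operator and helper pods once the Operator Lifecycle Manager has finished its work, you can list the pods in the namespace that hosts the OpenShift Virtualization control plane. This is a hedged sketch that assumes the default openshift-cnv namespace mentioned earlier; the exact pod names and counts vary by version and configuration.
# List the OpenShift Virtualization operator and helper pods
oc get pods -n openshift-cnv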
After all operator pods are successfully deployed, you should create the HyperConverged custom resource (CR). The configurations set in the HyperConverged CR serve as the single source of truth and the entrypoint for OpenShift Virtualization, and guide the behavior of the CRs. The HyperConverged CR creates corresponding CRs for the operators of all other components within its reconciliation loop. Each operator then creates resources such as daemon sets, config maps, and additional components for the OpenShift Virtualization control plane. For example, when the HyperConverged Operator (HCO) creates the KubeVirt CR, the OpenShift Virtualization Operator reconciles it and creates additional resources such as virt-controller , virt-handler , and virt-api . The OLM deploys the Hostpath Provisioner (HPP) Operator, but it is not functional until you create a hostpath-provisioner CR. Virtctl client commands 1.3.1. About the HyperConverged Operator (HCO) The HCO, hco-operator , provides a single entry point for deploying and managing OpenShift Virtualization and several helper operators with opinionated defaults. It also creates custom resources (CRs) for those operators. Table 1.7. HyperConverged Operator components Component Description deployment/hco-webhook Validates the HyperConverged custom resource contents. deployment/hyperconverged-cluster-cli-download Provides the virtctl tool binaries to the cluster so that you can download them directly from the cluster. KubeVirt/kubevirt-kubevirt-hyperconverged Contains all operators, CRs, and objects needed by OpenShift Virtualization. SSP/ssp-kubevirt-hyperconverged A Scheduling, Scale, and Performance (SSP) CR. This is automatically created by the HCO. CDI/cdi-kubevirt-hyperconverged A Containerized Data Importer (CDI) CR. This is automatically created by the HCO. NetworkAddonsConfig/cluster A CR that instructs and is managed by the cluster-network-addons-operator . 1.3.2. About the Containerized Data Importer (CDI) Operator The CDI Operator, cdi-operator , manages CDI and its related resources, which imports a virtual machine (VM) image into a persistent volume claim (PVC) by using a data volume. Table 1.8. CDI Operator components Component Description deployment/cdi-apiserver Manages the authorization to upload VM disks into PVCs by issuing secure upload tokens. deployment/cdi-uploadproxy Directs external disk upload traffic to the appropriate upload server pod so that it can be written to the correct PVC. Requires a valid upload token. pod/cdi-importer Helper pod that imports a virtual machine image into a PVC when creating a data volume. 1.3.3. About the Cluster Network Addons Operator The Cluster Network Addons Operator, cluster-network-addons-operator , deploys networking components on a cluster and manages the related resources for extended network functionality. Table 1.9. Cluster Network Addons Operator components Component Description deployment/kubemacpool-cert-manager Manages TLS certificates of Kubemacpool's webhooks. deployment/kubemacpool-mac-controller-manager Provides a MAC address pooling service for virtual machine (VM) network interface cards (NICs). daemonset/bridge-marker Marks network bridges available on nodes as node resources. daemonset/kube-cni-linux-bridge-plugin Installs Container Network Interface (CNI) plugins on cluster nodes, enabling the attachment of VMs to Linux bridges through network attachment definitions. 1.3.4. 
About the Hostpath Provisioner (HPP) Operator The HPP Operator, hostpath-provisioner-operator , deploys and manages the multi-node HPP and related resources. Table 1.10. HPP Operator components Component Description deployment/hpp-pool-hpp-csi-pvc-block-<worker_node_name> Provides a worker for each node where the HPP is designated to run. The pods mount the specified backing storage on the node. daemonset/hostpath-provisioner-csi Implements the Container Storage Interface (CSI) driver interface of the HPP. daemonset/hostpath-provisioner Implements the legacy driver interface of the HPP. 1.3.5. About the Scheduling, Scale, and Performance (SSP) Operator The SSP Operator, ssp-operator , deploys the common templates, the related default boot sources, the pipeline tasks, and the template validator. 1.3.6. About the OpenShift Virtualization Operator The OpenShift Virtualization Operator, virt-operator , deploys, upgrades, and manages OpenShift Virtualization without disrupting current virtual machine (VM) workloads. In addition, the OpenShift Virtualization Operator deploys the common instance types and common preferences. Table 1.11. virt-operator components Component Description deployment/virt-api HTTP API server that serves as the entry point for all virtualization-related flows. deployment/virt-controller Observes the creation of a new VM instance object and creates a corresponding pod. When the pod is scheduled on a node, virt-controller updates the VM with the node name. daemonset/virt-handler Monitors any changes to a VM and instructs virt-launcher to perform the required operations. This component is node-specific. pod/virt-launcher Contains the VM that was created by the user as implemented by libvirt and qemu . | [
"oc get scc kubevirt-controller -o yaml",
"oc get clusterrole kubevirt-controller -o yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/virtualization/about |
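Binding one of the default kubevirt.io cluster roles described in this chapter to a user follows the standard OpenShift RBAC tooling. The commands below are a hedged sketch: the user name and project name are placeholders, and granting kubevirt.io:edit in a single namespace is only one possible policy choice.
# Grant a user edit-level access to OpenShift Virtualization resources in one project
oc adm policy add-role-to-user kubevirt.io:edit <username> -n <project_name>
# Or grant read-only access to OpenShift Virtualization resources cluster-wide
oc adm policy add-cluster-role-to-user kubevirt.io:view <username>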
Chapter 1. New features | Chapter 1. New features This section highlights new features in Red Hat Developer Hub 1.4. 1.1. Added an individual mountPath This update adds an individual mountPath for extra ConfigMaps or Secrets . 1.2. PersistentVolumeClaims support is available With this update, PersistentVolumeClaims (PVC) support is available. 1.3. Enhanced use of kube-rbac-proxy This update removes the kube-rbac-proxy sidecar container from the RHDH Operator Pod. This sidecar container protected the operator metrics endpoint. However, the main container now provides this functionality out-of-the-box. Removing this sidecar container allows for reducing the resources required to run the Operator. 1.4. Identifying Backstage flavor for plugins by using the developerHub.flavor field With this update, you can use the developerHub.flavor field to identify whether plugins are running on RHDH, RHTAP, or vanilla Backstage, as shown in the following example: app-config.yaml fragment with the developerhub.flavor field developerHub: flavor: <flavor> flavor Identify the flavor of Backstage that is running. Default value: rhdh 1.5. Ability to manage Persistent Volume Claim (PVCs) in RHDH Operator You can now mount directories from pre-created PersistentVolumeClaims (PVCs) using the spec.application.extraFiles.pvcs field, while configuring RHDH Operator. For more information, see Configuring Red Hat Developer Hub deployment when using the Operator . 1.6. Authenticating with Red Hat Build of Keycloak With this update, you can use Red Hat Build of Keycloak as an authentication provider. The Keycloak plugin will now support ingesting users and groups with Red Hat Build of Keycloak. For more details, see Authenticating with Red Hat Build of Keycloak . 1.7. Ability to install third-party plugins in RHDH You can now install third-party plugins in Red Hat Developer Hub without rebuilding the RHDH application. For more information, see Installing third-party plugins in Red Hat Developer Hub . 1.8. The catalog backend module logs plugin is enabled With this update, the backstage-plugin-catalog-backend-module-logs is enabled and converted to a static plugin improving performance and stability. The dynamic plugin was disabled in version 1.3 . 1.9. Google Kubernetes Engine now supported Google Kubernetes Engine (GKE) is out of Developer Preview and is now fully supported as of RHDH 1.4. See the full list of supported platforms on the Life Cycle page . 1.10. Manage concurrent writing when installing dynamic plugins Previously, running multi-replica RHDH with a Persistent Volume for the Dynamic Plugins cache was not possible due to potential write conflicts. This update mitigates that risk. | [
"developerHub: flavor: <flavor>"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/release_notes/new-features |
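To illustrate the PersistentVolumeClaim support noted above, the fragment below sketches where the spec.application.extraFiles.pvcs field sits in the Operator custom resource. Treat it as a hedged sketch: the PVC name is a placeholder, the exact list-item schema is an assumption not taken from this document, and the linked Operator configuration documentation remains the authoritative reference.
# Fragment of an RHDH Operator custom resource (field layout below spec.application.extraFiles.pvcs is assumed)
spec:
  application:
    extraFiles:
      pvcs:
        - name: <pre_created_pvc_name>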
Chapter 49. Additional resources | Chapter 49. Additional resources Getting started with case management Using the Showcase application for case management | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/additional_resources_2 |
Chapter 6. Optimizing LVM-VDO performance | Chapter 6. Optimizing LVM-VDO performance The VDO kernel driver speeds up tasks by using multiple threads. Instead of one thread doing everything for an I/O request, it splits the work into smaller parts assigned to different threads. These threads talk to each other as they handle the request. This way, one thread can handle shared data without constant locking and unlocking. When one thread finishes a task, VDO already has another task ready for it. This keeps the threads busy and reduces the time spent switching tasks. VDO also uses separate threads for slower tasks, such as adding I/O operations to the queue or handling messages to the deduplication index. 6.1. VDO thread types VDO uses various thread types to handle specific operations: Logical zone threads ( kvdo:logQ ) Maintain the mapping between the logical block numbers (LBNs) presented to the user of the VDO device and the physical block numbers (PBNs) in the underlying storage system. They also prevent concurrent writes to the same block. Logical threads are active during both read and write operations. Processing is generally evenly distributed, however, specific access patterns may occasionally concentrate work in one thread. For example, frequent access to LBNs in a specific block map page might make one logical thread handle all those operations. Physical zone threads ( kvdo:physQ ) Handle data block allocation and reference counts during write operations. I/O submission threads ( kvdo:bioQ ) Handle the transfer of block I/O ( bio ) operations from VDO to the storage system. They handle I/O requests from other VDO threads and pass them to the underlying device driver. These threads interact with device-related data structures, create requests for device driver kernel threads, and prevent delays when I/O requests get blocked due to a full device request queue. CPU-processing threads ( kvdo:cpuQ ) Handle CPU-intensive tasks that do not block or need exclusive access to data structures managed by other thread types. These tasks include calculating hash values and compressing data blocks. I/O acknowledgement threads ( kvdo:ackQ ) Signal the completion of I/O requests to higher-level components, such as the kernel page cache or application threads performing direct I/O. Their CPU usage and impact on memory contention are influenced by kernel-level code. Hash zone threads ( kvdo:hashQ) Coordinate I/O requests with matching hashes to handle potential deduplication tasks. Although they create and manage deduplication requests, they do not perform significant computations. A single hash zone thread is usually sufficient. Deduplication thread ( kvdo:dedupeQ ) Handles I/O requests and communicates with the deduplication index. This work is performed on a separate thread to prevent blocking. It also has a timeout mechanism to skip deduplication if the index does not respond quickly. There is only one deduplication thread per VDO device. Journal thread ( kvdo:journalQ ) Updates the recovery journal and schedules journal blocks for writing. This task cannot be divided among multiple threads. There is only one journal thread per VDO device. Packer thread ( kvdo:packerQ ) Works during write operations when the compression is enabled. It collects compressed data blocks from the CPU threads to reduce wasted space. There is only one packer thread per VDO device. 6.2. Identifying performance bottlenecks Identifying bottlenecks in VDO performance is crucial for optimizing system efficiency. 
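Before measuring utilization, it can help to confirm which of these thread types exist on a given system. The following is a hedged sketch: thread names such as kvdo0:bioQ0 vary with the VDO volume and its configured thread counts, and the psr column only reports the processor each kernel thread last ran on.
# List the VDO kernel threads together with the CPU they last ran on
ps -eo pid,psr,comm | grep kvdo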
One of the primary steps you can take is to determine whether the bottleneck lies in the CPU, memory, or the speed of the backing storage. After pinpointing the slowest component, you can develop strategies for enhancing performance. To ensure that the root cause of the low performance is not a hardware issue, run tests with and without VDO in the storage stack. The journalQ thread in VDO is a natural bottleneck, especially when the VDO volume is handling write operations. If you notice that another thread type has higher utilization than the journalQ thread, you can remediate this by adding more threads of that type. 6.2.1. Analyzing VDO performance with top You can examine the performance of VDO threads by using the top utility. Note Tools such as top cannot differentiate between productive CPU cycles and cycles stalled due to cache or memory delays. These tools interpret cache contention and slow memory access as actual work. Moving threads between nodes can appear like reduced CPU utilization while increasing operations per second. Procedure Display the individual threads: Press the f key to display the fields manager. Use the (v) key to navigate to the P = Last Used Cpu (SMP) field. Press the spacebar to select the P = Last Used Cpu (SMP) field. Press the q key to close the fields manager. The top utility now displays the CPU load for individual cores and indicates which CPU each process or thread recently used. You can switch to per-CPU statistics by pressing 1 . Additional resources top(1) man page on your system Interpretation of the top results 6.2.2. Interpretation of the top results While analyzing the performance of VDO threads, use the following table to interpret results of the top utility. Table 6.1. Interpreting top results Values Description Suggestions Thread or CPU usage surpasses 70%. The thread or CPU is overloaded. High usage can result from a VDO thread scheduled on a CPU with no actual work. This may happen due to excessive hardware interrupts, memory conflicts, or resource competition. Increase the number of threads of the type running this core. Low %id and %wa values The core is actively handling tasks. No action required. Low %hi value The core is performing standard processing work. Add more cores to improve the performance. Avoid NUMA conflicts. High %hi value [a] Only one thread is assigned to the core %id is zero %wa values is zero The core is over-committed. Reassign kernel threads and device interrupt handling to different cores. kvdo:bioQ threads frequently in D state. VDO is consistently keeping the storage system busy with I/O requests. [b] Reduce the number of I/O submission threads if the CPU utilization is very low. kvdo:bioQ threads frequently in S state. VDO has more kvdo:bioQ threads than it needs. Reduce the number of kvdo:bioQ threads. High CPU utilization per I/O request. CPU utilization per I/O request increases with more threads. Check for CPU, memory, or lock contention. [a] More than a few percent [b] This is good if the storage system can handle multiple requests or if request processing is efficient. 6.2.3. Analyzing VDO performance with perf You can check the CPU performance of VDO by using the perf utility. Prerequisites The perf package is installed. Procedure Display the performance profile: Analyze the CPU performance by interpreting perf results: Table 6.2. 
Interpreting perf results Values Description Suggestions kvdo:bioQ threads spend excessive cycles acquiring spin locks Too much contention might be occurring in the device driver below VDO Reduce the number of kvdo:bioQ threads High CPU usage Contention between NUMA nodes. Check counters such as stalled-cycles-backend , cache-misses , and node-load-misses if they are supported by your processor. High miss rates might cause stalls, resembling high CPU usage in other tools, indicating possible contention. Implement CPU affinity for the VDO kernel threads or IRQ affinity for interrupt handlers to restrict processing work to a single node. Additional resources perf-top(1) man page on your system 6.2.4. Analyzing VDO performance with sar You can create periodic reports on VDO performance by using the sar utility. Note Not all block device drivers can provide the data needed by the sar utility. For example, devices such as MD RAID do not report the %util value. Prerequisites Install the sysstat utility: Procedure Displays the disk I/O statistics at 1-second intervals: Analyze the VDO performance by interpreting sar results: Table 6.3. Interpreting sar results Values Description Suggestions The %util value for the underlying storage device is well under 100%. VDO is busy at 100%. bioQ threads are using a lot of CPU time. VDO has too few bioQ threads for a fast device. Add more bioQ threads. Note that certain storage drivers might slow down when you add bioQ threads due to spin lock contention. Additional resources sar(1) man page on your system 6.3. Redistributing VDO threads VDO uses various thread pools for different tasks when handling requests. Optimal performance depends on setting the right number of threads in each pool, which varies based on available storage, CPU resources, and the type of workload. You can spread out VDO work across multiple threads to improve VDO performance. VDO aims to maximize performance through parallelism. You can improve it by allocating more threads to a bottlenecked task, depending on factors such as available CPU resources and the root cause of the bottleneck. High thread utilization (above 70-80%) can lead to delays. Therefore, increasing thread count can help in such cases. However, excessive threads might hinder performance and incur extra costs. For optimal performance, carry out these actions: Test VDO with various expected workloads to evaluate and optimize its performance. Increase thread count for pools with more than 50% utilization. Increase the number of cores available to VDO if the overall utilization is greater than 50%, even if the individual thread utilization is lower. 6.3.1. Grouping VDO threads across NUMA nodes Accessing memory across NUMA nodes is slower than local memory access. On Intel processors where cores share the last-level cache within a node, cache problems are more significant when data is shared between nodes than when it is shared within a single node. While many VDO kernel threads manage exclusive data structures, they often exchange messages about I/O requests. VDO threads being spread across multiple nodes or the scheduler reassigning threads between nodes might cause contention, that is multiple nodes competing for the same resources. You can enhance VDO performance by grouping certain threads on the same NUMA nodes. 
Group related threads together on one NUMA node I/O acknowledgment ( ackQ ) threads Higher-level I/O submission threads: User-mode threads handling direct I/O Kernel page cache flush thread Optimize device access If device access timing varies across NUMA nodes, run bioQ threads on the node closest to the storage device controllers Minimize contention Run I/O submissions and storage device interrupt processing on the same node as logQ or physQ threads. Run other VDO-related work on the same node. If one node cannot handle all VDO work, consider memory contention when moving threads to other nodes. For example, move the device interrupt handling and bioQ threads to another node. 6.3.2. Configuring the CPU affinity You can improve VDO performance on certain storage device drivers if you adjust the CPU affinity of VDO threads. When the interrupt (IRQ) handler of the storage device driver does substantial work and the driver does not use a threaded IRQ handler, it could limit the ability of the system scheduler to optimize VDO performance. For optimal performance, carry out these actions: Dedicate specific cores to IRQ handling and adjust VDO thread affinity if the core is overloaded. The core is overloaded if the %hi value is more than a few percent higher than on other cores. Avoid running singleton VDO threads, like the kvdo:journalQ thread, on busy IRQ cores. Keep other thread types off cores busy with IRQs only if the individual CPU use is high. Note The configuration does not persist across system reboots. Procedure Set the CPU affinity: Replace <cpu-numbers> with a comma-separated list of CPU numbers to which you want to assign the process. Replace <process-id> with the ID of the running process to which you want to set CPU affinity. Example 6.1. Setting CPU Affinity for kvdo processes on CPU cores 1 and 2 Verification Display the affinity set: Replace <process-id> with the ID of the running process whose CPU affinity you want to display. Additional resources taskset(1) man page on your system 6.4. Increasing block map cache size to enhance performance You can enhance read and write performance by increasing the cache size for your LVM-VDO volume. If you have extended read and write latencies or a significant volume of data read from storage that does not align with application requirements, you might need to adjust the cache size. Warning When you increase a block map cache, the cache uses the amount of memory that you specified, plus an additional 15% of memory. Larger cache sizes use more RAM and affect overall system stability. The following example shows how to change the cache size from 128 MB to 640 MB in your system. Procedure Check the current cache size of your LVM-VDO volume: Deactivate the LVM-VDO volume: Change the LVM-VDO setting: Replace 640 with your new cache size in megabytes. Note The cache size must be a multiple of 4096, within the range of 128 MB to 16 TB, and at least 16 MB per logical thread. Changes take effect the next time the LVM-VDO device is started. Already running devices are not affected. Activate the LVM-VDO volume: Verification Check the current LVM-VDO volume configuration: Additional resources lvchange(8) man page on your system 6.5. Speeding up discard operations VDO sets a maximum allowed size of DISCARD (TRIM) sectors for all LVM-VDO devices on the system. The default size is 8 sectors, which corresponds to one 4-KiB block.
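Before changing anything, you can read the currently configured maximum from the same sysfs attribute that the following procedure writes to. This is a hedged sketch; the path assumes that the kvdo module is loaded.
# Show the current maximum DISCARD size in sectors (the default is 8)
cat /sys/kvdo/max_discard_sectors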
Increasing the DISCARD size can significantly improve the speed of the discard operations. However, there is a tradeoff between improving discard performance and maintaining the speed of other write operations. The optimal DISCARD size varies depending on the storage stack. Both very large and very small DISCARD sectors can potentially degrade the performance. Conduct experiments with different values to discover one that delivers satisfactory results. For an LVM-VDO volume that stores a local file system, it is optimal to use a DISCARD size of 8 sectors, which is the default setting. For an LVM-VDO volume that serves as a SCSI target, a moderately large DISCARD size, such as 2048 sectors (corresponds to a 1 MB discard), works best. It is recommended that the maximum DISCARD size does not exceed 10240 sectors, which translates to a 5 MB discard. When choosing the size, make sure it is a multiple of 8, because VDO may not handle discards effectively if they are smaller than 8 sectors. Procedure Set the new maximum size for the DISCARD sector: Replace <number-of-sectors> with the number of sectors. This setting persists until reboot. Optional: To make the change to the maximum DISCARD sector persistent across reboots, create a custom systemd service: Create a new /etc/systemd/system/max_discard_sectors.service file with the following content: Replace <number-of-sectors> with the number of sectors. Save the file and exit. Reload the service file: Enable the new service: Verification Optional: If you made the maximum DISCARD sector change persistent, check if the max_discard_sectors.service is enabled: 6.6. Optimizing CPU frequency scaling By default, RHEL uses CPU frequency scaling to save power and reduce heat when the CPU is not under heavy load. To prioritize performance over power savings, you can configure the CPU to operate at its maximum clock speed. This ensures that the CPU can handle data deduplication and compression processes with maximum efficiency. By running the CPU at its highest frequency, resource-intensive operations can be executed more quickly, potentially improving the overall performance of LVM-VDO in terms of data reduction and storage optimization. Warning Tuning CPU frequency scaling for higher performance can increase power consumption and heat generation. In inadequately cooled systems, this can cause overheating and might result in thermal throttling, which limits the performance gains. Procedure Display available CPU governors: Change the scaling governor to prioritize performance: This setting persists until reboot. Optional: To make the change in scaling governor persistent across reboots, create a custom systemd service: Create a new /etc/systemd/system/cpufreq.service file with the following content: Save the file and exit. Reload the service file: Enable the new service: Verification Display the currently used CPU frequency policy: Optional: If you made the scaling governor change persistent, check if the cpufreq.service is enabled: | [
"top -H",
"perf top",
"yum install sysstat",
"sar -d 1",
"taskset -c <cpu-numbers> -p <process-id>",
"for pid in `ps -eo pid,comm | grep kvdo | awk '{ print USD1 }'` do taskset -c \"1,2\" -p USDpid done",
"taskset -p <cpu-numbers> -p <process-id>",
"lvs -o vdo_block_map_cache_size VDOBlockMapCacheSize 128.00m 128.00m",
"lvchange -an vg_name/vdo_volume",
"lvchange --vdosettings \"block_map_cache_size_mb=640\" vg_name/vdo_volume",
"lvchange -ay vg_name/vdo_volume",
"lvs -o vdo_block_map_cache_size vg_name/vdo_volume VDOBlockMapCacheSize 640.00m",
"echo <number-of-sectors> > /sys/kvdo/max_discard_sectors",
"[Unit] Description=Set maximum DISCARD sector [Service] ExecStart=/usr/bin/echo <number-of-sectors> > /sys/kvdo/max_discard_sectors [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl enable max_discard_sectors.service",
"systemctl is-enabled max_discard_sectors.service",
"cpupower frequency-info -g",
"cpupower frequency-set -g performance",
"[Unit] Description=Set CPU scaling governor to performance [Service] ExecStart=/usr/bin/cpupower frequency-set -g performance [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl enable cpufreq.service",
"cpupower frequency-info -p",
"systemctl is-enabled cpufreq.service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deduplicating_and_compressing_logical_volumes_on_rhel/optimizing-vdo-performance_deduplicating-and-compressing-logical-volumes-on-rhel |
Chapter 5. Uninstalling OpenShift Data Foundation | Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_on_any_platform/uninstalling_openshift_data_foundation |
2.2. Creating CacheManagers | 2.2. Creating CacheManagers 2.2.1. Create a New RemoteCacheManager Procedure 2.1. Configure a New RemoteCacheManager Use the ConfigurationBuilder() constructor to create a new configuration builder. The .addServer() method adds a remote server, configured via the .host(<hostname|ip>) and .port(<port>) properties. Create a new RemoteCacheManager using the supplied configuration. Retrieve the default cache from the remote server. 2.2.2. Create a New Embedded Cache Manager Use the following instructions to create a new EmbeddedCacheManager without using CDI: Procedure 2.2. Create a New Embedded Cache Manager Create a configuration XML file. For example, create the my-config-file.xml file on the classpath (in the resources/ folder) and add the configuration information in this file. Use the following programmatic configuration to create a cache manager using the configuration file: The outlined procedure creates a new EmbeddedCacheManager using the configuration specified in my-config-file.xml . 2.2.3. Create a New Embedded Cache Manager Using CDI Use the following steps to create a new EmbeddedCacheManager instance using CDI: Procedure 2.3. Use CDI to Create a New EmbeddedCacheManager Specify a default configuration: Inject the default cache manager. | [
"import org.infinispan.client.hotrod.RemoteCache; import org.infinispan.client.hotrod.RemoteCacheManager; import org.infinispan.client.hotrod.configuration.Configuration; import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; Configuration conf = new ConfigurationBuilder().addServer().host(\"localhost\").port(11222).build(); RemoteCacheManager manager = new RemoteCacheManager(conf); RemoteCache defaultCache = manager.getCache();",
"EmbeddedCacheManager manager = new DefaultCacheManager(\"my-config-file.xml\"); Cache defaultCache = manager.getCache();",
"public class Config @Produces public EmbeddedCacheManager defaultCacheManager() { ConfigurationBuilder builder = new ConfigurationBuilder(); Configuration configuration = builder.eviction().strategy(EvictionStrategy.LRU).maxEntries(100).build(); return new DefaultCacheManager(configuration); } }",
"<!-- Additional configuration information here --> @Inject EmbeddedCacheManager cacheManager; <!-- Additional configuration information here -->"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-creating_cachemanagers |
Chapter 18. Managing technology preview features | Chapter 18. Managing technology preview features You can enable or disable features that are Technology Preview by using feature flags. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 18.1. Managing feature flags 18.1.1. Prerequisites You have access to the environment where the RHACS component is deployed. You have permission to modify environment variables. You understand that the Technology Preview features might be incomplete and are provided with limited support. You know if the flag for the Technology Preview feature needs to be configured before the deployment. Check the installation manifests to see if they use the required flag. Procedure Identify the environment variable name associated with the feature flag. Consult the release notes or the /v1/featureflags API endpoint to identify the flag for the feature you want to enable or disable. Modify the feature flag by completing one of the following actions: To enable a feature, configure the environment variable associated with the flag by setting its value to true . Configure this directly on the Kubernetes deployment or during installation by using the Helm chart or the Operator custom resource (CR). To disable a feature, set the environment variable associated with the flag to false . After the application is restarted or redeployed, verify that the feature has been enabled or disabled by completing the following steps: Check the output of the /v1/featureflags API endpoint. Check the application functionality related to the feature. Review logs or monitoring tools for any errors or confirmation messages. 18.1.2. Best practices Follow these best practices for using the feature flag: Always test feature changes in a staging environment before applying them to production. Keep a record of all feature flags and their current status. Be prepared to revert the changes if the feature causes issues. 18.1.3. Troubleshooting Follow these troubleshooting guidelines: If the feature does not appear, ensure that the environment variable is correctly named and set. Check application logs for any errors related to feature flag parsing. If enabling a feature causes application errors, disable the feature and contact Red Hat Support. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/configuring/managing-preview-features |
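As an illustration of the enable step in the procedure above, the environment variable can be set directly on the relevant deployment with the OpenShift CLI. This is a hedged sketch: the stackrox namespace, the central deployment, and the ROX_EXAMPLE_FEATURE variable name are placeholders and assumptions rather than values from this document; use the flag name reported by the /v1/featureflags endpoint for your release.
# Enable a Technology Preview feature flag on the Central deployment (names are placeholders)
oc -n stackrox set env deployment/central ROX_EXAMPLE_FEATURE=true
# Confirm the flag state through the API after the deployment restarts
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" https://<central_address>/v1/featureflags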
Chapter 13. Scoping tokens | Chapter 13. Scoping tokens 13.1. About scoping tokens You can create scoped tokens to delegate some of your permissions to another user or service account. For example, a project administrator might want to delegate the power to create pods. A scoped token is a token that identifies as a given user but is limited to certain actions by its scope. Only a user with the cluster-admin role can create scoped tokens. Scopes are evaluated by converting the set of scopes for a token into a set of PolicyRules . Then, the request is matched against those rules. The request attributes must match at least one of the scope rules to be passed to the "normal" authorizer for further authorization checks. 13.1.1. User scopes User scopes are focused on getting information about a given user. They are intent-based, so the rules are automatically created for you: user:full - Allows full read/write access to the API with all of the user's permissions. user:info - Allows read-only access to information about the user, such as name and groups. user:check-access - Allows access to self-localsubjectaccessreviews and self-subjectaccessreviews . These are the variables where you pass an empty user and groups in your request object. user:list-projects - Allows read-only access to list the projects the user has access to. 13.1.2. Role scope The role scope allows you to have the same level of access as a given role filtered by namespace. role:<cluster-role name>:<namespace or * for all> - Limits the scope to the rules specified by the cluster-role, but only in the specified namespace . Note Caveat: This prevents escalating access. Even if the role allows access to resources like secrets, rolebindings, and roles, this scope will deny access to those resources. This helps prevent unexpected escalations. Many people do not think of a role like edit as being an escalating role, but with access to a secret it is. role:<cluster-role name>:<namespace or * for all>:! - This is similar to the example above, except that including the bang causes this scope to allow escalating access. 13.2. Adding unauthenticated groups to cluster roles As a cluster administrator, you can add unauthenticated users to the following cluster roles in OpenShift Container Platform by creating a cluster role binding. Unauthenticated users do not have access to non-public cluster roles. This should only be done in specific use cases when necessary. You can add unauthenticated users to the following cluster roles: system:scope-impersonation system:webhook system:oauth-token-deleter self-access-reviewer Important Always verify compliance with your organization's security standards when modifying unauthenticated access. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated Apply the configuration by running the following command: USD oc apply -f add-<cluster_role>-unauth.yaml | [
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated",
"oc apply -f add-<cluster_role>.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authentication_and_authorization/tokens-scoping |
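One hedged way to confirm the binding created by the procedure above is to read it back and check that the system:unauthenticated group appears among its subjects; the binding name below mirrors the metadata.name used in the example YAML, with the cluster role you chose substituted in.
# Inspect the cluster role binding for the unauthenticated group
oc describe clusterrolebinding <cluster_role>access-unauthenticated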
23.2. Userspace Access | 23.2. Userspace Access Always take care to use properly aligned and sized I/O. This is especially important for Direct I/O access. Direct I/O should be aligned on a logical_block_size boundary, and in multiples of the logical_block_size . With native 4K devices (i.e. logical_block_size is 4K) it is now critical that applications perform direct I/O in multiples of the device's logical_block_size . This means that applications will fail with native 4k devices that perform 512-byte aligned I/O rather than 4k-aligned I/O. To avoid this, an application should consult the I/O parameters of a device to ensure it is using the proper I/O alignment and size. As mentioned earlier, I/O parameters are exposed through the both sysfs and block device ioctl interfaces. For more details, refer to man libblkid . This man page is provided by the libblkid-devel package. sysfs Interface /sys/block/ disk /alignment_offset /sys/block/ disk / partition /alignment_offset /sys/block/ disk /queue/physical_block_size /sys/block/ disk /queue/logical_block_size /sys/block/ disk /queue/minimum_io_size /sys/block/ disk /queue/optimal_io_size The kernel will still export these sysfs attributes for "legacy" devices that do not provide I/O parameters information, for example: Example 23.1. sysfs interface Block Device ioctls BLKALIGNOFF : alignment_offset BLKPBSZGET : physical_block_size BLKSSZGET : logical_block_size BLKIOMIN : minimum_io_size BLKIOOPT : optimal_io_size | [
"alignment_offset: 0 physical_block_size: 512 logical_block_size: 512 minimum_io_size: 512 optimal_io_size: 0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/iolimuserspace |
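To complement the sysfs attribute list above, the values can simply be read with cat. This is a hedged sketch that assumes a disk named sda; substitute your own device, and note that legacy devices that do not provide I/O parameter information report values like those in Example 23.1.
# Read the I/O parameters that applications should honor for aligned direct I/O
cat /sys/block/sda/alignment_offset
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size
cat /sys/block/sda/queue/minimum_io_size
cat /sys/block/sda/queue/optimal_io_size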
Installing on a single node | Installing on a single node OpenShift Container Platform 4.16 Installing OpenShift Container Platform on a single node Red Hat OpenShift Documentation Team | [
"example.com",
"<cluster_name>.example.com",
"export OCP_VERSION=<ocp_version> 1",
"export ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"export ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)",
"curl -L USDISO_URL -o rhcos-live.iso",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install --dir=ocp create single-node-ignition-config",
"alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data -w /data quay.io/coreos/coreos-installer:release'",
"coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso",
"./openshift-install --dir=ocp wait-for install-complete",
"export KUBECONFIG=ocp/auth/kubeconfig",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.29.4",
"dd if=<path_to_iso> of=<path_to_usb> status=progress",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia",
"curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"variant: openshift version: 4.16.0 metadata: name: sshd labels: machineconfiguration.openshift.io/role: worker passwd: users: - name: core 1 ssh_authorized_keys: - '<ssh_key>'",
"butane -pr embedded.yaml -o embedded.ign",
"coreos-installer iso ignition embed -i embedded.ign rhcos-4.16.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.16.0-x86_64-live.x86_64.iso",
"coreos-installer iso ignition show rhcos-sshd-4.16.0-x86_64-live.x86_64.iso",
"{ \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD [email protected]\" ] } ] } }",
"OCP_VERSION=<ocp_version> 1",
"ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install --dir=ocp create single-node-ignition-config",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal ignition.config.url=http://<http_server>:8080/ignition/bootstrap-in-place-for-live-iso.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 ip=<ip>::<gateway>:<mask>:<hostname>::none nameserver=<dns> \\ 3 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.dasd=0.0.4411 \\ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \\ 5 zfcp.allow_lun_scan=0",
"cp ipl c",
"cp i <devno> clear loadparm prompt",
"cp vi vmsg 0 <kernel_parameters>",
"cp set loaddev portname <wwpn> lun <lun>",
"cp set loaddev bootprog <n>",
"cp set loaddev scpdata {APPEND|NEW} '<kernel_parameters>'",
"cp set loaddev scpdata 'rd.zfcp=0.0.8001,0x500507630a0350a4,0x4000409D00000000 ip=encbdd0:dhcp::02:00:00:02:34:02 rd.neednet=1'",
"cp i <devno>",
"OCP_VERSION=<ocp_version> 1",
"ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install --dir=ocp create single-node-ignition-config",
"virt-install --name <vm_name> --autostart --memory=<memory_mb> --cpu host --vcpus <vcpus> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ 1 --disk size=100 --network network=<virt_network_parm> --graphics none --noautoconsole --extra-args \"rd.neednet=1 ignition.platform.id=metal ignition.firstboot\" --extra-args \"ignition.config.url=http://<http_server>/bootstrap.ign\" \\ 2 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 3 --extra-args \"ip=<ip>::<gateway>:<mask>:<hostname>::none\" \\ 4 --extra-args \"nameserver=<dns>\" --extra-args \"console=ttysclp0\" --wait",
"OCP_VERSION=<ocp_version> 1",
"ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxvf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} pullSecret: '<pull_secret>' 7 sshKey: | <ssh_key> 8",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install create manifests --dir <installation_directory> 1",
"spec: mastersSchedulable: true status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \\ 4 rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 rd.dasd=0.0.4411 \\ 5 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \\ 6 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> coreos.inst.ignition_url=http://<http_server>/master.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 rd.dasd=0.0.4411 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 zfcp.allow_lun_scan=0",
"grub2-mknetdir --net-directory=/var/lib/tftpboot",
"default=0 fallback=1 timeout=1 if [ USD{net_default_mac} == fa:b0:45:27:43:20 ]; then menuentry \"CoreOS (BIOS)\" { echo \"Loading kernel\" linux \"/rhcos/kernel\" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://192.168.10.5:8000/install/rootfs.img ignition.config.url=http://192.168.10.5:8000/ignition/sno.ign echo \"Loading initrd\" initrd \"/rhcos/initramfs.img\" } fi",
"export RHCOS_URL=https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/4.12/latest/",
"cd /var/lib/tftpboot/rhcos",
"wget USD{RHCOS_URL}/rhcos-live-kernel-ppc64le -o kernel",
"wget USD{RHCOS_URL}/rhcos-live-initramfs.ppc64le.img -o initramfs.img",
"cd /var//var/www/html/install/",
"wget USD{RHCOS_URL}/rhcos-live-rootfs.ppc64le.img -o rootfs.img",
"mkdir -p ~/sno-work",
"cd ~/sno-work",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.12.0/openshift-install-linux-4.12.0.tar.gz",
"tar xzvf openshift-install-linux-4.12.0.tar.gz",
"./openshift-install --dir=~/sno-work create create single-node-ignition-config",
"cp ~/sno-work/single-node-ignition-config.ign /var/www/html/ignition/sno.ign",
"restorecon -vR /var/www/html || true",
"lpar_netboot -i -D -f -t ent -m <sno_mac> -s auto -d auto -S <server_ip> -C <sno_ip> -G <gateway> <lpar_name> default_profile <cec_name>",
"./openshift-install wait-for bootstrap-complete",
"./openshift-install wait-for install-complete"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/installing_on_a_single_node/index |
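The x86_64 live-ISO commands in this procedure can be collected into a single script. The following sketch only re-sequences commands that already appear above; the release version is a placeholder to substitute, and install-config.yaml is expected to exist in the working directory before the script runs.

```bash
#!/usr/bin/env bash
# Sketch: prepare and monitor a single-node OpenShift install from the live ISO.
set -euo pipefail

export OCP_VERSION=<ocp_version>   # placeholder: the release you are installing
export ARCH=x86_64

# Download the installer for the chosen release.
curl -k "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${OCP_VERSION}/openshift-install-linux.tar.gz" -o openshift-install-linux.tar.gz
tar zxvf openshift-install-linux.tar.gz
chmod +x openshift-install

# Download the matching RHCOS live ISO.
ISO_URL=$(./openshift-install coreos print-stream-json | grep location | grep "${ARCH}" | grep iso | cut -d\" -f4)
curl -L "${ISO_URL}" -o rhcos-live.iso

# Generate the bootstrap-in-place Ignition config from install-config.yaml.
mkdir -p ocp
cp install-config.yaml ocp
./openshift-install --dir=ocp create single-node-ignition-config

# Embed the Ignition config into the live ISO. A shell function replaces the
# interactive alias used earlier, because aliases are not expanded in scripts.
coreos-installer() {
  podman run --privileged --pull always --rm \
    -v /dev:/dev -v /run/udev:/run/udev -v "$PWD":/data -w /data \
    quay.io/coreos/coreos-installer:release "$@"
}
coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso

# Boot the target host from rhcos-live.iso, then wait for the install to finish.
./openshift-install --dir=ocp wait-for install-complete
```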
Chapter 6. View OpenShift Data Foundation Topology | Chapter 6. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage → Data Foundation → Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status, health, and alert indications. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the main view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of any problems and offers the granularity needed for better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_ibm_z/viewing-odf-topology_mcg-verify
Chapter 46. Getting Started with the Framework | Chapter 46. Getting Started with the Framework Abstract This chapter explains the basic principles of implementing a Camel component using the API component framework, based on code generated using the camel-archetype-api-component Maven archetype. 46.1. Generate Code with the Maven Archetype Maven archetypes A Maven archetype is analogous to a code wizard: given a few simple parameters, it generates a complete, working Maven project, populated with sample code. You can then use this project as a template, customizing the implementation to create your own application. The API component Maven archetype The API component framework provides a Maven archetype, camel-archetype-api-component , that can generate starting point code for your own API component implementation. This is the recommended approach to start creating your own API component. Prerequisites The only prerequisites for running the camel-archetype-api-component archetype are that Apache Maven is installed and the Maven settings.xml file is configured to use the standard Fuse repositories. Invoke the Maven archetype To create an Example component, which uses the example URI scheme, invoke the camel-archetype-api-component archetype to generate a new Maven project, as follows: Note The backslash character, \ , at the end of each line represents line continuation, which works only on Linux and UNIX platforms. On Windows platforms, remove the backslash and put the arguments all on a single line. Options Options are provided to the archetype generation command using the syntax, -D Name = Value . Most of the options should be set as shown in the preceding mvn archetype:generate command, but a few of the options can be modified, to customize the generated project. The following table shows the options that you can use to customize the generated API component project: Name Description groupId (Generic Maven option) Specifies the group ID of the generated Maven project. By default, this value also defines the Java package name for the generated classes. Hence, it is a good idea to choose this value to match the Java package name that you want. artifactId (Generic Maven option) Specifies the artifact ID of the generated Maven project. name The name of the API component. This value is used for generating class names in the generated code (hence, it is recommended that the name should start with a capital letter). scheme The default scheme to use in URIs for this component. You should make sure that this scheme does not conflict with the scheme of any existing Camel components. archetypeVersion (Generic Maven option) Ideally, this should be the Apache Camel version used by the container where you plan to deploy the component. If necessary, however, you can also modify the versions of Maven dependencies after you have generated the project. Structure of the generated project Assuming that the code generation step completes successfully, you should see a new directory, camel-api-example , which contains the new Maven project. If you look inside the camel-api-example directory, you will see that it has the following general structure: At the top level of the project is an aggregate POM, pom.xml , which is configured to build two sub-projects, as follows: camel-api-example-api The API sub-project (named as ArtifactId -api ) holds the Java API which you are about to turn into a component. 
If you are basing the API component on a Java API that you wrote yourself, you can put the Java API code directly into this project. The API sub-project can be used for one or more of the following purposes: To package up the Java API code (if it is not already available as a Maven package). To generate Javadoc for the Java API (providing the needed metadata for the API component framework). To generate the Java API code from an API description (for example, from a WADL description of a REST API). In some cases, however, you might not need to perform any of these tasks. For example, if the API component is based on a third-party API that already provides the Java API and Javadoc in a Maven package, you can delete the API sub-project. camel-api-example-component The component sub-project (named as ArtifactId -component ) holds the implementation of the new API component. This includes the component implementation classes and the configuration of the camel-api-component-maven-plugin Maven plug-in (which generates the API mapping classes from the Java API).
There are two things to note about this dependency: The Maven coordinates for the Javadoc are almost the same as for the Java API, except that you must also specify a classifier element, as follows: You must declare the Javadoc to have provided scope, as follows: For example, in the component POM, the Javadoc dependency is defined as follows: Defining the file metadata for Example File Hello The metadata for ExampleFileHello is provided in a signature file. In general, this file must be created manually, but it has quite a simple format, which consists of a list of method signatures (one on each line). The example code provides the signature file, file-sig-api.txt , in the directory, camel-api-example-component/signatures , which has the following contents: For more details about the signature file format, see the section called "Signature file metadata" . Configuring the API mapping One of the key features of the API component framework is that it automatically generates the code to perform API mapping . That is, generating stub code that maps endpoint URIs to method invocations on the Java API. The basic inputs to the API mapping are: the Java API, the Javadoc metadata, and/or the signature file metadata. The component that performs the API mapping is the camel-api-component-maven-plugin Maven plug-in, which is configured in the component POM. The following extract from the component POM shows how the camel-api-component-maven-plugin plug-in is configured: The plug-in is configured by the configuration element, which contains a single apis child element to configure the classes of the Java API. Each API class is configured by an api element, as follows: apiName The API name is a short name for the API class and is used as the endpoint-prefix part of an endpoint URI. Note If the API consists of just a single Java class, you can leave the apiName element empty, so that the endpoint-prefix becomes redundant, and you can then specify the endpoint URI using the format shown in the section called "URI format for a single API class" . proxyClass The proxy class element specifies the fully-qualified name of the API class. fromJavadoc If the API class is accompanied by Javadoc metadata, you must indicate this by including the fromJavadoc element and the Javadoc itself must also be specified in the Maven file, as a provided dependency (see the section called "Providing the Javadoc metadata in the component POM" ). fromSignatureFile If the API class is accompanied by signature file metadata, you must indicate this by including the fromSignatureFile element, where the content of this element specifies the location of the signature file. Note The signature files do not get included in the final package built by Maven, because these files are needed only at build time, not at run time. Generated component implementation The API component consists of the following core classes (which must be implemented for every Camel component), under the camel-api-example-component/src/main/java directory: ExampleComponent Represents the component itself. This class acts as a factory for endpoint instances (for example, instances of ExampleEndpoint ). ExampleEndpoint Represents an endpoint URI. This class acts as a factory for consumer endpoints (for example, ExampleConsumer ) and as a factory for producer endpoints (for example, ExampleProducer ). ExampleConsumer Represents a concrete instance of a consumer endpoint, which is capable of consuming messages from the location specified in the endpoint URI. 
ExampleProducer Represents a concrete instance of a producer endpoint, which is capable of sending messages to the location specified in the endpoint URI. ExampleConfiguration Can be used to define endpoint URI options. The URI options defined by this configuration class are not tied to any specific API class. That is, you can combine these URI options with any of the API classes or methods. This can be useful, for example, if you need to declare username and password credentials in order to connect to the remote service. The primary purpose of the ExampleConfiguration class is to provide values for parameters required to instantiate API classes, or classes that implement API interfaces. For example, these could be constructor parameters, or parameter values for a factory method or class. To implement a URI option, option , in this class, all that you need to do is implement the pair of accessor methods, get Option and set Option . The component framework automatically parses the endpoint URI and injects the option values at run time. ExampleComponent class The generated ExampleComponent class is defined as follows: The important method in this class is createEndpoint , which creates new endpoint instances. Typically, you do not need to change any of the default code in the component class. If there are any other objects with the same life cycle as this component, however, you might want to make those objects available from the component class (for example, by adding a methods to create those objects or by injecting those objects into the component). ExampleEndpoint class The generated ExampleEndpoint class is defined as follows: In the context of the API component framework, one of the key steps performed by the endpoint class is to create an API proxy . The API proxy is an instance from the target Java API, whose methods are invoked by the endpoint. Because a Java API typically consists of many classes, it is necessary to pick the appropriate API class, based on the endpoint-prefix appearing in the URI (recall that a URI has the general form, scheme :// endpoint-prefix / endpoint ). ExampleConsumer class The generated ExampleConsumer class is defined as follows: ExampleProducer class The generated ExampleProducer class is defined as follows: ExampleConfiguration class The generated ExampleConfiguration class is defined as follows: To add a URI option, option , to this class, define a field of the appropriate type, and implement a corresponding pair of accessor methods, get Option and set Option . The component framework automatically parses the endpoint URI and injects the option values at run time. Note This class is used to define general URI options, which can be combined with any API method. To define URI options tied to a specific API method, configure extra options in the API component Maven plug-in. See Section 47.7, "Extra Options" for details. URI format Recall the general format of an API component URI: In general, a URI maps to a specific method invocation on the Java API. For example, suppose you want to invoke the API method, ExampleJavadocHello.greetMe("Jane Doe") , the URI would be constructed, as follows: scheme The API component scheme, as specified when you generated the code with the Maven archetype. In this case, the scheme is example . endpoint-prefix The API name, which maps to the API class defined by the camel-api-component-maven-plugin Maven plug-in configuration. 
For the ExampleJavadocHello class, the relevant configuration is: Which shows that the required endpoint-prefix is hello-javadoc . endpoint The endpoint maps to the method name, which is greetMe . Option1=Value1 The URI options specify method parameters. The greetMe(String name) method takes the single parameter, name , which can be specified as name=Jane%20Doe . If you want to define default values for options, you can do this by overriding the interceptProperties method (see Section 46.4, "Programming Model" ). Putting together the pieces of the URI, we see that we can invoke ExampleJavadocHello.greetMe("Jane Doe") with the following URI: Default component instance In order to map the example URI scheme to the default component instance, the Maven archetype creates the following file under the camel-api-example-component sub-project: This resource file is what enables the Camel core to identify the component associated with the example URI scheme. Whenever you use an example:// URI in a route, Camel searches the classpath to look for the corresponding example resource file. The example file has the following contents: This enables the Camel core to create a default instance of the ExampleComponent component. The only time you would need to edit this file is if you refactor the name of the component class. 46.4. Programming Model Overview In the context of the API component framework, the main component implementation classes are derived from base classes in the org.apache.camel.util.component package. These base classes define some methods which you can (optionally) override when you are implementing your component. In this section, we provide a brief description of those methods and how you might use them in your own component implementation. Component methods to implement In addition to the generated method implementations (which you usually do not need to modify), you can optionally override some of the following methods in the Component class: doStart() (Optional) A callback to create resources for the component during a cold start. An alternative approach is to adopt the strategy of lazy initialization (creating resources only when they are needed). In fact, lazy initialization is often the best strategy, so the doStart method is often not needed. doStop() (Optional) A callback to invoke code while the component is stopping. Stopping a component means that all of its resources are shut down, internal state is deleted, caches are cleared, and so on. Note Camel guarantees that doStop is always called when the current CamelContext shuts down, even if the corresponding doStart was never called. doShutdown (Optional) A callback to invoke code while the CamelContext is shutting down. Whereas a stopped component can be restarted (with the semantics of a cold start), a component that gets shut down is completely finished. Hence, this callback represents the last chance to free up any resources belonging to the component. What else to implement in the Component class? The Component class is the natural place to hold references to objects that have the same (or similar) life cycle to the component object itself. For example, if a component uses OAuth security, it would be natural to hold references to the required OAuth objects in the Component class and to define methods in the Component class for creating the OAuth objects. 
Endpoint methods to implement You can modify some of the generated methods and, optionally, override some inherited methods in the Endpoint class, as follows: afterConfigureProperties() The main thing you need to do in this method is to create the appropriate type of proxy class (API class), to match the API name. The API name (which has already been extracted from the endpoint URI) is available either through the inherited apiName field or through the getApiName accessor. Typically, you would do a switch on the apiName field to create the corresponding proxy class. For example: getApiProxy(ApiMethod method, Map<String, Object> args) Override this method to return the proxy instance that you created in afterConfigureProperties . For example: In special cases, you might want to make the choice of proxy dependent on the API method and arguments. The getApiProxy gives you the flexibility to take this approach, if required. doStart() (Optional) A callback to create resources during a cold start. Has the same semantics as Component.doStart() . doStop() (Optional) A callback to invoke code while the component is stopping. Has the same semantics as Component.doStop() . doShutdown (Optional) A callback to invoke code while the component is shutting down. Has the same semantics as Component.doShutdown() . interceptPropertyNames(Set<String> propertyNames) (Optional) The API component framework uses the endpoint URI and supplied option values to determine which method to invoke (ambiguity could be due to overloading and aliases). If the component internally adds options or method parameters, however, the framework might need help in order to determine the right method to invoke. In this case, you must override the interceptPropertyNames method and add the extra (hidden or implicit) options to the propertyNames set. When the complete list of method parameters are provided in the propertyNames set, the framework will be able to identify the right method to invoke. Note You can override this method at the level of the Endpoint , Producer or Consumer class. The basic rule is, if an option affects both producer endpoints and consumer endpoints, override the method in the Endpoint class. interceptProperties(Map<String,Object> properties) (Optional) By overriding this method, you can modify or set the actual values of the options, before the API method is invoked. For example, you could use this method to set default values for some options, if necessary. In practice, it is often necessary to override both the interceptPropertyNames method and the interceptProperty method. Note You can override this method at the level of the Endpoint , Producer or Consumer class. The basic rule is, if an option affects both producer endpoints and consumer endpoints, override the method in the Endpoint class. Consumer methods to implement You can optionally override some inherited methods in the Consumer class, as follows: interceptPropertyNames(Set<String> propertyNames) (Optional) The semantics of this method are similar to Endpoint.interceptPropertyNames interceptProperties(Map<String,Object> properties) (Optional) The semantics of this method are similar to Endpoint.interceptProperties doInvokeMethod(Map<String, Object> args) (Optional) Overriding this method enables you to intercept the invocation of the Java API method. The most common reason for overriding this method is to customize the error handling around the method invocation. 
For example, a typical approach to overriding doInvokeMethod is shown in the following code fragment: You should invoke doInvokeMethod on the super-class, at some point in this implementation, to ensure that the Java API method gets invoked. interceptResult(Object methodResult, Exchange resultExchange) (Optional) Do some additional processing on the result of the API method invocation. For example, you could add custom headers to the Camel exchange object, resultExchange , at this point. Object splitResult(Object result) (Optional) By default, if the result of the method API invocation is a java.util.Collection object or a Java array, the API component framework splits the result into multiple exchange objects (so that a single invocation result is converted into multiple messages). If you want to change the default behaviour, you can override the splitResult method in the consumer endpoint. The result argument contains the result of the API message invocation. If you want to split the result, you should return an array type. Note You can also switch off the default splitting behaviour by setting consumer.splitResult=false on the endpoint URI. Producer methods to implement You can optionally override some inherited methods in the Producer class, as follows: interceptPropertyNames(Set<String> propertyNames) (Optional) The semantics of this method are similar to Endpoint.interceptPropertyNames interceptProperties(Map<String,Object> properties) (Optional) The semantics of this method are similar to Endpoint.interceptProperties doInvokeMethod(Map<String, Object> args) (Optional) The semantics of this method are similar to Consumer.doInvokeMethod . interceptResult(Object methodResult, Exchange resultExchange) (Optional) The semantics of this method are similar to Consumer.interceptResult . Note The Producer.splitResult() method is never called, so it is not possible to split an API method result in the same way as you can for a consumer endpoint. To get a similar effect for a producer endpoint, you can use Camel's split() DSL command (one of the standard enterprise integration patterns) to split Collection or array results. Consumer polling and threading model The default threading model for consumer endpoints in the API component framework is scheduled poll consumer . This implies that the API method in a consumer endpoint is invoked at regular, scheduled time intervals. For more details, see the section called "Scheduled poll consumer implementation" . 46.5. Sample Component Implementations Overview Several of the components distributed with Apache Camel have been implemented with the aid of the API component framework. If you want to learn more about the techniques for implementing Camel components using the framework, it is a good idea to study the source code of these component implementations. Box.com The Camel Box component shows how to model and invoke the third party Box.com Java SDK using the API component framework. It also demonstrates how the framework can be adapted to customize consumer polling, in order to support Box.com's long polling API. GoogleDrive The Camel GoogleDrive component demonstrates how the API component framework can handle even Method Object style Google APIs. In this case, URI options are mapped to a method object, which is then invoked by overriding the doInvoke method in the consumer and the producer. Olingo2 The Camel Olingo2 component demonstrates how a callback-based Asynchronous API can be wrapped using the API component framework. 
This example shows how asynchronous processing can be pushed into underlying resources, like HTTP NIO connections, to make Camel endpoints more resource efficient. | [
"mvn archetype:generate -DarchetypeGroupId=org.apache.camel.archetypes -DarchetypeArtifactId=camel-archetype-api-component -DarchetypeVersion=2.23.2.fuse-7_13_0-00013-redhat-00001 -DgroupId=org.jboss.fuse.example -DartifactId=camel-api-example -Dname=Example -Dscheme=example -Dversion=1.0-SNAPSHOT -DinteractiveMode=false",
"camel-api-example/ pom.xml camel-api-example-api/ camel-api-example-component/",
"// Java package org.jboss.fuse.example.api; /** * Sample API used by Example Component whose method signatures are read from Javadoc. */ public class ExampleJavadocHello { public String sayHi() { return \"Hello!\"; } public String greetMe(String name) { return \"Hello \" + name; } public String greetUs(String name1, String name2) { return \"Hello \" + name1 + \", \" + name2; } }",
"// Java package org.jboss.fuse.example.api; /** * Sample API used by Example Component whose method signatures are read from File. */ public class ExampleFileHello { public String sayHi() { return \"Hello!\"; } public String greetMe(String name) { return \"Hello \" + name; } public String greetUs(String name1, String name2) { return \"Hello \" + name1 + \", \" + name2; } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\"> <dependencies> <dependency> <groupId>org.jboss.fuse.example</groupId> <artifactId>camel-api-example-api</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> </project>",
"<classifier>javadoc</classifier>",
"<scope>provided</scope>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\"> <dependencies> <!-- Component API javadoc in provided scope to read API signatures --> <dependency> <groupId>org.jboss.fuse.example</groupId> <artifactId>camel-api-example-api</artifactId> <version>1.0-SNAPSHOT</version> <classifier>javadoc</classifier> <scope>provided</scope> </dependency> </dependencies> </project>",
"public String sayHi(); public String greetMe(String name); public String greetUs(String name1, String name2);",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\"> <build> <defaultGoal>install</defaultGoal> <plugins> <!-- generate Component source and test source --> <plugin> <groupId>org.apache.camel</groupId> <artifactId>camel-api-component-maven-plugin</artifactId> <executions> <execution> <id>generate-test-component-classes</id> <goals> <goal>fromApis</goal> </goals> <configuration> <apis> <api> <apiName>hello-file</apiName> <proxyClass>org.jboss.fuse.example.api.ExampleFileHello</proxyClass> <fromSignatureFile>signatures/file-sig-api.txt</fromSignatureFile> </api> <api> <apiName>hello-javadoc</apiName> <proxyClass>org.jboss.fuse.example.api.ExampleJavadocHello</proxyClass> <fromJavadoc/> </api> </apis> </configuration> </execution> </executions> </plugin> </plugins> </build> </project>",
"// Java package org.jboss.fuse.example; import org.apache.camel.CamelContext; import org.apache.camel.Endpoint; import org.apache.camel.spi.UriEndpoint; import org.apache.camel.util.component.AbstractApiComponent; import org.jboss.fuse.example.internal.ExampleApiCollection; import org.jboss.fuse.example.internal.ExampleApiName; /** * Represents the component that manages {@link ExampleEndpoint}. */ @UriEndpoint(scheme = \"example\", consumerClass = ExampleConsumer.class, consumerPrefix = \"consumer\") public class ExampleComponent extends AbstractApiComponent<ExampleApiName, ExampleConfiguration, ExampleApiCollection> { public ExampleComponent() { super(ExampleEndpoint.class, ExampleApiName.class, ExampleApiCollection.getCollection()); } public ExampleComponent(CamelContext context) { super(context, ExampleEndpoint.class, ExampleApiName.class, ExampleApiCollection.getCollection()); } @Override protected ExampleApiName getApiName(String apiNameStr) throws IllegalArgumentException { return ExampleApiName.fromValue(apiNameStr); } @Override protected Endpoint createEndpoint(String uri, String methodName, ExampleApiName apiName, ExampleConfiguration endpointConfiguration) { return new ExampleEndpoint(uri, this, apiName, methodName, endpointConfiguration); } }",
"// Java package org.jboss.fuse.example; import java.util.Map; import org.apache.camel.Consumer; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.spi.UriEndpoint; import org.apache.camel.util.component.AbstractApiEndpoint; import org.apache.camel.util.component.ApiMethod; import org.apache.camel.util.component.ApiMethodPropertiesHelper; import org.jboss.fuse.example.api.ExampleFileHello; import org.jboss.fuse.example.api.ExampleJavadocHello; import org.jboss.fuse.example.internal.ExampleApiCollection; import org.jboss.fuse.example.internal.ExampleApiName; import org.jboss.fuse.example.internal.ExampleConstants; import org.jboss.fuse.example.internal.ExamplePropertiesHelper; /** * Represents a Example endpoint. */ @UriEndpoint(scheme = \"example\", consumerClass = ExampleConsumer.class, consumerPrefix = \"consumer\") public class ExampleEndpoint extends AbstractApiEndpoint<ExampleApiName, ExampleConfiguration> { // TODO create and manage API proxy private Object apiProxy; public ExampleEndpoint(String uri, ExampleComponent component, ExampleApiName apiName, String methodName, ExampleConfiguration endpointConfiguration) { super(uri, component, apiName, methodName, ExampleApiCollection.getCollection().getHelper(apiName), endpointConfiguration); } public Producer createProducer() throws Exception { return new ExampleProducer(this); } public Consumer createConsumer(Processor processor) throws Exception { // make sure inBody is not set for consumers if (inBody != null) { throw new IllegalArgumentException(\"Option inBody is not supported for consumer endpoint\"); } final ExampleConsumer consumer = new ExampleConsumer(this, processor); // also set consumer.* properties configureConsumer(consumer); return consumer; } @Override protected ApiMethodPropertiesHelper<ExampleConfiguration> getPropertiesHelper() { return ExamplePropertiesHelper.getHelper(); } protected String getThreadProfileName() { return ExampleConstants.THREAD_PROFILE_NAME; } @Override protected void afterConfigureProperties() { // TODO create API proxy, set connection properties, etc. switch (apiName) { case HELLO_FILE: apiProxy = new ExampleFileHello(); break; case HELLO_JAVADOC: apiProxy = new ExampleJavadocHello(); break; default: throw new IllegalArgumentException(\"Invalid API name \" + apiName); } } @Override public Object getApiProxy(ApiMethod method, Map<String, Object> args) { return apiProxy; } }",
"// Java package org.jboss.fuse.example; import org.apache.camel.Processor; import org.apache.camel.util.component.AbstractApiConsumer; import org.jboss.fuse.example.internal.ExampleApiName; /** * The Example consumer. */ public class ExampleConsumer extends AbstractApiConsumer<ExampleApiName, ExampleConfiguration> { public ExampleConsumer(ExampleEndpoint endpoint, Processor processor) { super(endpoint, processor); } }",
"// Java package org.jboss.fuse.example; import org.apache.camel.util.component.AbstractApiProducer; import org.jboss.fuse.example.internal.ExampleApiName; import org.jboss.fuse.example.internal.ExamplePropertiesHelper; /** * The Example producer. */ public class ExampleProducer extends AbstractApiProducer<ExampleApiName, ExampleConfiguration> { public ExampleProducer(ExampleEndpoint endpoint) { super(endpoint, ExamplePropertiesHelper.getHelper()); } }",
"// Java package org.jboss.fuse.example; import org.apache.camel.spi.UriParams; /** * Component configuration for Example component. */ @UriParams public class ExampleConfiguration { // TODO add component configuration properties }",
"scheme :// endpoint-prefix / endpoint ? Option1 = Value1 &...& OptionN = ValueN",
"<configuration> <apis> <api> <apiName> hello-javadoc </apiName> <proxyClass>org.jboss.fuse.example.api.ExampleJavadocHello</proxyClass> <fromJavadoc/> </api> </apis> </configuration>",
"example://hello-javadoc/greetMe?name=Jane%20Doe",
"src/main/resources/META-INF/services/org/apache/camel/component/example",
"class=org.jboss.fuse.example.ExampleComponent",
"// Java private Object apiProxy; @Override protected void afterConfigureProperties() { // TODO create API proxy, set connection properties, etc. switch (apiName) { case HELLO_FILE: apiProxy = new ExampleFileHello(); break; case HELLO_JAVADOC: apiProxy = new ExampleJavadocHello(); break; default: throw new IllegalArgumentException(\"Invalid API name \" + apiName); } }",
"@Override public Object getApiProxy(ApiMethod method, Map<String, Object> args) { return apiProxy; }",
"// Java @Override protected Object doInvokeMethod(Map<String, Object> args) { try { return super.doInvokeMethod(args); } catch (RuntimeCamelException e) { // TODO - Insert custom error handling here! } }"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/GetStart |
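To illustrate the interceptPropertyNames and interceptProperties hooks described in the programming model, the following sketch adds a default value for the name option to the generated ExampleEndpoint class. The choice of option, the default value, and overriding at the endpoint level are assumptions made for this example; only the two overridden methods are shown, and the constructor plus the other generated methods remain exactly as in the generated class above.

```java
// Java
package org.jboss.fuse.example;

import java.util.Map;
import java.util.Set;

// Additions to the generated ExampleEndpoint class (constructor and other
// generated methods omitted for brevity; remaining imports as generated).
public class ExampleEndpoint extends AbstractApiEndpoint<ExampleApiName, ExampleConfiguration> {

    @Override
    public void interceptPropertyNames(Set<String> propertyNames) {
        // Declare the implicit 'name' option so the framework can resolve the
        // greetMe(String name) method even when the URI omits the option.
        propertyNames.add("name");
    }

    @Override
    public void interceptProperties(Map<String, Object> properties) {
        // Supply a default value when the URI does not set the option, for example
        // example://hello-javadoc/greetMe with no query parameters.
        properties.putIfAbsent("name", "World");
    }
}
```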
Chapter 1. Red Hat build of MicroShift 4.18 release notes | Chapter 1. Red Hat build of MicroShift 4.18 release notes Red Hat build of MicroShift (MicroShift) provides developers and IT organizations with small-form-factor and edge computing delivered as an application that customers can deploy on top of their managed Red Hat Enterprise Linux (RHEL) devices at the edge. Built on OpenShift Container Platform and Kubernetes, MicroShift provides an efficient way to operate single-node clusters in low-resource edge environments. MicroShift is designed to make control plane restarts economical and be lifecycle-managed as a single unit by the operating system. Updates, roll-backs, and configuration changes consist of simply staging another version in parallel and then - without relying on a network - flipping to and from that version and restarting. 1.1. About this release Version 4.18 of MicroShift includes new features and enhancements. Update to the latest version of MicroShift to receive all of the latest features, bug fixes, and security updates. MicroShift is derived from OpenShift Container Platform 4.18 and uses the CRI-O container runtime. New features, changes, and known issues that pertain to MicroShift are included in this topic. You can deploy MicroShift clusters to on-premise, cloud, disconnected, and offline environments. MicroShift 4.18 is supported on Red Hat Enterprise Linux (RHEL) 9.4. For lifecycle information, see the Red Hat build of MicroShift Life Cycle Policy . 1.2. New features and enhancements This release adds improvements related to the following components and concepts. 1.2.1. Updating Updating two minor EUS versions in a single step is supported in 4.18. Updates for both single-version minor releases and patch releases are also supported. See Update options with Red Hat build of MicroShift and Red Hat Device Edge for details. 1.2.2. Configuring 1.2.2.1. Drop-in configuration snippets now available With this release, make configuring your MicroShift instances easier by using drop-in configuration snippets. See Using configuration snippets for details. 1.2.2.2. Control ingress for your use case with additional parameters With this update, you have greater control over ingress to your MicroShift cluster by configuring more parameters. You can use these parameters to define secure connections and the number of connections per pod, plus more. See Using ingress control for a MicroShift cluster for details. 1.2.3. Running applications 1.2.3.1. Deleting or updating Kustomize manifest resources now documented With this release, you can delete or upgrade the Kustomize manifest resources. For more information, see Deleting or updating Kustomize manifest resources . 1.2.3.2. Greenboot example outputs for image mode for RHEL available With this release, you can see detailed explanations and example outputs to check whether greenboot workload scripts are running properly. For more information, see Testing a workload health check script . 1.2.4. Backup and restore 1.2.4.1. Automated recovery from manual backups With this release, you can automatically restore data from manual backups when MicroShift fails to start by using the auto-recovery feature. For more information, see Automated recovery from manual backups . 1.2.5. Documentation enhancements 1.2.5.1. Updated content in the RHEL image mode section With this release, the contents of the "Installing with RHEL image mode" section, which were previously developer-focused, are now updated to be MicroShift administrator-focused. 
For more information, see Using image mode for RHEL with MicroShift . 1.3. Technology Preview features Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope 1.3.1. Red Hat Enterprise Linux (RHEL) image mode Technology Preview feature You can install MicroShift using a bootc container image. Image mode for RHEL is a Technology Preview deployment method that uses a container-native approach to build, deploy and manage the operating system as a bootc container image. See Using image mode for RHEL with MicroShift for more information. 1.4. Deprecated and removed features With this release, the CSI snapshot webhook feature available in previous releases is removed. The webhook is replaced by CEL validation rules for snapshot objects. 1.5. Bug fixes Installation Previously, the greenboot health check marked some healthy Image mode for RHEL systems as unhealthy due to a 300-second timeout, which was insufficient for systems with slow networks. Beginning in Red Hat build of MicroShift 4.18, the default greenboot wait timeout has been extended to 600 seconds to provide more time to download images and ensure accurate system checks. ( OCPBUGS-47463 ) 1.6. Additional release notes Release notes for related components and products are available in the following documentation: Note The following release notes are for downstream Red Hat products only; upstream or community release notes for related products are not included. 1.6.1. GitOps release notes See Red Hat OpenShift GitOps 1.15: Highlights of what is new and what has changed with this OpenShift GitOps release for more information. You can also go to the Red Hat package download page and search for "gitops" if you just need the latest package. 1.6.2. OpenShift Container Platform release notes See the OpenShift Container Platform Release Notes for information about the Operator Lifecycle Manager and other components. Not all of the changes to OpenShift Container Platform apply to MicroShift. See the specific MicroShift implementation of an Operator or function for more information. 1.6.3. Red Hat Enterprise Linux (RHEL) release notes See the Release Notes for Red Hat Enterprise Linux 9.4 for more information about RHEL. 1.7. Asynchronous errata updates Security, bug fix, and enhancement updates for MicroShift 4.18 are released as asynchronous errata through the Red Hat Network. All MicroShift 4.18 errata are available on the Red Hat Customer Portal . For more information about asynchronous errata, read the MicroShift Life Cycle . Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, you are notified through email whenever new errata relevant to your registered systems are released. Note Red Hat Customer Portal user accounts must have systems registered and consuming MicroShift entitlements for MicroShift errata notification emails to be generated. This section is updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of MicroShift 4.18. Versioned asynchronous releases, for example with the form MicroShift 4.18.z, are detailed in the following subsections. MicroShift uses a priority-based release cadence to provide the most important updates with the least disruption.
Important For any Red Hat build of MicroShift release, always review the Update options with Red Hat build of MicroShift and Red Hat Device Edge documentation before proceeding with an update. 1.7.1. RHEA-2024:6124 - MicroShift 4.18.1 bug fix and security update advisory Issued: 25 February 2025 Red Hat build of MicroShift release 4.18.1 is now available. Bug fixes and enhancements are listed in the RHEA-2024:6124 advisory. Release notes for bug fixes and enhancements are provided in this documentation. The images that are included in the update are provided by the OpenShift Container Platform RHSA-2024:6122 advisory. See the latest images included with MicroShift by listing the contents of the MicroShift RPM release package . 1.7.1.1. Known issue A failed greenboot health check flag does not clear when an RPM-install-based MicroShift service is updated without rebooting the system. As a consequence, greenboot health checks for optional components continue to fail because the systemctl restart greenboot-healthcheck.service command fails while the flag is present. ( OCPBUGS-51198 ) 1.7.2. RHBA-2025:1953 - MicroShift 4.18.2 bug fix and enhancement advisory Issued: 4 March 2025 Red Hat build of MicroShift release 4.18.2 is now available. Bug fixes and enhancements are listed in the RHBA-2025:1953 advisory. Release notes for bug fixes and enhancements are provided in this documentation. The images that are included in the update are provided by the OpenShift Container Platform RHBA-2025:1904 advisory. See the latest images included with MicroShift by listing the contents of the MicroShift RPM release package . 1.7.2.1. Bug fix Previously, a failed greenboot health check flag did not clear when an RPM-install-based MicroShift service was updated without rebooting the system. As a consequence, greenboot health checks for optional components continue to fail because the systemctl restart greenboot-healthcheck.service command fails while the flag is present. With this release, the primary MicroShift greenboot health check clears the condition that causes optional component health checks to continue to fail. ( OCPBUGS-51198 ) | null | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/red_hat_build_of_microshift_release_notes/microshift-4-18-release-notes |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue . Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_microsoft_azure_data_into_cost_management/proc-providing-feedback-on-redhat-documentation |
Chapter 3. Requirements for upgrading OpenShift AI | Chapter 3. Requirements for upgrading OpenShift AI When upgrading OpenShift AI, you must complete the following tasks. Checking the components in the DataScienceCluster object When you upgrade Red Hat OpenShift AI, the upgrade process automatically uses the values from the DataScienceCluster object. After the upgrade, you should inspect the DataScienceCluster object and optionally update the status of any components as described in Updating the installation status of Red Hat OpenShift AI components by using the web console . Note New components are not automatically added to the DataScienceCluster object during upgrade. If you want to use a new component, you must manually edit the DataScienceCluster object to add the component entry. Migrating data science pipelines Previously, data science pipelines in OpenShift AI were based on KubeFlow Pipelines v1. Data science pipelines are now based on KubeFlow Pipelines v2, which uses a different workflow engine. Data science pipelines 2.0 is enabled and deployed by default in OpenShift AI. Data science pipelines 1.0 resources are no longer supported or managed by OpenShift AI. It is no longer possible to deploy, view, or edit the details of pipelines that are based on data science pipelines 1.0 from either the dashboard or the KFP API server. OpenShift AI does not automatically migrate existing data science pipelines 1.0 instances to 2.0. Before upgrading OpenShift AI, you must manually migrate your existing data science pipelines 1.0 instances. For more information, see Migrating to data science pipelines 2.0 . Important Data science pipelines 2.0 contains an installation of Argo Workflows. OpenShift AI does not support direct usage of this installation of Argo Workflows. If you upgrade to OpenShift AI with data science pipelines 2.0 and an Argo Workflows installation that is not installed by data science pipelines exists on your cluster, OpenShift AI components will not be upgraded. To complete the component upgrade, disable data science pipelines or remove the separate installation of Argo Workflows. The component upgrade will complete automatically. Addressing KServe requirements For the KServe component, which is used by the single-model serving platform to serve large models, you must meet the following requirements: To fully install and use KServe, you must also install Operators for Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh and perform additional configuration. For more information, see Serving large models . If you want to add an authorization provider for the single-model serving platform, you must install the Red Hat - Authorino Operator. For information, see Adding an authorization provider for the single-model serving platform . Updating workflows interacting with OdhDashboardConfig resource Previously, cluster administrators used the groupsConfig option in the OdhDashboardConfig resource to manage the OpenShift groups (both administrators and non-administrators) that can access the OpenShift AI dashboard. Starting with OpenShift AI 2.17, this functionality has moved to the Auth resource. If you have workflows (such as GitOps workflows) that interact with OdhDashboardConfig , you must update them to reference the Auth resource instead. Table 3.1. 
User management resource update

                OpenShift AI 2.16 and earlier      OpenShift AI 2.17 and later
apiVersion      opendatahub.io/v1alpha             services.platform.opendatahub.io/v1alpha1
kind            OdhDashboardConfig                 Auth
name            odh-dashboard-config               auth
Admin groups    spec.groupsConfig.adminGroups      spec.adminGroups
User groups     spec.groupsConfig.allowedGroups    spec.allowedGroups | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/upgrading_openshift_ai_cloud_service/requirements-for-upgrading-openshift-ai_upgrade
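For workflows (such as GitOps) that previously edited OdhDashboardConfig, the following is a minimal sketch of the replacement Auth resource, assembled from the field names in the table above. The overall spec layout and the group values are assumptions for illustration only; check the Auth custom resource in your cluster for the authoritative schema before applying anything.

apiVersion: services.platform.opendatahub.io/v1alpha1
kind: Auth
metadata:
  name: auth
spec:
  adminGroups:
  - rhods-admins            # placeholder: your OpenShift AI administrator group
  allowedGroups:
  - system:authenticated    # placeholder: groups allowed to use the dashboard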
Chapter 8. Understanding and creating service accounts | Chapter 8. Understanding and creating service accounts 8.1. Service accounts overview A service account is an OpenShift Dedicated account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Dedicated CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. 8.1.1. Automatically generated image pull secrets By default, OpenShift Dedicated creates an image pull secret for each service account. Note Prior to OpenShift Dedicated 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with OpenShift Dedicated 4.16, this service account API token secret is no longer created. After upgrading to 4, any existing long-lived service account API token secrets are not deleted and will continue to function. For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform . This image pull secret is necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, an image pull secret is not generated for each service account. When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically. 8.2. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none> 8.3. Granting roles to service accounts You can grant roles to service accounts in the same way that you grant roles to a regular user account. You can modify the service accounts for the current project. 
For example, to add the view role to the robot service account in the top-secret project: USD oc policy add-role-to-user view system:serviceaccount:top-secret:robot Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret You can also grant access to a specific service account in a project. For example, from the project to which the service account belongs, use the -z flag and specify the <service_account_name> USD oc policy add-role-to-user <role_name> -z <service_account_name> Important If you want to grant access to a specific service account in a project, use the -z flag. Using this flag helps prevent typos and ensures that access is granted to only the specified service account. Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name> To modify a different namespace, you can use the -n option to indicate the project namespace it applies to, as shown in the following examples. For example, to allow all service accounts in all projects to view resources in the my-project project: USD oc policy add-role-to-group view system:serviceaccounts -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts To allow all service accounts in the managers project to edit resources in the my-project project: USD oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers | [
"system:serviceaccount:<project>:<name>",
"oc get sa",
"NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d",
"oc create sa <service_account_name> 1",
"serviceaccount \"robot\" created",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>",
"oc describe sa robot",
"Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none>",
"oc policy add-role-to-user view system:serviceaccount:top-secret:robot",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret",
"oc policy add-role-to-user <role_name> -z <service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>",
"oc policy add-role-to-group view system:serviceaccounts -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts",
"oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/authentication_and_authorization/understanding-and-creating-service-accounts |
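Because long-lived API token secrets are no longer generated automatically for service accounts, a common follow-up is to request a short-lived token on demand and use it against the API. The sketch below continues the robot service account in project1 from the examples above; the API server URL is a placeholder, and the oc create token command is only available in recent oc clients, so confirm your client version supports it.

oc create token robot -n project1 --duration=1h

TOKEN=$(oc create token robot -n project1)
curl -k -H "Authorization: Bearer $TOKEN" https://<api_server>:6443/apis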
Chapter 2. Enhancements | Chapter 2. Enhancements The enhancements added in this release are outlined below. 2.1. Kafka 2.8.0 enhancements For an overview of the enhancements introduced with Kafka 2.8.0, refer to the Kafka 2.8.0 Release Notes . 2.2. OAuth 2.0 authentication enhancements Configure audience and scope You can now configure the oauth.audience and oauth.scope properties and pass their values as parameters when obtaining a token. Both properties are configured in the OAuth 2.0 authentication listener configuration. Use these properties in the following scenarios: When obtaining an access token for inter-broker authentication In the name of a client for OAuth 2.0 over PLAIN client authentication, using a clientId and secret These properties affect whether a client can obtain a token and the content of the token. They do not affect token validation rules imposed by the listener. Example configuration for oauth.audience and oauth.scope properties listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ # ... oauth.token.endpoint.uri=" https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token " \ oauth.scope="" SCOPE "" \ oauth.audience=" AUDIENCE " \ oauth.check.audience="true" \ # ... Your authorization server might provide aud (audience) claims in JWT access tokens. When audience checks are enabled by setting oauth.check.audience="true" , the Kafka broker rejects tokens that do not contain the broker's clientId in their aud claims. Audience checks are disabled by default. See Configuring OAuth 2.0 support for Kafka brokers Token endpoint not required with OAuth 2.0 over PLAIN The oauth.token.endpoint.uri parameter is no longer required when using the "client ID and secret" method for OAuth 2.0 over PLAIN authentication. Example OAuth 2.0 over PLAIN listener configuration with token endpoint URI specified listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \ oauth.valid.issuer.uri="https://__AUTH-SERVER-ADDRESS__" \ oauth.jwks.endpoint.uri="https://__AUTH-SERVER-ADDRESS__/jwks" \ oauth.username.claim="preferred_username" \ oauth.token.endpoint.uri="http://__AUTH_SERVER__/auth/realms/__REALM__/protocol/openid-connect/token" ; If the oauth.token.endpoint.uri is not specified, the listener treats the: username parameter as the account name password parameter as the raw access token, which is passed to the authorization server for validation (the same behavior as for OAUTHBEARER authentication) The behavior of the "long-lived access token" method for OAuth 2.0 over PLAIN authentication is unchanged. The oauth.token.endpoint.uri is not required when using this method. See OAuth 2.0 Kafka broker configuration | [
"listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required # oauth.token.endpoint.uri=\" https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token \" oauth.scope=\"\" SCOPE \"\" oauth.audience=\" AUDIENCE \" oauth.check.audience=\"true\" #",
"listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required oauth.valid.issuer.uri=\"https://__AUTH-SERVER-ADDRESS__\" oauth.jwks.endpoint.uri=\"https://__AUTH-SERVER-ADDRESS__/jwks\" oauth.username.claim=\"preferred_username\" oauth.token.endpoint.uri=\"http://__AUTH_SERVER__/auth/realms/__REALM__/protocol/openid-connect/token\" ;"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_amq_streams_1.8_on_rhel/enhancements-str |
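On the client side, the "client ID and secret" method over PLAIN described above needs nothing beyond standard SASL PLAIN settings, because the listener exchanges the credentials for a token. The sketch below is a client properties file for such a listener; all values are placeholders, and security.protocol should be adjusted to match how the listener is actually exposed.

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="my-client-id" \
  password="my-client-secret" ;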
Chapter 102. FHIR JSon DataFormat | Chapter 102. FHIR JSon DataFormat Available as of Camel version 2.21 Available as of Camel version 2.21 The FHIR-JSON Data Format leverages HAPI-FHIR's JSON parser to parse to/from JSON format to/from a HAPI-FHIR's IBaseResource . 102.1. FHIR JSON Format Options The FHIR JSon dataformat supports 14 options, which are listed below. Name Default Java Type Description fhirVersion DSTU3 String The version of FHIR to use. Possible values are: DSTU2,DSTU2_HL7ORG,DSTU2_1,DSTU3,R4 prettyPrint false Boolean Sets the pretty print flag, meaning that the parser will encode resources with human-readable spacing and newlines between elements instead of condensing output as much as possible. serverBaseUrl String Sets the server's base URL used by this parser. If a value is set, resource references will be turned into relative references if they are provided as absolute URLs but have a base matching the given base. omitResourceId false Boolean If set to true (default is false) the ID of any resources being encoded will not be included in the output. Note that this does not apply to contained resources, only to root resources. In other words, if this is set to true, contained resources will still have local IDs but the outer/containing ID will not have an ID. encodeElementsAppliesToResourceTypes Set If provided, tells the parse which resource types to apply link #setEncodeElements(Set) encode elements to. Any resource types not specified here will be encoded completely, with no elements excluded. encodeElementsAppliesToChildResourcesOnly false Boolean If set to true (default is false), the values supplied to setEncodeElements(Set) will not be applied to the root resource (typically a Bundle), but will be applied to any sub-resources contained within it (i.e. search result resources in that bundle) encodeElements Set If provided, specifies the elements which should be encoded, to the exclusion of all others. Valid values for this field would include: Patient - Encode patient and all its children Patient.name - Encode only the patient's name Patient.name.family - Encode only the patient's family name .text - Encode the text element on any resource (only the very first position may contain a wildcard) .(mandatory) - This is a special case which causes any mandatory fields (min 0) to be encoded dontEncodeElements Set If provided, specifies the elements which should NOT be encoded. Valid values for this field would include: Patient - Don't encode patient and all its children Patient.name - Don't encode the patient's name Patient.name.family - Don't encode the patient's family name .text - Don't encode the text element on any resource (only the very first position may contain a wildcard) DSTU2 note: Note that values including meta, such as Patient.meta will work for DSTU2 parsers, but values with subelements on meta such as Patient.meta.lastUpdated will only work in DSTU3 mode. stripVersionsFromReferences false Boolean If set to true (which is the default), resource references containing a version will have the version removed when the resource is encoded. This is generally good behaviour because in most situations, references from one resource to another should be to the resource by ID, not by ID and version. In some cases though, it may be desirable to preserve the version in resource links. In that case, this value should be set to false. This method provides the ability to globally disable reference encoding. 
If finer-grained control is needed, use setDontStripVersionsFromReferencesAtPaths(List) overrideResourceIdWithBundleEntryFullUrl false Boolean If set to true (which is the default), the Bundle.entry.fullUrl will override the Bundle.entry.resource's resource id if the fullUrl is defined. This behavior happens when parsing the source data into a Bundle object. Set this to false if this is not the desired behavior (e.g. the client code wishes to perform additional validation checks between the fullUrl and the resource id). summaryMode false Boolean If set to true (default is false) only elements marked by the FHIR specification as being summary elements will be included. suppressNarratives false Boolean If set to true (default is false), narratives will not be included in the encoded values. dontStripVersionsFromReferencesAtPaths List If supplied value(s), any resource references at the specified paths will have their resource versions encoded instead of being automatically stripped during the encoding process. This setting has no effect on the parsing process. This method provides a finer-grained level of control than setStripVersionsFromReferences(Boolean) and any paths specified by this method will be encoded even if setStripVersionsFromReferences(Boolean) has been set to true (which is the default) contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 102.2. Spring Boot Auto-Configuration The component supports 15 options, which are listed below. Name Description Default Type camel.dataformat.fhirjson.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.fhirjson.dont-encode-elements If provided, specifies the elements which should NOT be encoded. Valid values for this field would include: Patient - Don't encode patient and all its children Patient.name - Don't encode the patient's name Patient.name.family - Don't encode the patient's family name .text - Don't encode the text element on any resource (only the very first position may contain a wildcard) DSTU2 note: Note that values including meta, such as Patient.meta will work for DSTU2 parsers, but values with subelements on meta such as Patient.meta.lastUpdated will only work in DSTU3 mode. Set camel.dataformat.fhirjson.dont-strip-versions-from-references-at-paths If supplied value(s), any resource references at the specified paths will have their resource versions encoded instead of being automatically stripped during the encoding process. This setting has no effect on the parsing process. This method provides a finer-grained level of control than setStripVersionsFromReferences(Boolean) and any paths specified by this method will be encoded even if setStripVersionsFromReferences(Boolean) has been set to true (which is the default) List camel.dataformat.fhirjson.enabled Whether to enable auto configuration of the fhirJson data format. This is enabled by default. Boolean camel.dataformat.fhirjson.encode-elements If provided, specifies the elements which should be encoded, to the exclusion of all others. 
Valid values for this field would include: Patient - Encode patient and all its children Patient.name - Encode only the patient's name Patient.name.family - Encode only the patient's family name .text - Encode the text element on any resource (only the very first position may contain a wildcard) .(mandatory) - This is a special case which causes any mandatory fields (min 0) to be encoded Set camel.dataformat.fhirjson.encode-elements-applies-to-child-resources-only If set to true (default is false), the values supplied to setEncodeElements(Set) will not be applied to the root resource (typically a Bundle), but will be applied to any sub-resources contained within it (i.e. search result resources in that bundle) false Boolean camel.dataformat.fhirjson.encode-elements-applies-to-resource-types If provided, tells the parse which resource types to apply link #setEncodeElements(Set) encode elements to. Any resource types not specified here will be encoded completely, with no elements excluded. Set camel.dataformat.fhirjson.fhir-version The version of FHIR to use. Possible values are: DSTU2,DSTU2_HL7ORG,DSTU2_1,DSTU3,R4 DSTU3 String camel.dataformat.fhirjson.omit-resource-id If set to true (default is false) the ID of any resources being encoded will not be included in the output. Note that this does not apply to contained resources, only to root resources. In other words, if this is set to true, contained resources will still have local IDs but the outer/containing ID will not have an ID. false Boolean camel.dataformat.fhirjson.override-resource-id-with-bundle-entry-full-url If set to true (which is the default), the Bundle.entry.fullUrl will override the Bundle.entry.resource's resource id if the fullUrl is defined. This behavior happens when parsing the source data into a Bundle object. Set this to false if this is not the desired behavior (e.g. the client code wishes to perform additional validation checks between the fullUrl and the resource id). false Boolean camel.dataformat.fhirjson.pretty-print Sets the pretty print flag, meaning that the parser will encode resources with human-readable spacing and newlines between elements instead of condensing output as much as possible. false Boolean camel.dataformat.fhirjson.server-base-url Sets the server's base URL used by this parser. If a value is set, resource references will be turned into relative references if they are provided as absolute URLs but have a base matching the given base. String camel.dataformat.fhirjson.strip-versions-from-references If set to true (which is the default), resource references containing a version will have the version removed when the resource is encoded. This is generally good behaviour because in most situations, references from one resource to another should be to the resource by ID, not by ID and version. In some cases though, it may be desirable to preserve the version in resource links. In that case, this value should be set to false. This method provides the ability to globally disable reference encoding. If finer-grained control is needed, use setDontStripVersionsFromReferencesAtPaths(List) false Boolean camel.dataformat.fhirjson.summary-mode If set to true (default is false) only elements marked by the FHIR specification as being summary elements will be included. false Boolean camel.dataformat.fhirjson.suppress-narratives If set to true (default is false), narratives will not be included in the encoded values. 
false Boolean | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/fhirjson-dataformat |
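To put the options above in context, here is a sketch of marshalling and unmarshalling with this data format in a Java DSL route. It assumes camel-fhir and HAPI-FHIR are on the classpath and that the FhirJsonDataFormat class in org.apache.camel.component.fhir exposes setters matching the option names listed above; treat the class and method names as illustrative rather than a verbatim API reference.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.fhir.FhirJsonDataFormat;

public class FhirJsonRoute extends RouteBuilder {
    @Override
    public void configure() {
        FhirJsonDataFormat fhirJson = new FhirJsonDataFormat();
        fhirJson.setFhirVersion("DSTU3");   // fhirVersion option
        fhirJson.setPrettyPrint(true);      // prettyPrint option

        from("direct:toJson")
            .marshal(fhirJson)              // IBaseResource -> FHIR JSON
            .to("mock:json");

        from("direct:fromJson")
            .unmarshal(fhirJson)            // FHIR JSON -> IBaseResource
            .to("mock:resource");
    }
}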
6.3.4. Creating the New Logical Volume | 6.3.4. Creating the New Logical Volume After creating the new volume group, you can create the new logical volume yourlv . | [
"lvcreate -L5G -n yourlv yourvg Logical volume \"yourlv\" created"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/vol_create_ex3 |
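After the logical volume exists, a typical next step is to verify it and put a file system on it. The commands below are a sketch that continues the yourvg/yourlv example; the file system type and mount point are arbitrary choices for illustration, not part of the original procedure.

lvs yourvg
mkfs.ext4 /dev/yourvg/yourlv
mkdir -p /mnt/yourlv
mount /dev/yourvg/yourlv /mnt/yourlv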
Chapter 6. Uninstalling power monitoring | Chapter 6. Uninstalling power monitoring Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can uninstall power monitoring by deleting the Kepler instance and then the Power monitoring Operator in the OpenShift Container Platform web console. 6.1. Deleting Kepler You can delete Kepler by removing the Kepler instance of the Kepler custom resource definition (CRD) from the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. Procedure In the Administrator perspective of the web console, go to Operators → Installed Operators . Click Power monitoring for Red Hat OpenShift from the Installed Operators list and go to the Kepler tab. Locate the Kepler instance entry in the list. Click the Options menu for this entry and select Delete Kepler . In the Delete Kepler? dialog, click Delete to delete the Kepler instance. 6.2. Uninstalling the Power monitoring Operator If you installed the Power monitoring Operator by using OperatorHub, you can uninstall it from the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. Procedure Delete the Kepler instance. Warning Ensure that you have deleted the Kepler instance before uninstalling the Power monitoring Operator. Go to Operators → Installed Operators . Locate the Power monitoring for Red Hat OpenShift entry in the list. Click the Options menu for this entry and select Uninstall Operator . In the Uninstall Operator? dialog, click Uninstall to uninstall the Power monitoring Operator. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/power_monitoring/uninstalling-power-monitoring
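If you prefer the CLI, the same cleanup can be sketched with oc, keeping the order from the procedure above: delete the Kepler instance first, then remove the Operator by deleting its Subscription and ClusterServiceVersion. The instance, subscription, CSV, and namespace names below are placeholders — look them up in your cluster first — and the web console remains the documented path.

oc get kepler
oc delete kepler <instance_name>
oc delete subscription <subscription_name> -n <operator_namespace>
oc delete clusterserviceversion <csv_name> -n <operator_namespace>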
3.9. Considerations for Using Quorum Disk | 3.9. Considerations for Using Quorum Disk Quorum Disk is a disk-based quorum daemon, qdiskd , that provides supplemental heuristics to determine node fitness. With heuristics you can determine factors that are important to the operation of the node in the event of a network partition. For example, in a four-node cluster with a 3:1 split, ordinarily, the three nodes automatically "win" because of the three-to-one majority. Under those circumstances, the one node is fenced. With qdiskd however, you can set up heuristics that allow the one node to win based on access to a critical resource (for example, a critical network path). If your cluster requires additional methods of determining node health, then you should configure qdiskd to meet those needs. Note Configuring qdiskd is not required unless you have special requirements for node health. An example of a special requirement is an "all-but-one" configuration. In an all-but-one configuration, qdiskd is configured to provide enough quorum votes to maintain quorum even though only one node is working. Important Overall, heuristics and other qdiskd parameters for your deployment depend on the site environment and special requirements needed. To understand the use of heuristics and other qdiskd parameters, see the qdisk (5) man page. If you require assistance understanding and using qdiskd for your site, contact an authorized Red Hat support representative. If you need to use qdiskd , you should take into account the following considerations: Cluster node votes When using Quorum Disk, each cluster node must have one vote. CMAN membership timeout value The qdiskd membership timeout value is automatically configured based on the CMAN membership timeout value (the time a node needs to be unresponsive before CMAN considers that node to be dead, and not a member). qdiskd also performs extra sanity checks to guarantee that it can operate within the CMAN timeout. If you find that you need to reset this value, you must take the following into account: The CMAN membership timeout value should be at least two times that of the qdiskd membership timeout value. The reason is because the quorum daemon must detect failed nodes on its own, and can take much longer to do so than CMAN. Other site-specific conditions may affect the relationship between the membership timeout values of CMAN and qdiskd . For assistance with adjusting the CMAN membership timeout value, contact an authorized Red Hat support representative. Fencing To ensure reliable fencing when using qdiskd , use power fencing. While other types of fencing can be reliable for clusters not configured with qdiskd , they are not reliable for a cluster configured with qdiskd . Maximum nodes A cluster configured with qdiskd supports a maximum of 16 nodes. The reason for the limit is because of scalability; increasing the node count increases the amount of synchronous I/O contention on the shared quorum disk device. Quorum disk device A quorum disk device should be a shared block device with concurrent read/write access by all nodes in a cluster. The minimum size of the block device is 10 Megabytes. Examples of shared block devices that can be used by qdiskd are a multi-port SCSI RAID array, a Fibre Channel RAID SAN, or a RAID-configured iSCSI target. You can create a quorum disk device with mkqdisk , the Cluster Quorum Disk Utility. For information about using the utility see the mkqdisk(8) man page. Note Using JBOD as a quorum disk is not recommended. 
A JBOD cannot provide dependable performance and therefore may not allow a node to write to it quickly enough. If a node is unable to write to a quorum disk device quickly enough, the node is falsely evicted from a cluster. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-qdisk-considerations-ca |
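As a concrete illustration of the pieces described above, the following sketch labels a shared block device as a quorum disk and wires it into cluster.conf with a single ping heuristic. The device path, label, scores, and intervals are placeholders to adapt to your site, not recommended values; for production tuning, consult the qdisk(5) man page or Red Hat support as noted above.

mkqdisk -c /dev/sdb1 -l myqdisk

<quorumd interval="1" tko="10" votes="3" label="myqdisk">
  <heuristic program="ping -c1 -w1 192.168.1.1" score="1" interval="2" tko="3"/>
</quorumd>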
Appendix F. Object Storage Daemon (OSD) configuration options | Appendix F. Object Storage Daemon (OSD) configuration options The following are Ceph Object Storage Daemon (OSD) configuration options that can be set during deployment. You can set these configuration options with the ceph config set osd CONFIGURATION_OPTION VALUE command. osd_uuid Description The universally unique identifier (UUID) for the Ceph OSD. Type UUID Default The UUID. Note The osd uuid applies to a single Ceph OSD. The fsid applies to the entire cluster. osd_data Description The path to the OSD's data. You must create the directory when deploying Ceph. Mount a drive for OSD data at this mount point. Type String Default /var/lib/ceph/osd/USDcluster-USDid osd_max_write_size Description The maximum size of a write in megabytes. Type 32-bit Integer Default 90 osd_client_message_size_cap Description The largest client data message allowed in memory. Type 64-bit Integer Unsigned Default 500MB default. 500*1024L*1024L osd_class_dir Description The class path for RADOS class plug-ins. Type String Default USDlibdir/rados-classes osd_max_scrubs Description The maximum number of simultaneous scrub operations for a Ceph OSD. Type 32-bit Int Default 1 osd_scrub_thread_timeout Description The maximum time in seconds before timing out a scrub thread. Type 32-bit Integer Default 60 osd_scrub_finalize_thread_timeout Description The maximum time in seconds before timing out a scrub finalize thread. Type 32-bit Integer Default 60*10 osd_scrub_begin_hour Description This restricts scrubbing to this hour of the day or later. Use osd_scrub_begin_hour = 0 and osd_scrub_end_hour = 0 to allow scrubbing the entire day. Along with osd_scrub_end_hour , they define a time window, in which the scrubs can happen. But a scrub is performed no matter whether the time window allows or not, as long as the placement group's scrub interval exceeds osd_scrub_max_interval . Type Integer Default 0 Allowed range [0,23] osd_scrub_end_hour Description This restricts scrubbing to the hour earlier than this. Use osd_scrub_begin_hour = 0 and osd_scrub_end_hour = 0 to allow scrubbing for the entire day. Along with osd_scrub_begin_hour , they define a time window, in which the scrubs can happen. But a scrub is performed no matter whether the time window allows or not, as long as the placement group's scrub interval exceeds osd_scrub_max_interval . Type Integer Default 0 Allowed range [0,23] osd_scrub_load_threshold Description The maximum load. Ceph will not scrub when the system load (as defined by the getloadavg() function) is higher than this number. Default is 0.5 . Type Float Default 0.5 osd_scrub_min_interval Description The minimum interval in seconds for scrubbing the Ceph OSD when the Red Hat Ceph Storage cluster load is low. Type Float Default Once per day. 60*60*24 osd_scrub_max_interval Description The maximum interval in seconds for scrubbing the Ceph OSD irrespective of cluster load. Type Float Default Once per week. 7*60*60*24 osd_scrub_interval_randomize_ratio Description Takes the ratio and randomizes the scheduled scrub between osd scrub min interval and osd scrub max interval . Type Float Default 0.5 . mon_warn_not_scrubbed Description Number of seconds after osd_scrub_interval to warn about any PGs that were not scrubbed. Type Integer Default 0 (no warning). osd_scrub_chunk_min Description The object store is partitioned into chunks which end on hash boundaries. 
For chunky scrubs, Ceph scrubs objects one chunk at a time with writes blocked for that chunk. The osd scrub chunk min setting represents the minimum number of chunks to scrub. Type 32-bit Integer Default 5 osd_scrub_chunk_max Description The maximum number of chunks to scrub. Type 32-bit Integer Default 25 osd_scrub_sleep Description The time to sleep between deep scrub operations. Type Float Default 0 (or off). osd_scrub_during_recovery Description Allows scrubbing during recovery. Type Bool Default false osd_scrub_invalid_stats Description Forces extra scrub to fix stats marked as invalid. Type Bool Default true osd_scrub_priority Description Controls queue priority of scrub operations versus client I/O. Type Unsigned 32-bit Integer Default 5 osd_requested_scrub_priority Description The priority set for user requested scrub on the work queue. If this value were to be smaller than osd_client_op_priority , it can be boosted to the value of osd_client_op_priority when scrub is blocking client operations. Type Unsigned 32-bit Integer Default 120 osd_scrub_cost Description Cost of scrub operations in megabytes for queue scheduling purposes. Type Unsigned 32-bit Integer Default 52428800 osd_deep_scrub_interval Description The interval for deep scrubbing, that is fully reading all data. The osd scrub load threshold parameter does not affect this setting. Type Float Default Once per week. 60*60*24*7 osd_deep_scrub_stride Description Read size when doing a deep scrub. Type 32-bit Integer Default 512 KB. 524288 mon_warn_not_deep_scrubbed Description Number of seconds after osd_deep_scrub_interval to warn about any PGs that were not scrubbed. Type Integer Default 0 (no warning) osd_deep_scrub_randomize_ratio Description The rate at which scrubs will randomly become deep scrubs (even before osd_deep_scrub_interval has passed). Type Float Default 0.15 or 15% osd_deep_scrub_update_digest_min_age Description How many seconds old objects must be before scrub updates the whole-object digest. Type Integer Default 7200 (120 hours) osd_deep_scrub_large_omap_object_key_threshold Description Warning when you encounter an object with more OMAP keys than this. Type Integer Default 200000 osd_deep_scrub_large_omap_object_value_sum_threshold Description Warning when you encounter an object with more OMAP key bytes than this. Type Integer Default 1 G osd_delete_sleep Description Time in seconds to sleep before the removal transaction. This throttles the placement group deletion process. Type Float Default 0.0 osd_delete_sleep_hdd Description Time in seconds to sleep before the removal transaction for HDDs. Type Float Default 5.0 osd_delete_sleep_ssd Description Time in seconds to sleep before the removal transaction for SSDs. Type Float Default 1.0 osd_delete_sleep_hybrid Description Time in seconds to sleep before the removal transaction when Ceph OSD data is on HDD and OSD journal or WAL and DB is on SSD. Type Float Default 1.0 osd_op_num_shards Description The number of shards for client operations. Type 32-bit Integer Default 0 osd_op_num_threads_per_shard Description The number of threads per shard for client operations. Type 32-bit Integer Default 0 osd_op_num_shards_hdd Description The number of shards for HDD operations. Type 32-bit Integer Default 5 osd_op_num_threads_per_shard_hdd Description The number of threads per shard for HDD operations. Type 32-bit Integer Default 1 osd_op_num_shards_ssd Description The number of shards for SSD operations. 
Type 32-bit Integer Default 8 osd_op_num_threads_per_shard_ssd Description The number of threads per shard for SSD operations. Type 32-bit Integer Default 2 osd_op_queue Description Sets the type of queue to be used for operation prioritizing within Ceph OSDs. Requires a restart of the OSD daemons. Type String Default wpq Valid choices wpq , mclock_scheduler , debug_random Important The mClock OSD scheduler is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details. osd_op_queue_cut_off Description Selects which priority operations are sent to the strict queue and which are sent to the normal queue. Requires a restart of the OSD daemons. The low setting sends all replication and higher operations to the strict queue, while the high option sends only replication acknowledgment operations and higher to the strict queue. The high setting helps when some Ceph OSDs in the cluster are very busy, especially when combined with the wpq option in the osd_op_queue setting. Ceph OSDs that are very busy handling replication traffic can deplete primary client traffic on these OSDs without these settings. Type String Default high Valid choices low , high , debug_random osd_client_op_priority Description The priority set for client operations. It is relative to osd recovery op priority . Type 32-bit Integer Default 63 Valid Range 1-63 osd_recovery_op_priority Description The priority set for recovery operations. It is relative to osd client op priority . Type 32-bit Integer Default 3 Valid Range 1-63 osd_op_thread_timeout Description The Ceph OSD operation thread timeout in seconds. Type 32-bit Integer Default 15 osd_op_complaint_time Description An operation becomes complaint worthy after the specified number of seconds have elapsed. Type Float Default 30 osd_disk_threads Description The number of disk threads, which are used to perform background disk intensive OSD operations such as scrubbing and snap trimming. Type 32-bit Integer Default 1 osd_op_history_size Description The maximum number of completed operations to track. Type 32-bit Unsigned Integer Default 20 osd_op_history_duration Description The oldest completed operation to track. Type 32-bit Unsigned Integer Default 600 osd_op_log_threshold Description How many operations logs to display at once. Type 32-bit Integer Default 5 osd_op_timeout Description The time in seconds after which running OSD operations time out. Type Integer Default 0 Important Do not set the osd op timeout option unless your clients can handle the consequences. For example, setting this parameter on clients running in virtual machines can lead to data corruption because the virtual machines interpret this timeout as a hardware failure. osd_max_backfills Description The maximum number of backfill operations allowed to or from a single OSD. Type 64-bit Unsigned Integer Default 1 osd_backfill_scan_min Description The minimum number of objects per backfill scan. Type 32-bit Integer Default 64 osd_backfill_scan_max Description The maximum number of objects per backfill scan. 
Type 32-bit Integer Default 512 osd_backfill_full_ratio Description Refuse to accept backfill requests when the Ceph OSD's full ratio is above this value. Type Float Default 0.85 osd_backfill_retry_interval Description The number of seconds to wait before retrying backfill requests. Type Double Default 30.000000 osd_map_dedup Description Enable removing duplicates in the OSD map. Type Boolean Default true osd_map_cache_size Description The size of the OSD map cache in megabytes. Type 32-bit Integer Default 50 osd_map_cache_bl_size Description The size of the in-memory OSD map cache in OSD daemons. Type 32-bit Integer Default 50 osd_map_cache_bl_inc_size Description The size of the in-memory OSD map cache incrementals in OSD daemons. Type 32-bit Integer Default 100 osd_map_message_max Description The maximum map entries allowed per MOSDMap message. Type 32-bit Integer Default 40 osd_snap_trim_thread_timeout Description The maximum time in seconds before timing out a snap trim thread. Type 32-bit Integer Default 60*60*1 osd_pg_max_concurrent_snap_trims Description The max number of parallel snap trims/PG. This controls how many objects per PG to trim at once. Type 32-bit Integer Default 2 osd_snap_trim_sleep Description Insert a sleep between every trim operation a PG issues. Type 32-bit Integer Default 0 osd_snap_trim_sleep_hdd Description Time in seconds to sleep before the snapshot trimming for HDDs. Type Float Default 5.0 osd_snap_trim_sleep_ssd Description Time in seconds to sleep before the snapshot trimming operation for SSD OSDs, including NVMe. Type Float Default 0.0 osd_snap_trim_sleep_hybrid Description Time in seconds to sleep before the snapshot trimming operation when OSD data is on an HDD and the OSD journal or WAL and DB is on an SSD. Type Float Default 2.0 osd_max_trimming_pgs Description The max number of trimming PGs Type 32-bit Integer Default 2 osd_backlog_thread_timeout Description The maximum time in seconds before timing out a backlog thread. Type 32-bit Integer Default 60*60*1 osd_default_notify_timeout Description The OSD default notification timeout (in seconds). Type 32-bit Integer Unsigned Default 30 osd_check_for_log_corruption Description Check log files for corruption. Can be computationally expensive. Type Boolean Default false osd_remove_thread_timeout Description The maximum time in seconds before timing out a remove OSD thread. Type 32-bit Integer Default 60*60 osd_command_thread_timeout Description The maximum time in seconds before timing out a command thread. Type 32-bit Integer Default 10*60 osd_command_max_records Description Limits the number of lost objects to return. Type 32-bit Integer Default 256 osd_auto_upgrade_tmap Description Uses tmap for omap on old objects. Type Boolean Default true osd_tmapput_sets_users_tmap Description Uses tmap for debugging only. Type Boolean Default false osd_preserve_trimmed_log Description Preserves trimmed log files, but uses more disk space. Type Boolean Default false osd_recovery_delay_start Description After peering completes, Ceph delays for the specified number of seconds before starting to recover objects. Type Float Default 0 osd_recovery_max_active Description The number of active recovery requests per OSD at one time. More requests will accelerate recovery, but the requests place an increased load on the cluster. Type 32-bit Integer Default 0 osd_recovery_max_active_hdd Description The number of active recovery requests per Ceph OSD at one time, if the primary device is HDD. 
Type Integer Default 3 osd_recovery_max_active_ssd Description The number of active recovery requests per Ceph OSD at one time, if the primary device is SSD. Type Integer Default 10 osd_recovery_sleep Description Time in seconds to sleep before the recovery or backfill operation. Increasing this value slows down recovery operation while client operations are less impacted. Type Float Default 0.0 osd_recovery_sleep_hdd Description Time in seconds to sleep before the recovery or backfill operation for HDDs. Type Float Default 0.1 osd_recovery_sleep_ssd Description Time in seconds to sleep before the recovery or backfill operation for SSDs. Type Float Default 0.0 osd_recovery_sleep_hybrid Description Time in seconds to sleep before the recovery or backfill operation when Ceph OSD data is on HDD and OSD journal or WAL and DB is on SSD. Type Float Default 0.025 osd_recovery_max_chunk Description The maximum size of a recovered chunk of data to push. Type 64-bit Integer Unsigned Default 8388608 osd_recovery_threads Description The number of threads for recovering data. Type 32-bit Integer Default 1 osd_recovery_thread_timeout Description The maximum time in seconds before timing out a recovery thread. Type 32-bit Integer Default 30 osd_recover_clone_overlap Description Preserves clone overlap during recovery. Should always be set to true . Type Boolean Default true rados_osd_op_timeout Description Number of seconds that RADOS waits for a response from the OSD before returning an error from a RADOS operation. A value of 0 means no limit. Type Double Default 0 | [
"IMPORTANT: Red Hat does not recommend changing the default."
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/configuration_guide/osd-object-storage-daemon-configuration-options_conf |
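For example, to change one of the options above at runtime with the ceph config set osd command mentioned in the introduction, and then confirm the stored value, you can run the following (the option and value shown are only an illustration, not a recommendation):

ceph config set osd osd_max_backfills 2
ceph config get osd osd_max_backfills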
Chapter 17. Installing a three-node cluster on AWS | Chapter 17. Installing a three-node cluster on AWS In OpenShift Container Platform version 4.13, you can install a three-node cluster on Amazon Web Services (AWS). A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource efficient cluster, for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. Note Deploying a three-node cluster using an AWS Marketplace image is not supported. 17.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 17.2. steps Installing a cluster on AWS with customizations Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates | [
"compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_aws/installing-aws-three-node |
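After the installation completes, a quick way to confirm the three-node topology is to list the nodes: you should see only the three control plane machines, each also carrying a worker role because the control plane is schedulable. The output below is indicative only; node names, ages, and the exact role labels and versions vary by cluster and release.

oc get nodes

NAME                     STATUS   ROLES                         AGE   VERSION
master-0.example.com     Ready    control-plane,master,worker   30m   v1.26.x
master-1.example.com     Ready    control-plane,master,worker   30m   v1.26.x
master-2.example.com     Ready    control-plane,master,worker   30m   v1.26.x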
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/replacing_devices/making-open-source-more-inclusive |
Chapter 13. Viewing Threads | Chapter 13. Viewing Threads You can view and monitor the state of threads. Procedure : Click the Runtime tab and then the Threads subtab. The Threads page lists active threads and stack trace details for each thread. By default, the thread list shows all threads in descending ID order. To sort the list by increasing ID, click the ID column label. Optionally, filter the list by thread state (for example, Blocked ) or by thread name. To drill down to detailed information for a specific thread, such as the lock class name and full stack trace for that thread, in the Actions column, click More . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/hawtio_diagnostic_console_guide/viewing-threads |
Chapter 2. Planning a deployment of AMQ Broker on OpenShift Container Platform | Chapter 2. Planning a deployment of AMQ Broker on OpenShift Container Platform This section describes how to plan an Operator-based deployment. Operators are programs that enable you to package, deploy, and manage OpenShift applications. Often, Operators automate common or complex tasks. Commonly, Operators are intended to provide: Consistent, repeatable installations Health checks of system components Over-the-air (OTA) updates Managed upgrades Operators enable you to make changes while your broker instances are running, because they are always listening for changes to the Custom Resource (CR) instances that you used to configure your deployment. When you make changes to a CR, the Operator reconciles the changes with the existing broker deployment and updates the deployment to reflect the changes. In addition, the Operator provides a message migration capability, which ensures the integrity of messaging data. When a broker in a clustered deployment shuts down due to failure or intentional scaledown of the deployment, this capability migrates messages to a broker Pod that is still running in the same broker cluster. 2.1. Overview of the AMQ Broker Operator Custom Resource Definitions In general, a Custom Resource Definition (CRD) is a schema of configuration items that you can modify for a custom OpenShift object deployed with an Operator. By creating a corresponding Custom Resource (CR) instance, you can specify values for configuration items in the CRD. If you are an Operator developer, what you expose through a CRD essentially becomes the API for how a deployed object is configured and used. You can directly access the CRD through regular HTTP curl commands, because the CRD gets exposed automatically through Kubernetes. You can install the AMQ Broker Operator using either the OpenShift command-line interface (CLI), or the Operator Lifecycle Manager, through the OperatorHub graphical interface. In either case, the AMQ Broker Operator includes the CRDs described below. Main broker CRD You deploy a CR instance based on this CRD to create and configure a broker deployment. Based on how you install the Operator, this CRD is: The broker_activemqartemis_crd file in the crds directory of the Operator installation archive (OpenShift CLI installation method) The ActiveMQArtemis CRD in the Custom Resource Definitions section of the OpenShift Container Platform web console (OperatorHub installation method) Address CRD You deploy a CR instance based on this CRD to create addresses and queues for a broker deployment. Based on how you install the Operator, this CRD is: The broker_activemqartemisaddress_crd file in the crds directory of the Operator installation archive (OpenShift CLI installation method) The ActiveMQArtemisAddresss CRD in the Custom Resource Definitions section of the OpenShift Container Platform web console (OperatorHub installation method) Security CRD You deploy a CR instance based on this CRD to create users and associate those users with security contexts. Based on how you install the Operator, this CRD is: The broker_activemqartemissecurity_crd file in the crds directory of the Operator installation archive (OpenShift CLI installation method) The ActiveMQArtemisSecurity CRD in the Custom Resource Definitions section of the OpenShift Container Platform web console (OperatorHub installation method). 
Scaledown CRD The Operator automatically creates a CR instance based on this CRD when instantiating a scaledown controller for message migration. Based on how you install the Operator, this CRD is: The broker_activemqartemisscaledown_crd file in the crds directory of the Operator installation archive (OpenShift CLI installation method) The ActiveMQArtemisScaledown CRD in the Custom Resource Definitions section of the OpenShift Container Platform web console (OperatorHub installation method). Additional resources To learn how to install the AMQ Broker Operator (and all included CRDs) using: The OpenShift CLI, see Section 3.2, "Installing the Operator using the CLI" The Operator Lifecycle Manager and OperatorHub graphical interface, see Section 3.3, "Installing the Operator using OperatorHub" . For complete configuration references to use when creating CR instances based on the main broker and address CRDs, see: Section 8.1.1, "Broker Custom Resource configuration reference" Section 8.1.2, "Address Custom Resource configuration reference" 2.2. Overview of the AMQ Broker Operator sample Custom Resources The AMQ Broker Operator archive that you download and extract during installation includes sample Custom Resource (CR) files in the deploy/crs directory. These sample CR files enable you to: Deploy a minimal broker without SSL or clustering. Define addresses. The broker Operator archive that you download and extract also includes CRs for example deployments in the deploy/examples directory, as listed below. artemis-basic-deployment.yaml Basic broker deployment. artemis-persistence-deployment.yaml Broker deployment with persistent storage. artemis-cluster-deployment.yaml Deployment of clustered brokers. artemis-persistence-cluster-deployment.yaml Deployment of clustered brokers with persistent storage. artemis-ssl-deployment.yaml Broker deployment with SSL security. artemis-ssl-persistence-deployment.yaml Broker deployment with SSL security and persistent storage. artemis-aio-journal.yaml Use of asynchronous I/O (AIO) with the broker journal. address-queue-create.yaml Address and queue creation. 2.3. Watch options for a Cluster Operator deployment When the Cluster Operator is running, it starts to watch for updates of AMQ Broker custom resources (CRs). You can choose to deploy the Cluster Operator to watch CRs from: A single namespace (the same namespace containing the Operator) All namespaces Note If you have already installed a version of the AMQ Broker Operator in a namespace on your cluster, Red Hat recommends that you do not install the AMQ Broker Operator 7.9 version to watch that namespace to avoid potential conflicts. 2.4. How the Operator chooses container images When you create a Custom Resource (CR) instance for a broker deployment based on at least version 7.9.4-opr-3 of the Operator, you do not need to explicitly specify broker or Init Container image names in the CR. By default, if you deploy a CR and do not explicitly specify container image values, the Operator automatically chooses the appropriate container images to use. Note If you install the Operator using the OpenShift command-line interface, the Operator installation archive includes a sample CR file called broker_activemqartemis_cr.yaml . In the sample CR, the spec.deploymentPlan.image property is included and set to its default value of placeholder . This value indicates that the Operator does not choose a broker container image until you deploy the CR. 
The spec.deploymentPlan.initImage property, which specifies the Init Container image, is not included in the broker_activemqartemis_cr.yaml sample CR file. If you do not explicitly include the spec.deploymentPlan.initImage property in your CR and specify a value, the Operator chooses an appropriate built-in Init Container image to use when you deploy the CR. How the Operator chooses these images is described in this section. To choose broker and Init Container images, the Operator first determines an AMQ Broker version to which the images should correspond. The Operator determines the version as follows: If the spec.upgrades.enabled property in the main CR is already set to true and the spec.version property specifies 7.7.0 , 7.8.0 , 7.8.1 , or 7.8.2 , the Operator uses that specified version. If spec.upgrades.enabled is not set to true , or spec.version is set to an AMQ Broker version earlier than 7.7.0 , the Operator uses the latest version of AMQ Broker (that is, 7.9.4 ). The Operator then detects your container platform. The AMQ Broker Operator can run on the following container platforms: OpenShift Container Platform (x86_64) OpenShift Container Platform on IBM Z (s390x) OpenShift Container Platform on IBM Power Systems (ppc64le) Based on the version of AMQ Broker and your container platform, the Operator then references two sets of environment variables in the operator.yaml configuration file. These sets of environment variables specify broker and Init Container images for various versions of AMQ Broker, as described in the following sub-sections. 2.4.1. Environment variables for broker container images The environment variables included in the operator.yaml configuration file for broker container images have the following naming convention: OpenShift Container Platform RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_ <AMQ_Broker_version_identifier> OpenShift Container Platform on IBM Z RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_ <AMQ_Broker_version_identifier> _s390x OpenShift Container Platform on IBM Power Systems RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_ <AMQ_Broker_version_identifier> _ppc64le Environment variable names for each supported container platform and specific AMQ Broker versions are shown in the table. Container platform Environment variable names OpenShift Container Platform RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_781 RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_782 RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_790 OpenShift Container Platform on IBM Z RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_781_s390x RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_782_s390x RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_790_s390x OpenShift Container Platform on IBM Power Systems RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_781_ppc64le RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_782_ppc64le RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_790_ppc64le The value of each environment variable specifies a broker container image that is available from Red Hat. For example: - name: RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_790 #value: registry.redhat.io/amq7/amq-broker-rhel8:7.9 value: registry.redhat.io/amq7/amq-broker-rhel8@sha256:71aef8faa1c661212ef8a7ef450656a250d95b51d33d1ce77f12ece27cdb9442 Therefore, based on an AMQ Broker version and your container platform, the Operator determines the applicable environment variable name. The Operator uses the corresponding image value when starting the broker container. 
Note In the operator.yaml file, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. 2.4.2. Environment variables for Init Container images The environment variables included in the operator.yaml configuration file for Init Container images have the following naming convention: RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_ <AMQ_Broker_version_identifier> Environment variable names for specific AMQ Broker versions are listed below. RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_781 RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_782 RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_790 The value of each environment variable specifies an Init Container image that is available from Red Hat. For example: - name: RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_790 #value: registry.redhat.io/amq7/amq-broker-init-rhel8:0.4-21 value: registry.redhat.io/amq7/amq-broker-init-rhel8@sha256:d327d358e6cfccac14becc486bce643e34970ecfc6c4d187a862425867a9ac8a Therefore, based on an AMQ Broker version, the Operator determines the applicable environment variable name. The Operator uses the corresponding image value when starting the Init Container. Note As shown in the example, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. Observe that the corresponding container image tag is not a floating tag in the form of 0.4-21 . This means that the container image used by the Operator remains fixed. The Operator does not automatically pull and use a new micro image version (that is, 0.4-21-n , where n is the latest micro version) when it becomes available from Red Hat. The environment variables included in the operator.yaml configuration file for Init Container images have the following naming convention: OpenShift Container Platform RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_ <AMQ_Broker_version_identifier> OpenShift Container Platform on IBM Z RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_s390x_ <AMQ_Broker_version_identifier> OpenShift Container Platform on IBM Power Systems RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_ppc64le_ <AMQ_Broker_version_identifier> Environment variable names for each supported container platform and specific AMQ Broker versions are shown in the table. Container platform Environment variable names OpenShift Container Platform RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_781 RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_782 RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_790 OpenShift Container Platform on IBM Z RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_s390x_781 RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_s390x_782 RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_s390x_790 OpenShift Container Platform on IBM Power Systems RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_ppc64le_781 RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_ppc64le_782 RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_ppc64le_790 The value of each environment variable specifies an Init Container image that is available from Red Hat. 
For example: - name: RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_790 #value: registry.redhat.io/amq7/amq-broker-init-rhel8:0.4-21-1 value: registry.redhat.io/amq7/amq-broker-init-rhel8@sha256:d327d358e6cfccac14becc486bce643e34970ecfc6c4d187a862425867a9ac8a Therefore, based on an AMQ Broker version and your container platform, the Operator determines the applicable environment variable name. The Operator uses the corresponding image value when starting the Init Container. Note As shown in the example, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. Observe that the corresponding container image tag is not a floating tag in the form of 0.4-21 . This means that the container image used by the Operator remains fixed. The Operator does not automatically pull and use a new micro image version (that is, 0.4-21-n , where n is the latest micro version) when it becomes available from Red Hat. Additional resources To learn how to use the AMQ Broker Operator to create a broker deployment, see Chapter 3, Deploying AMQ Broker on OpenShift Container Platform using the AMQ Broker Operator . For more information about how the Operator uses an Init Container to generate the broker configuration, see Section 4.1, "How the Operator generates the broker configuration" . To learn how to build and specify a custom Init Container image, see Section 4.6, "Specifying a custom Init Container image" . 2.5. Operator deployment notes This section describes some important considerations when planning an Operator-based deployment Deploying the Custom Resource Definitions (CRDs) that accompany the AMQ Broker Operator requires cluster administrator privileges for your OpenShift cluster. When the Operator is deployed, non-administrator users can create broker instances via corresponding Custom Resources (CRs). To enable regular users to deploy CRs, the cluster administrator must first assign roles and permissions to the CRDs. For more information, see Creating cluster roles for Custom Resource Definitions in the OpenShift Container Platform documentation. When you update your cluster with the CRDs for the latest Operator version, this update affects all projects in the cluster. Any broker Pods deployed from versions of the Operator might become unable to update their status. When you click the Logs tab of a running broker Pod in the OpenShift Container Platform web console, you see messages indicating that 'UpdatePodStatus' has failed. However, the broker Pods and Operator in that project continue to work as expected. To fix this issue for an affected project, you must also upgrade that project to use the latest version of the Operator. While you can create more than one broker deployment in a given OpenShift project by deploying multiple Custom Resource (CR) instances, typically, you create a single broker deployment in a project, and then deploy multiple CR instances for addresses. Red Hat recommends you create broker deployments in separate projects. If you intend to deploy brokers with persistent storage and do not have container-native storage in your OpenShift cluster, you need to manually provision Persistent Volumes (PVs) and ensure that these are available to be claimed by the Operator. 
For example, if you want to create a cluster of two brokers with persistent storage (that is, by setting persistenceEnabled=true in your CR), you need to have two persistent volumes available. By default, each broker instance requires storage of 2 GiB. If you specify persistenceEnabled=false in your CR, the deployed brokers use ephemeral storage. Ephemeral storage means that every time you restart the broker Pods, any existing data is lost. For more information about provisioning persistent storage in OpenShift Container Platform, see: Understanding persistent storage (OpenShift Container Platform 4.5) You must add configuration for the items listed below to the main broker CR instance before deploying the CR for the first time. You cannot add configuration for these items to a broker deployment that is already running. The size of the Persistent Volume Claim (PVC) required by each broker in a deployment for persistent storage Limits and requests for memory and CPU for each broker in a deployment The procedures in the next section show you how to install the Operator and use Custom Resources (CRs) to create broker deployments on OpenShift Container Platform. When you have successfully completed the procedures, you will have the Operator running in an individual Pod. Each broker instance that you create will run as an individual Pod in a StatefulSet in the same project as the Operator. Later, you will see how to use a dedicated addressing CR to define addresses in your broker deployment. | [
"- name: RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_790 #value: registry.redhat.io/amq7/amq-broker-rhel8:7.9 value: registry.redhat.io/amq7/amq-broker-rhel8@sha256:71aef8faa1c661212ef8a7ef450656a250d95b51d33d1ce77f12ece27cdb9442",
"- name: RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_790 #value: registry.redhat.io/amq7/amq-broker-init-rhel8:0.4-21 value: registry.redhat.io/amq7/amq-broker-init-rhel8@sha256:d327d358e6cfccac14becc486bce643e34970ecfc6c4d187a862425867a9ac8a",
"- name: RELATED_IMAGE_ActiveMQ_Artemis_Broker_Init_790 #value: registry.redhat.io/amq7/amq-broker-init-rhel8:0.4-21-1 value: registry.redhat.io/amq7/amq-broker-init-rhel8@sha256:d327d358e6cfccac14becc486bce643e34970ecfc6c4d187a862425867a9ac8a"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/deploying_amq_broker_on_openshift/assembly-br-planning-a-deployment_broker-ocp |
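As an aside to the AMQ Broker entry above, the image-selection behavior can be checked from the command line. The following shell sketch is an assumption-based illustration, not part of the source text: the Operator deployment name amq-broker-operator, the project my-broker-project, and the broker Pod name ex-aao-ss-0 come from the 7.9 sample files and may differ in your installation.
# List the RELATED_IMAGE_* variables the Operator chooses from
$ oc set env deployment/amq-broker-operator -n my-broker-project --list | grep RELATED_IMAGE
# After deploying a CR whose image is left as "placeholder", confirm the chosen images
$ oc get pod ex-aao-ss-0 -n my-broker-project \
    -o jsonpath='{.spec.initContainers[*].image}{"\n"}{.spec.containers[*].image}{"\n"}'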
Chapter 23. Removing a Subsystem | Chapter 23. Removing a Subsystem Removing a subsystem requires specifying the subsystem type and the name of the server in which the subsystem is running. This command removes all files associated with the subsystem (without removing the subsystem packages). The -s option specifies the subsystem to be removed (such as CA, KRA, OCSP, TKS, or TPS). The -i option specifies the instance name, such as pki-tomcat . Example 23.1. Removing a CA Subsystem The pkidestroy utility removes the subsystem and any related files, such as the certificate databases, certificates, keys, and associated users. It does not uninstall the subsystem packages. If the subsystem is the last subsystem on the server instance, the server instance is removed as well. | [
"pkidestroy -s subsystem_type -i instance_name",
"pkidestroy -s CA -i pki-tomcat Loading deployment configuration from /var/lib/pki/pki-tomcat/ca/registry/ca/deployment.cfg. Uninstalling CA from /var/lib/pki/pki-tomcat. Removed symlink /etc/systemd/system/multi-user.target.wants/pki-tomcatd.target. Uninstallation complete."
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/uninstalling_certificate_system_subsystems-removing_a_subsystem_instance |
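If one instance hosts several subsystems, pkidestroy can be run once per subsystem. The loop below is a hypothetical sketch rather than part of the documented procedure; the subsystem list and the pki-tomcat instance name are examples only.
# Remove example subsystems from a shared instance, leaving the CA for last
$ for subsystem in TPS TKS OCSP KRA CA; do
    pkidestroy -s "$subsystem" -i pki-tomcat
  done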
Chapter 2. Setting up RHEL image builder | Chapter 2. Setting up RHEL image builder Use RHEL image builder to create your customized RHEL for Edge images. After you install RHEL image builder on a RHEL system, RHEL image builder is available as an application in RHEL web console. You can also access RHEL image builder on the command line by using the composer-cli tool. Note It is recommended to install RHEL image builder on a virtual machine. 2.1. Image builder system requirements The environment where RHEL image builder runs, for example a virtual machine, must meet the requirements that are listed in the following table. Note Running RHEL image builder inside a container is not supported. Table 2.1. Image builder system requirements Parameter Minimal Required Value System type A dedicated virtual machine Processor 2 cores Memory 4 GiB Disk space 20 GiB Access privileges Administrator level (root) Network Connectivity to Internet Note The 20 GiB disk space requirement is enough to install and run RHEL image builder in the host. To build and deploy image builds, you must allocate additional dedicated disk space. 2.2. Installing RHEL image builder To install RHEL image builder on a dedicated virtual machine, follow these steps: Prerequisites The virtual machine is created and is powered on. You have installed RHEL and you have subscribed to RHSM or Red Hat Satellite. You have enabled the BaseOS and AppStream repositories to be able to install the RHEL image builder packages. Procedure Install the following packages on the virtual machine. osbuild-composer composer-cli cockpit-composer bash-completion firewalld RHEL image builder is installed as an application in RHEL web console. Reboot the virtual machine Configure the system firewall to allow access to the web console: Enable RHEL image builder. The osbuild-composer and cockpit services start automatically on first access. Load the shell configuration script so that the autocomplete feature for the composer-cli command starts working immediately without reboot: Additional resources Managing repositories | [
"dnf install osbuild-composer composer-cli cockpit-composer bash-completion firewalld",
"firewall-cmd --add-service=cockpit && firewall-cmd --add-service=cockpit --permanent",
"systemctl enable osbuild-composer.socket cockpit.socket --now",
"source /etc/bash_completion.d/composer-cli"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_installing_and_managing_rhel_for_edge_images/setting-up-image-builder_composing-installing-managing-rhel-for-edge-images |
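A brief post-install check can confirm that the image builder services answer before you start composing images. These commands are a hedged addition and are not part of the documented procedure.
# Verify the API socket and query the composer back end
$ sudo systemctl is-active osbuild-composer.socket
$ sudo composer-cli status show
# List the available package sources and any existing blueprints
$ sudo composer-cli sources list
$ sudo composer-cli blueprints list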
Chapter 9. Removing the Red Hat build of OpenTelemetry | Chapter 9. Removing the Red Hat build of OpenTelemetry The steps for removing the Red Hat build of OpenTelemetry from an OpenShift Container Platform cluster are as follows: Shut down all Red Hat build of OpenTelemetry pods. Remove any OpenTelemetryCollector instances. Remove the Red Hat build of OpenTelemetry Operator. 9.1. Removing an OpenTelemetry Collector instance by using the web console You can remove an OpenTelemetry Collector instance in the Administrator view of the web console. Prerequisites You are logged in to the web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Go to Operators Installed Operators Red Hat build of OpenTelemetry Operator OpenTelemetryInstrumentation or OpenTelemetryCollector . To remove the relevant instance, select Delete ... Delete . Optional: Remove the Red Hat build of OpenTelemetry Operator. 9.2. Removing an OpenTelemetry Collector instance by using the CLI You can remove an OpenTelemetry Collector instance on the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run oc login : $ oc login --username=<your_username> Procedure Get the name of the OpenTelemetry Collector instance by running the following command: $ oc get deployments -n <project_of_opentelemetry_instance> Remove the OpenTelemetry Collector instance by running the following command: $ oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance> Optional: Remove the Red Hat build of OpenTelemetry Operator. Verification To verify successful removal of the OpenTelemetry Collector instance, run oc get deployments again: $ oc get deployments -n <project_of_opentelemetry_instance> 9.3. Additional resources Deleting Operators from a cluster Getting started with the OpenShift CLI | [
"oc login --username=<your_username>",
"oc get deployments -n <project_of_opentelemetry_instance>",
"oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance>",
"oc get deployments -n <project_of_opentelemetry_instance>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/red_hat_build_of_opentelemetry/dist-tracing-otel-removing |
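Before removing the Operator itself, it can help to confirm that no collector instances remain anywhere in the cluster. This check is an assumption-based addition, not part of the documented steps.
# List any remaining OpenTelemetryCollector instances across all namespaces
$ oc get opentelemetrycollectors --all-namespaces
# Remove a leftover instance, then re-run the listing to confirm it is gone
$ oc delete opentelemetrycollectors <instance_name> -n <namespace>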
8.3. Using ID Views to Define AD User Attributes | 8.3. Using ID Views to Define AD User Attributes With ID views, you can change the user attribute values defined in AD. For a complete list of the attributes, see Attributes an ID View Can Override . For example: If you are managing a mixed Linux-Windows environment and want to manually define POSIX attributes or SSH login attributes for an AD user, but the AD policy does not allow it, you can use ID views to override the attribute values. When the AD user authenticates to clients running SSSD or authenticates using a compat LDAP tree, the new values are used in the authentication process. Note Only IdM users can manage ID views. AD users cannot. The process for overriding the attribute values follows these steps: Create a new ID view. Add a user ID override in the ID view, and specify the required attribute value. Apply the ID view to a specific host. For details on how to perform these steps, see Defining a Different Attribute Value for a User Account on Different Hosts in the Linux Domain Identity, Authentication, and Policy Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/id-views-store-host-specific
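For readers who prefer the command line, the same three steps can be sketched with the ipa utility. The view, user, and host names below are placeholders, and the referenced guide remains the authoritative procedure.
# Create an ID view, override attributes for an AD user, and apply the view to a host
$ ipa idview-add example_view --desc="POSIX overrides for AD users"
$ ipa idoverrideuser-add example_view [email protected] --uid=20001 --shell=/bin/bash
$ ipa idview-apply example_view --hosts=client1.idm.example.com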
Chapter 4. Certificate types and descriptions | Chapter 4. Certificate types and descriptions 4.1. User-provided certificates for the API server 4.1.1. Purpose The API server is accessible by clients external to the cluster at api.<cluster_name>.<base_domain> . You might want clients to access the API server at a different hostname or without the need to distribute the cluster-managed certificate authority (CA) certificates to the clients. The administrator must set a custom default certificate to be used by the API server when serving content. 4.1.2. Location The user-provided certificates must be provided in a kubernetes.io/tls type Secret in the openshift-config namespace. Update the API server cluster configuration, the apiserver/cluster resource, to enable the use of the user-provided certificate. 4.1.3. Management User-provided certificates are managed by the user. 4.1.4. Expiration API server client certificate expiration is less than five minutes. User-provided certificates are managed by the user. 4.1.5. Customization Update the secret containing the user-managed certificate as needed. Additional resources Adding API server certificates 4.2. Proxy certificates 4.2.1. Purpose Proxy certificates allow users to specify one or more custom certificate authority (CA) certificates used by platform components when making egress connections. The trustedCA field of the Proxy object is a reference to a config map that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, image-registry-operator calls an external image registry to download images. If trustedCA is not specified, only the RHCOS trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the RHCOS trust bundle if you want to use your own certificate infrastructure. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from required key ca-bundle.crt and copying it to a config map named trusted-ca-bundle in the openshift-config-managed namespace. The namespace for the config map referenced by trustedCA is openshift-config : apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- Additional resources Configuring the cluster-wide proxy 4.2.2. Managing proxy certificates during installation The additionalTrustBundle value of the installer configuration is used to specify any proxy-trusted CA certificates during installation. For example: USD cat install-config.yaml Example output ... proxy: httpProxy: http://<username:[email protected]:123/> httpsProxy: http://<username:[email protected]:123/> noProxy: <123.example.com,10.88.0.0/16> additionalTrustBundle: | -----BEGIN CERTIFICATE----- <MY_HTTPS_PROXY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 4.2.3. Location The user-provided trust bundle is represented as a config map. The config map is mounted into the file system of platform components that make egress HTTPS calls. Typically, Operators mount the config map to /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem , but this is not required by the proxy. A proxy can modify or inspect the HTTPS connection. In either case, the proxy must generate and sign a new certificate for the connection. 
Complete proxy support means connecting to the specified proxy and trusting any signatures it has generated. Therefore, it is necessary to let the user specify a trusted root, such that any certificate chain connected to that trusted root is also trusted. If you use the RHCOS trust bundle, place CA certificates in /etc/pki/ca-trust/source/anchors . For more information, see Using shared system certificates in the Red Hat Enterprise Linux (RHEL) Securing networks document. 4.2.4. Expiration The user sets the expiration term of the user-provided trust bundle. The default expiration term is defined by the CA certificate itself. It is up to the CA administrator to configure this for the certificate before it can be used by OpenShift Container Platform or RHCOS. Note Red Hat does not monitor for when CAs expire. However, due to the long life of CAs, this is generally not an issue. However, you might need to periodically update the trust bundle. 4.2.5. Services By default, all platform components that make egress HTTPS calls will use the RHCOS trust bundle. If trustedCA is defined, it will also be used. Any service that is running on the RHCOS node is able to use the trust bundle of the node. 4.2.6. Management These certificates are managed by the system and not the user. 4.2.7. Customization Updating the user-provided trust bundle consists of either: updating the PEM-encoded certificates in the config map referenced by trustedCA, or creating a config map in the namespace openshift-config that contains the new trust bundle and updating trustedCA to reference the name of the new config map. The mechanism for writing CA certificates to the RHCOS trust bundle is exactly the same as writing any other file to RHCOS, which is done through the use of machine configs. When the Machine Config Operator (MCO) applies the new machine config that contains the new CA certificates, it runs the program update-ca-trust afterwards and restarts the CRI-O service on the RHCOS nodes. This update does not require a node reboot. Restarting the CRI-O service automatically updates the trust bundle with the new CA certificates. 
For example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-examplecorp-ca-cert spec: config: ignition: version: 3.1.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= mode: 0644 overwrite: true path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt The trust store of machines must also support updating the trust store of nodes. 4.2.8. Renewal There are no Operators that can auto-renew certificates on the RHCOS nodes. Note Red Hat does not monitor for when CAs expire. However, due to the long life of CAs, this is generally not an issue. However, you might need to periodically update the trust bundle. 4.3. Service CA certificates 4.3.1. Purpose service-ca is an Operator that creates a self-signed CA when an OpenShift Container Platform cluster is deployed. 4.3.2. Expiration A custom expiration term is not supported. The self-signed CA is stored in a secret with qualified name service-ca/signing-key in fields tls.crt (certificate(s)), tls.key (private key), and ca-bundle.crt (CA bundle). Other services can request a service serving certificate by annotating a service resource with service.beta.openshift.io/serving-cert-secret-name: <secret name> . In response, the Operator generates a new certificate, as tls.crt , and private key, as tls.key to the named secret. The certificate is valid for two years. Other services can request that the CA bundle for the service CA be injected into API service or config map resources by annotating with service.beta.openshift.io/inject-cabundle: true to support validating certificates generated from the service CA. 
In response, the Operator writes its current CA bundle to the CABundle field of an API service or as service-ca.crt to a config map. As of OpenShift Container Platform 4.3.5, automated rotation is supported and is backported to some 4.2.z and 4.3.z releases. For any release supporting automated rotation, the service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA. The service CA expiration of 26 months is longer than the expected upgrade interval for a supported OpenShift Container Platform cluster, such that non-control plane consumers of service CA certificates will be refreshed after CA rotation and prior to the expiration of the pre-rotation CA. Warning A manually-rotated service CA does not maintain trust with the service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA. 4.3.3. Management These certificates are managed by the system and not the user. 4.3.4. Services Services that use service CA certificates include: cluster-autoscaler-operator cluster-monitoring-operator cluster-authentication-operator cluster-image-registry-operator cluster-ingress-operator cluster-kube-apiserver-operator cluster-kube-controller-manager-operator cluster-kube-scheduler-operator cluster-networking-operator cluster-openshift-apiserver-operator cluster-openshift-controller-manager-operator cluster-samples-operator cluster-storage-operator machine-config-operator console-operator insights-operator machine-api-operator operator-lifecycle-manager CSI driver operators This is not a comprehensive list. Additional resources Manually rotate service serving certificates Securing service traffic using service serving certificate secrets 4.4. Node certificates 4.4.1. Purpose Node certificates are signed by the cluster and allow the kubelet to communicate with the Kubernetes API server. They come from the kubelet CA certificate, which is generated by the bootstrap process. 4.4.2. Location The kubelet CA certificate is located in the kube-apiserver-to-kubelet-signer secret in the openshift-kube-apiserver-operator namespace. 4.4.3. Management These certificates are managed by the system and not the user. 4.4.4. Expiration Node certificates are automatically rotated after 292 days and expire after 365 days. 4.4.5. Renewal The Kubernetes API Server Operator automatically generates a new kube-apiserver-to-kubelet-signer CA certificate at 292 days. The old CA certificate is removed after 365 days. Nodes are not rebooted when a kubelet CA certificate is renewed or removed. Cluster administrators can manually renew the kubelet CA certificate by running the following command: USD oc annotate -n openshift-kube-apiserver-operator secret kube-apiserver-to-kubelet-signer auth.openshift.io/certificate-not-after- Additional resources Working with nodes 4.5. Bootstrap certificates 4.5.1. Purpose The kubelet, in OpenShift Container Platform 4 and later, uses the bootstrap certificate located in /etc/kubernetes/kubeconfig to initially bootstrap. This is followed by the bootstrap initialization process and authorization of the kubelet to create a CSR . In that process, the kubelet generates a CSR while communicating over the bootstrap channel. The controller manager signs the CSR, resulting in a certificate that the kubelet manages. 4.5.2. 
Management These certificates are managed by the system and not the user. 4.5.3. Expiration This bootstrap certificate is valid for 10 years. The kubelet-managed certificate is valid for one year and rotates automatically at around the 80 percent mark of that one year. Note OpenShift Lifecycle Manager (OLM) does not update the bootstrap certificate. 4.5.4. Customization You cannot customize the bootstrap certificates. 4.6. etcd certificates 4.6.1. Purpose etcd certificates are signed by the etcd-signer; they come from a certificate authority (CA) that is generated by the bootstrap process. 4.6.2. Expiration The CA certificates are valid for 10 years. The peer, client, and server certificates are valid for three years. 4.6.3. Management These certificates are only managed by the system and are automatically rotated. 4.6.4. Services etcd certificates are used for encrypted communication between etcd member peers, as well as encrypted client traffic. The following certificates are generated and used by etcd and other processes that communicate with etcd: Peer certificates: Used for communication between etcd members. Client certificates: Used for encrypted server-client communication. Client certificates are currently used by the API server only, and no other service should connect to etcd directly except for the proxy. Client secrets ( etcd-client , etcd-metric-client , etcd-metric-signer , and etcd-signer ) are added to the openshift-config , openshift-monitoring , and openshift-kube-apiserver namespaces. Server certificates: Used by the etcd server for authenticating client requests. Metric certificates: All metric consumers connect to proxy with metric-client certificates. Additional resources Restoring to a cluster state 4.7. OLM certificates 4.7.1. Management All certificates for Operator Lifecycle Manager (OLM) components ( olm-operator , catalog-operator , packageserver , and marketplace-operator ) are managed by the system. When installing Operators that include webhooks or API services in their ClusterServiceVersion (CSV) object, OLM creates and rotates the certificates for these resources. Certificates for resources in the openshift-operator-lifecycle-manager namespace are managed by OLM. OLM will not update the certificates of Operators that it manages in proxy environments. These certificates must be managed by the user using the subscription config. steps Configuring proxy support in Operator Lifecycle Manager 4.7.2. Additional resources Proxy certificates Replacing the default ingress certificate Updating the CA bundle 4.8. Aggregated API client certificates 4.8.1. Purpose Aggregated API client certificates are used to authenticate the KubeAPIServer when connecting to the Aggregated API Servers. 4.8.2. Management These certificates are managed by the system and not the user. 4.8.3. Expiration This CA is valid for 30 days. The managed client certificates are valid for 30 days. CA and client certificates are rotated automatically through the use of controllers. 4.8.4. Customization You cannot customize the aggregated API server certificates. 4.9. Machine Config Operator certificates 4.9.1. Purpose This certificate authority is used to secure connections from nodes to Machine Config Server (MCS) during initial provisioning. There are two certificates: . A self-signed CA, the MCS CA . A derived certificate, the MCS cert 4.9.1.1. Provisioning details OpenShift Container Platform installations that use Red Hat Enterprise Linux CoreOS (RHCOS) are installed by using Ignition. 
This process is split into two parts: An Ignition config is created that references a URL for the full configuration served by the MCS. For user-provisioned infrastructure installation methods, the Ignition config manifests as a worker.ign file created by the openshift-install command. For installer-provisioned infrastructure installation methods that use the Machine API Operator, this configuration appears as the worker-user-data secret. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Additional resources Understanding the Machine Config Operator . About the OpenShift SDN network plugin . 4.9.1.2. Provisioning chain of trust The MCS CA is injected into the Ignition configuration under the security.tls.certificateAuthorities configuration field. The MCS then provides the complete configuration using the MCS cert presented by the web server. The client validates that the MCS cert presented by the server has a chain of trust to an authority it recognizes. In this case, the MCS CA is that authority, and it signs the MCS cert. This ensures that the client is accessing the correct server. The client in this case is Ignition running on a machine in the initramfs. 4.9.1.3. Key material inside a cluster The MCS CA appears in the cluster as a config map in the kube-system namespace, root-ca object, with ca.crt key. The private key is not stored in the cluster and is discarded after the installation completes. The MCS cert appears in the cluster as a secret in the openshift-machine-config-operator namespace and machine-config-server-tls object with the tls.crt and tls.key keys. 4.9.2. Management At this time, directly modifying either of these certificates is not supported. 4.9.3. Expiration The MCS CA is valid for 10 years. The issued serving certificates are valid for 10 years. 4.9.4. Customization You cannot customize the Machine Config Operator certificates. 4.10. User-provided certificates for default ingress 4.10.1. Purpose Applications are usually exposed at <route_name>.apps.<cluster_name>.<base_domain> . The <cluster_name> and <base_domain> come from the installation config file. <route_name> is the host field of the route, if specified, or the route name. For example, hello-openshift-default.apps.username.devcluster.openshift.com . hello-openshift is the name of the route and the route is in the default namespace. You might want clients to access the applications without the need to distribute the cluster-managed CA certificates to the clients. The administrator must set a custom default certificate when serving application content. Warning The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use operator-generated default certificates in production clusters. 4.10.2.
Location The user-provided certificates must be provided in a tls type Secret resource in the openshift-ingress namespace. Update the IngressController CR in the openshift-ingress-operator namespace to enable the use of the user-provided certificate. For more information on this process, see Setting a custom default certificate . 4.10.3. Management User-provided certificates are managed by the user. 4.10.4. Expiration User-provided certificates are managed by the user. 4.10.5. Services Applications deployed on the cluster use user-provided certificates for default ingress. 4.10.6. Customization Update the secret containing the user-managed certificate as needed. Additional resources Replacing the default ingress certificate 4.11. Ingress certificates 4.11.1. Purpose The Ingress Operator uses certificates for: Securing access to metrics for Prometheus. Securing access to routes. 4.11.2. Location To secure access to Ingress Operator and Ingress Controller metrics, the Ingress Operator uses service serving certificates. The Operator requests a certificate from the service-ca controller for its own metrics, and the service-ca controller puts the certificate in a secret named metrics-tls in the openshift-ingress-operator namespace. Additionally, the Ingress Operator requests a certificate for each Ingress Controller, and the service-ca controller puts the certificate in a secret named router-metrics-certs-<name> , where <name> is the name of the Ingress Controller, in the openshift-ingress namespace. Each Ingress Controller has a default certificate that it uses for secured routes that do not specify their own certificates. Unless you specify a custom certificate, the Operator uses a self-signed certificate by default. The Operator uses its own self-signed signing certificate to sign any default certificate that it generates. The Operator generates this signing certificate and puts it in a secret named router-ca in the openshift-ingress-operator namespace. When the Operator generates a default certificate, it puts the default certificate in a secret named router-certs-<name> (where <name> is the name of the Ingress Controller) in the openshift-ingress namespace. Warning The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters. 4.11.3. Workflow Figure 4.1. Custom certificate workflow Figure 4.2. Default certificate workflow An empty defaultCertificate field causes the Ingress Operator to use its self-signed CA to generate a serving certificate for the specified domain. The default CA certificate and key generated by the Ingress Operator. Used to sign Operator-generated default serving certificates. In the default workflow, the wildcard default serving certificate, created by the Ingress Operator and signed using the generated default CA certificate. In the custom workflow, this is the user-provided certificate. The router deployment. Uses the certificate in secrets/router-certs-default as its default front-end server certificate. In the default workflow, the contents of the wildcard default serving certificate (public and private parts) are copied here to enable OAuth integration. In the custom workflow, this is the user-provided certificate. The public (certificate) part of the default serving certificate. Replaces the configmaps/router-ca resource. 
The user updates the cluster proxy configuration with the CA certificate that signed the ingresscontroller serving certificate. This enables components like auth , console , and the registry to trust the serving certificate. The cluster-wide trusted CA bundle containing the combined Red Hat Enterprise Linux CoreOS (RHCOS) and user-provided CA bundles or an RHCOS-only bundle if a user bundle is not provided. The custom CA certificate bundle, which instructs other components (for example, auth and console ) to trust an ingresscontroller configured with a custom certificate. The trustedCA field is used to reference the user-provided CA bundle. The Cluster Network Operator injects the trusted CA bundle into the proxy-ca config map. OpenShift Container Platform 4.14 and newer use default-ingress-cert . 4.11.4. Expiration The expiration terms for the Ingress Operator's certificates are as follows: The expiration date for metrics certificates that the service-ca controller creates is two years after the date of creation. The expiration date for the Operator's signing certificate is two years after the date of creation. The expiration date for default certificates that the Operator generates is two years after the date of creation. You cannot specify custom expiration terms on certificates that the Ingress Operator or service-ca controller creates. You cannot specify expiration terms when installing OpenShift Container Platform for certificates that the Ingress Operator or service-ca controller creates. 4.11.5. Services Prometheus uses the certificates that secure metrics. The Ingress Operator uses its signing certificate to sign default certificates that it generates for Ingress Controllers for which you do not set custom default certificates. Cluster components that use secured routes may use the default Ingress Controller's default certificate. Ingress to the cluster via a secured route uses the default certificate of the Ingress Controller by which the route is accessed unless the route specifies its own certificate. 4.11.6. Management Ingress certificates are managed by the user. See Replacing the default ingress certificate for more information. 4.11.7. Renewal The service-ca controller automatically rotates the certificates that it issues. However, it is possible to use oc delete secret <secret> to manually rotate service serving certificates. The Ingress Operator does not rotate its own signing certificate or the default certificates that it generates. Operator-generated default certificates are intended as placeholders for custom default certificates that you configure. 4.12. Monitoring and OpenShift Logging Operator component certificates 4.12.1. Expiration Monitoring components secure their traffic with service CA certificates. These certificates are valid for 2 years and are replaced automatically on rotation of the service CA, which is every 13 months. If the certificate lives in the openshift-monitoring or openshift-logging namespace, it is system managed and rotated automatically. 4.12.2. Management These certificates are managed by the system and not the user. 4.13. Control plane certificates 4.13.1. Location Control plane certificates are included in these namespaces: openshift-config-managed openshift-kube-apiserver openshift-kube-apiserver-operator openshift-kube-controller-manager openshift-kube-controller-manager-operator openshift-kube-scheduler 4.13.2. Management Control plane certificates are managed by the system and rotated automatically. 
In the rare case that your control plane certificates have expired, see Recovering from expired control plane certificates . | [
"apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----",
"cat install-config.yaml",
"proxy: httpProxy: http://<username:[email protected]:123/> httpsProxy: http://<username:[email protected]:123/> noProxy: <123.example.com,10.88.0.0/16> additionalTrustBundle: | -----BEGIN CERTIFICATE----- <MY_HTTPS_PROXY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-examplecorp-ca-cert spec: config: ignition: version: 3.1.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= mode: 0644 overwrite: true path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt",
"oc annotate -n openshift-kube-apiserver-operator secret kube-apiserver-to-kubelet-signer auth.openshift.io/certificate-not-after-"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_and_compliance/certificate-types-and-descriptions |
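As a small illustration of the service CA annotations described in section 4.3 of the entry above, the following sketch requests a serving certificate for a service, injects the CA bundle into a config map, and inspects the issued certificate. The service, secret, config map, and namespace names are placeholders.
# Ask the service-ca controller to issue a serving certificate into a secret
$ oc annotate service my-service -n my-namespace \
    service.beta.openshift.io/serving-cert-secret-name=my-service-tls
# Inject the service CA bundle into a config map so clients can validate the certificate
$ oc annotate configmap my-trust-bundle -n my-namespace \
    service.beta.openshift.io/inject-cabundle=true
# Check the validity window of the issued certificate
$ oc get secret my-service-tls -n my-namespace -o jsonpath='{.data.tls\.crt}' \
    | base64 -d | openssl x509 -noout -dates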
20.7. Deauthorizing a Client | 20.7. Deauthorizing a Client To revoke the authorization of a client to access the Red Hat Gluster Storage trusted storage pool, you can do any of the following: Remove an authorized client from the allowed list Revoke SSL/TLS certificate authorization through a certificate revocation list (CRL) 20.7.1. To Remove an Authorized Client From the Allowed List Procedure 20.12. Removing an authorized client from the allowed list List currently authorized clients and servers For example, the following command shows that there are three authorized servers and five authorized clients. Remove clients to deauthorize from the output For example, if you want to deauthorize client2 and client4, copy the string and remove those clients from the list. Set the new list of authorized clients and servers Set the value of auth.ssl-allow to your updated string. For example, the updated list shows three servers and three clients. 20.7.2. To Revoke SSL/TLS Certificate Authorization Using a SSL Certificate Revocation List To protect the cluster from malicious or unauthorized network entities, you can specify a path to a directory containing SSL certificate revocation list (CRL) using the ssl.crl-path option. The path containing the list of revoked certificates enables server nodes to stop the nodes with revoked certificates from accessing the cluster. For example, you can provide the path to a directory containing CRL with the volume set command as follows: Note Only the CA signed certificates can be revoked and not the self-signed certificates To set up the CRL files, perform the following: Copy the CRL files to a directory. Change directory to the directory containing CRL files. Compute hashes to the CRL files using the c_rehash utility. The hash and symbolic linking can be done using the c_rehash utility, which is available through the openssl-perl RPM. The name of the symbolic link must be the hash of the Common Name. For more information, see the crl man page. Set the ssl.crl-path volume option. where, path-to-directory has to be an absolute name of the directory that hosts the CRL files. | [
"gluster volume get VOLNAME auth.ssl-allow",
"gluster volume get sample_volname auth.ssl-allow server1,server2,server3,client1,client2,client3,client4,client5",
"server1,server2,server3,client1,client3,client5",
"gluster volume set VOLNAME auth.ssl-allow <list_of_systems>",
"gluster volume set sample_volname auth.ssl-allow server1,server2,server3,client1,client3,client5",
"gluster volume set vm-images ssl.crl-path /etc/ssl/",
"c_rehash .",
"gluster volume set VOLNAME ssl.crl-path path-to-directory"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-deauthorize-client |
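The CRL steps above can be combined into one short sequence. The directory, CRL file name, and volume name are examples only.
# Stage the CRL files, hash them with c_rehash, and point the volume at the directory
$ mkdir -p /etc/ssl/crl && cp revoked-clients.crl /etc/ssl/crl/
$ cd /etc/ssl/crl && c_rehash .
$ gluster volume set sample_volname ssl.crl-path /etc/ssl/crl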
Chapter 15. Managing user groups in IdM Web UI | Chapter 15. Managing user groups in IdM Web UI This chapter introduces user groups management using the IdM web UI. A user group is a set of users with common privileges, password policies, and other characteristics. A user group in Identity Management (IdM) can include: IdM users other IdM user groups external users, which are users that exist outside of IdM 15.1. The different group types in IdM IdM supports the following types of groups: POSIX groups (the default) POSIX groups support Linux POSIX attributes for their members. Note that groups that interact with Active Directory cannot use POSIX attributes. POSIX attributes identify users as separate entities. Examples of POSIX attributes relevant to users include uidNumber , a user number (UID), and gidNumber , a group number (GID). Non-POSIX groups Non-POSIX groups do not support POSIX attributes. For example, these groups do not have a GID defined. All members of this type of group must belong to the IdM domain. External groups Use external groups to add group members that exist in an identity store outside of the IdM domain, such as: A local system An Active Directory domain A directory service External groups do not support POSIX attributes. For example, these groups do not have a GID defined. Table 15.1. User groups created by default Group name Default group members ipausers All IdM users admins Users with administrative privileges, including the default admin user editors This is a legacy group that no longer has any special privileges trust admins Users with privileges to manage the Active Directory trusts When you add a user to a user group, the user gains the privileges and policies associated with the group. For example, to grant administrative privileges to a user, add the user to the admins group. Warning Do not delete the admins group. As admins is a pre-defined group required by IdM, this operation causes problems with certain commands. In addition, IdM creates user private groups by default whenever a new user is created in IdM. For more information about private groups, see Adding users without a private group . 15.2. Direct and indirect group members User group attributes in IdM apply to both direct and indirect members: when group B is a member of group A, all users in group B are considered indirect members of group A. For example, in the following diagram: User 1 and User 2 are direct members of group A. User 3, User 4, and User 5 are indirect members of group A. Figure 15.1. Direct and Indirect Group Membership If you set a password policy for user group A, the policy also applies to all users in user group B. 15.3. Adding a user group using IdM Web UI Follow this procedure to add a user group using the IdM Web UI. Prerequisites You are logged in to the IdM Web UI. Procedure Click Identity Groups , and select User Groups in the left sidebar. Click Add to start adding the group. Fill out the information about the group. For more information about user group types, see The different group types in IdM . You can specify a custom GID for the group. If you do this, be careful to avoid ID conflicts. If you do not specify a custom GID, IdM automatically assigns a GID from the available ID range. Click Add to confirm. 15.4. Deleting a user group using IdM Web UI Follow this procedure to delete a user group using the IdM Web UI. Note that deleting a group does not delete the group members from IdM. Prerequisites You are logged in to the IdM Web UI. 
Procedure Click Identity Groups and select User Groups . Select the group to delete. Click Delete . Click Delete to confirm. 15.5. Adding a member to a user group using IdM Web UI You can add both users and user groups as members of a user group. For more information, see The different group types in IdM and Direct and indirect group members . Prerequisites You are logged in to the IdM Web UI. Procedure Click Identity Groups and select User Groups in the left sidebar. Click the name of the group. Select the type of group member you want to add: Users, User Groups, or External . Click Add . Select the check box next to one or more members you want to add. Click the rightward arrow to move the selected members to the group. Click Add to confirm. 15.6. Adding users or groups as member managers to an IdM user group using the Web UI Follow this procedure to add users or groups as member managers to an IdM user group using the Web UI. Member managers can add users or groups to IdM user groups but cannot change the attributes of a group. Prerequisites You are logged in to the IdM Web UI. You must have the name of the user or group you are adding as member managers and the name of the group you want them to manage. Procedure Click Identity Groups and select User Groups in the left sidebar. Click the name of the group. Select the type of group member manager you want to add: Users or User Groups . Click Add . Select the check box next to one or more members you want to add. Click the rightward arrow to move the selected members to the group. Click Add to confirm. Note After you add a member manager to a user group, the update may take some time to spread to all clients in your Identity Management environment. Verification Verify the newly added user or user group has been added to the member manager list of users or user groups: Additional resources See ipa group-add-member-manager --help for more information. 15.7. Viewing group members using IdM Web UI Follow this procedure to view members of a group using the IdM Web UI. You can view both direct and indirect group members. For more information, see Direct and indirect group members . Prerequisites You are logged in to the IdM Web UI. Procedure Select Identity Groups . Select User Groups in the left sidebar. Click the name of the group you want to view. Switch between Direct Membership and Indirect Membership . 15.8. Removing a member from a user group using IdM Web UI Follow this procedure to remove a member from a user group using the IdM Web UI. Prerequisites You are logged in to the IdM Web UI. Procedure Click Identity Groups and select User Groups in the left sidebar. Click the name of the group. Select the type of group member you want to remove: Users, User Groups , or External . Select the check box next to the member you want to remove. Click Delete . Click Delete to confirm. 15.9. Removing users or groups as member managers from an IdM user group using the Web UI Follow this procedure to remove users or groups as member managers from an IdM user group using the Web UI. Member managers can remove users or groups from IdM user groups but cannot change the attributes of a group. Prerequisites You are logged in to the IdM Web UI. You must have the name of the existing member manager user or group you are removing and the name of the group they are managing. Procedure Click Identity Groups and select User Groups in the left sidebar. Click the name of the group. Select the type of member manager you want to remove: Users or User Groups .
Select the check box next to the member manager you want to remove. Click Delete . Click Delete to confirm. Note After you remove a member manager from a user group, the update may take some time to spread to all clients in your Identity Management environment. Verification Verify the user or user group has been removed from the member manager list of users or user groups: Additional resources See ipa group-add-member-manager --help for more details. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-user-groups-in-idm-web-ui_configuring-and-managing-idm
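The member-manager verification mentioned above can also be done from the command line. The group and user names in this sketch are placeholders; ipa group-show prints the member managers together with the other group details.
# Add a member manager, then display the group to verify the change
$ ipa group-add-member-manager project_users --users=manager_user
$ ipa group-show project_users --all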
function::sprint_usyms | function::sprint_usyms Name function::sprint_usyms - Return stack for user addresses from string Synopsis Arguments callers String with list of hexadecimal (user) addresses Description Perform a symbolic lookup of the addresses in the given string, which are assumed to be the result of prior calls to ustack , ucallers , and similar functions. Returns a simple backtrace from the given hex string, one line per address. Each line includes the symbol name (or the hex address if the symbol couldn't be resolved) and the module name (if found), as obtained from usymdata . It also includes the offset from the start of the function if found; otherwise the offset is added to the module (if found, between brackets). Returns the backtrace as a string (each line terminated by a newline character). Note that the returned stack will be truncated to MAXSTRINGLEN; to print fuller and richer stacks use print_usyms . | [
"sprint_usyms(callers:string)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sprint-usyms |
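As a rough usage sketch, sprint_usyms is typically fed the hex string produced by ubacktrace or ucallers inside a probe handler. The following one-liner is illustrative only: the target binary and probe point are placeholders, and the binary needs debuginfo available for symbols to resolve.

stap --ldd -d /usr/bin/ls -e 'probe process("/usr/bin/ls").function("main") { println(sprint_usyms(ubacktrace())) }' -c /usr/bin/ls

The --ldd and -d options simply widen the set of modules whose symbol data (usymdata) SystemTap consults when resolving the addresses.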
Chapter 4. Adding Servers to the Trusted Storage Pool | Chapter 4. Adding Servers to the Trusted Storage Pool A storage pool is a network of storage servers. When the first server starts, the storage pool consists of that server alone. Adding additional storage servers to the storage pool is achieved using the probe command from a running, trusted storage server. Important Before adding servers to the trusted storage pool, you must ensure that the ports specified in Chapter 3, Considerations for Red Hat Gluster Storage are open. On Red Hat Enterprise Linux 7, enable the glusterFS firewall service in the active zones for runtime and permanent mode using the following commands: To get a list of active zones, run the following command: To allow the firewall service in the active zones, run the following commands: For more information about using firewalls, see section Using Firewalls in the Red Hat Enterprise Linux 7 Security Guide : https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html . Note When any two gluster commands are executed concurrently on the same volume, the following error is displayed: Another transaction is in progress. This behavior in the Red Hat Gluster Storage prevents two or more commands from simultaneously modifying a volume configuration, potentially resulting in an inconsistent state. Such an implementation is common in environments with monitoring frameworks such as the Red Hat Gluster Storage Console, and Red Hat Enterprise Virtualization Manager. For example, in a four node Red Hat Gluster Storage Trusted Storage Pool, this message is observed when gluster volume status VOLNAME command is executed from two of the nodes simultaneously. 4.1. Adding Servers to the Trusted Storage Pool The gluster peer probe [server] command is used to add servers to the trusted server pool. Note Probing a node from lower version to a higher version of Red Hat Gluster Storage node is not supported. Adding Three Servers to a Trusted Storage Pool Create a trusted storage pool consisting of three storage servers, which comprise a volume. Prerequisites The glusterd service must be running on all storage servers requiring addition to the trusted storage pool. See Chapter 22, Starting and Stopping the glusterd service for service start and stop commands. Server1 , the trusted storage server, is started. The host names of the target servers must be resolvable by DNS. Run gluster peer probe [server] from Server 1 to add additional servers to the trusted storage pool. Note Self-probing Server1 will result in an error because it is part of the trusted storage pool by default. All the servers in the Trusted Storage Pool must have RDMA devices if either RDMA or RDMA,TCP volumes are created in the storage pool. The peer probe must be performed using IP/hostname assigned to the RDMA device. Verify the peer status from all servers using the following command: Important If the existing trusted storage pool has a geo-replication session, then after adding the new server to the trusted storage pool, perform the steps listed at Section 10.6, "Starting Geo-replication on a Newly Added Brick, Node, or Volume" . Note Verify that time is synchronized on all Gluster nodes by using the following command: | [
"firewall-cmd --get-active-zones",
"firewall-cmd --zone= zone_name --add-service=glusterfs firewall-cmd --zone= zone_name --add-service=glusterfs --permanent",
"gluster peer probe server2 Probe successful gluster peer probe server3 Probe successful gluster peer probe server4 Probe successful",
"gluster peer status Number of Peers: 3 Hostname: server2 Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5 State: Peer in Cluster (Connected) Hostname: server3 Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 State: Peer in Cluster (Connected) Hostname: server4 Uuid: 3e0caba-9df7-4f66-8e5d-cbc348f29ff7 State: Peer in Cluster (Connected)",
"for peer in `gluster peer status | grep Hostname | awk -F':' '{print USD2}' | awk '{print USD1}'`; do clockdiff USDpeer; done"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-Trusted_Storage_Pools |
Chapter 6. PersistentVolumeClaim [v1] | Chapter 6. PersistentVolumeClaim [v1] Description PersistentVolumeClaim is a user's request for and claim to a persistent volume Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes status object PersistentVolumeClaimStatus is the current status of a persistent volume claim. 6.1.1. .spec Description PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object VolumeResourceRequirements describes the storage resource requirements for a volume. selector LabelSelector selector is a label query over volumes to consider for binding. 
storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 6.1.2. .spec.dataSource Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 6.1.3. .spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. 
* While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 6.1.4. .spec.resources Description VolumeResourceRequirements describes the storage resource requirements for a volume. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 6.1.5. .status Description PersistentVolumeClaimStatus is the current status of a persistent volume claim. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. 
For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources object (Quantity) allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity object (Quantity) capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'Resizing'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc currentVolumeAttributesClassName string currentVolumeAttributesClassName is the current name of the VolumeAttributesClass the PVC is using. When unset, there is no VolumeAttributeClass applied to this PersistentVolumeClaim This is a beta field and requires enabling VolumeAttributesClass feature (off by default). modifyVolumeStatus object ModifyVolumeStatus represents the status object of ControllerModifyVolume operation phase string phase represents the current phase of PersistentVolumeClaim. Possible enum values: - "Bound" used for PersistentVolumeClaims that are bound - "Lost" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. 
- "Pending" used for PersistentVolumeClaims that are not yet bound 6.1.6. .status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'Resizing'. Type array 6.1.7. .status.conditions[] Description PersistentVolumeClaimCondition contains details about state of pvc Type object Required type status Property Type Description lastProbeTime Time lastProbeTime is the time we probed the condition. lastTransitionTime Time lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "Resizing" that means the underlying persistent volume is being resized. status string type string 6.1.8. .status.modifyVolumeStatus Description ModifyVolumeStatus represents the status object of ControllerModifyVolume operation Type object Required status Property Type Description status string status is the status of the ControllerModifyVolume operation. It can be in any of following states: - Pending Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing. - InProgress InProgress indicates that the volume is being modified. - Infeasible Infeasible indicates that the request has been rejected as invalid by the CSI driver. To resolve the error, a valid VolumeAttributesClass needs to be specified. Note: New statuses can be added in the future. Consumers should check for unknown statuses and fail appropriately. Possible enum values: - "InProgress" InProgress indicates that the volume is being modified - "Infeasible" Infeasible indicates that the request has been rejected as invalid by the CSI driver. To resolve the error, a valid VolumeAttributesClass needs to be specified - "Pending" Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing targetVolumeAttributesClassName string targetVolumeAttributesClassName is the name of the VolumeAttributesClass the PVC currently being reconciled 6.2. API endpoints The following API endpoints are available: /api/v1/persistentvolumeclaims GET : list or watch objects of kind PersistentVolumeClaim /api/v1/watch/persistentvolumeclaims GET : watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/persistentvolumeclaims DELETE : delete collection of PersistentVolumeClaim GET : list or watch objects of kind PersistentVolumeClaim POST : create a PersistentVolumeClaim /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims GET : watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name} DELETE : delete a PersistentVolumeClaim GET : read the specified PersistentVolumeClaim PATCH : partially update the specified PersistentVolumeClaim PUT : replace the specified PersistentVolumeClaim /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims/{name} GET : watch changes to an object of kind PersistentVolumeClaim. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name}/status GET : read status of the specified PersistentVolumeClaim PATCH : partially update status of the specified PersistentVolumeClaim PUT : replace status of the specified PersistentVolumeClaim 6.2.1. /api/v1/persistentvolumeclaims HTTP method GET Description list or watch objects of kind PersistentVolumeClaim Table 6.1. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaimList schema 401 - Unauthorized Empty 6.2.2. /api/v1/watch/persistentvolumeclaims HTTP method GET Description watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. Table 6.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /api/v1/namespaces/{namespace}/persistentvolumeclaims HTTP method DELETE Description delete collection of PersistentVolumeClaim Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PersistentVolumeClaim Table 6.5. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaimList schema 401 - Unauthorized Empty HTTP method POST Description create a PersistentVolumeClaim Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body PersistentVolumeClaim schema Table 6.8. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 202 - Accepted PersistentVolumeClaim schema 401 - Unauthorized Empty 6.2.4. /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims HTTP method GET Description watch individual changes to a list of PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead. Table 6.9. 
HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.5. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name} Table 6.10. Global path parameters Parameter Type Description name string name of the PersistentVolumeClaim HTTP method DELETE Description delete a PersistentVolumeClaim Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.12. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 202 - Accepted PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method GET Description read the specified PersistentVolumeClaim Table 6.13. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PersistentVolumeClaim Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.15. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PersistentVolumeClaim Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body PersistentVolumeClaim schema Table 6.18. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty 6.2.6. /api/v1/watch/namespaces/{namespace}/persistentvolumeclaims/{name} Table 6.19. Global path parameters Parameter Type Description name string name of the PersistentVolumeClaim HTTP method GET Description watch changes to an object of kind PersistentVolumeClaim. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.7. /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name}/status Table 6.21. Global path parameters Parameter Type Description name string name of the PersistentVolumeClaim HTTP method GET Description read status of the specified PersistentVolumeClaim Table 6.22. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PersistentVolumeClaim Table 6.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.24. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PersistentVolumeClaim Table 6.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.26. Body parameters Parameter Type Description body PersistentVolumeClaim schema Table 6.27. HTTP responses HTTP code Reponse body 200 - OK PersistentVolumeClaim schema 201 - Created PersistentVolumeClaim schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/storage_apis/persistentvolumeclaim-v1 |
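To make the spec fields above concrete, the following is a minimal claim that exercises accessModes, volumeMode, resources.requests, and storageClassName. All names and the requested size are placeholders, not values taken from the reference.

cat <<'EOF' | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
  namespace: example-namespace
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: example-storage-class
EOF
oc get pvc example-claim -n example-namespace

The second command reports the phase from the .status block described above (Pending until a matching PersistentVolume is bound).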
Chapter 84. KafkaClientAuthenticationScramSha256 schema reference | Chapter 84. KafkaClientAuthenticationScramSha256 schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationScramSha256 schema properties To configure SASL-based SCRAM-SHA-256 authentication, set the type property to scram-sha-256 . The SCRAM-SHA-256 authentication mechanism requires a username and password. Example SASL-based SCRAM-SHA-256 client authentication configuration for Kafka Connect authentication: type: scram-sha-256 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, you can create a text file that contains the password, in cleartext, to use for authentication: echo -n <password> > <my_password>.txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic <my-connect-secret-name> --from-file=<my_password_field_name>=./<my_password>.txt Example secret for SCRAM-SHA-256 client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret , and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. 84.1. KafkaClientAuthenticationScramSha256 schema properties Property Property type Description type string Must be scram-sha-256 . username string Username used for the authentication. passwordSecret PasswordSecretSource Reference to the Secret which holds the password. | [
"authentication: type: scram-sha-256 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field",
"echo -n <password> > <my_password>.txt",
"create secret generic <my-connect-secret-name> --from-file=<my_password_field_name>=./<my_password>.txt",
"apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaClientAuthenticationScramSha256-reference |
8.159. pyparted | 8.159. pyparted 8.159.1. RHBA-2013:1616 - pyparted bug fix update Updated pyparted packages that fix one bug are now available for Red Hat Enterprise Linux 6. The pyparted packages contain Python bindings for the libparted library. They are primarily used by the Red Hat Enterprise Linux installation software. Bug Fix BZ# 896024 Due to a bug in the underlying source code, an attempt to run the parted.version() function caused a system error to be returned. This bug has been fixed and parted.version() can now be executed as expected. Users of pyparted are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/pyparted |
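A quick sanity check of the fixed binding, after updating the packages, is to call it directly from a shell; this is only an illustration and not part of the erratum itself:

python -c 'import parted; print(parted.version())'

With the updated pyparted installed, this prints the version information instead of returning a system error.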
Overcloud Parameters | Overcloud Parameters Red Hat OpenStack Platform 16.0 Parameters for customizing the core template collection for a Red Hat OpenStack Platform overcloud OpenStack Documentation Team [email protected] Abstract This guide provides parameters for customizing the overcloud in Red Hat OpenStack Platform. Use this guide in conjunction with the Advanced Overcloud Customization guide. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/overcloud_parameters/index |
A.3. Partition Naming Schemes and Mount Points | A.3. Partition Naming Schemes and Mount Points A common source of confusion for users unfamiliar with Linux is the matter of how partitions are used and accessed by the Linux operating system. In DOS/Windows, it is relatively simple: Each partition gets a "drive letter." You then use the correct drive letter to refer to files and directories on its corresponding partition. This is entirely different from how Linux deals with partitions and, for that matter, with disk storage in general. This section describes the main principles of partition naming scheme and the way how partitions are accessed in Red Hat Enterprise Linux. A.3.1. Partition Naming Scheme Red Hat Enterprise Linux uses a naming scheme that is file-based, with file names in the form of /dev/ xxyN . Device and partition names consist of the following: /dev/ This is the name of the directory in which all device files reside. Because partitions reside on hard disks, and hard disks are devices, the files representing all possible partitions reside in /dev/ . xx The first two letters of the partition name indicate the type of device on which the partition resides, usually sd . y This letter indicates which device the partition is on. For example, /dev/sda for the first hard disk, /dev/sdb for the second, and so on. N The final number denotes the partition. The first four (primary or extended) partitions are numbered 1 through 4 . Logical partitions start at 5 . So, for example, /dev/sda3 is the third primary or extended partition on the first hard disk, and /dev/sdb6 is the second logical partition on the second hard disk. Note Even if Red Hat Enterprise Linux can identify and refer to all types of disk partitions, it might not be able to read the file system and therefore access stored data on every partition type. However, in many cases, it is possible to successfully access data on a partition dedicated to another operating system. A.3.2. Disk Partitions and Mount Points In Red Hat Enterprise Linux each partition is used to form part of the storage necessary to support a single set of files and directories. This is done by associating a partition with a directory through a process known as mounting . Mounting a partition makes its storage available starting at the specified directory (known as a mount point ). For example, if partition /dev/sda5 is mounted on /usr/ , that would mean that all files and directories under /usr/ physically reside on /dev/sda5 . So the file /usr/share/doc/FAQ/txt/Linux-FAQ would be stored on /dev/sda5 , while the file /etc/gdm/custom.conf would not. Continuing the example, it is also possible that one or more directories below /usr/ would be mount points for other partitions. For instance, a partition (say, /dev/sda7 ) could be mounted on /usr/local/ , meaning that /usr/local/man/whatis would then reside on /dev/sda7 rather than /dev/sda5 . A.3.3. How Many Partitions? At this point in the process of preparing to install Red Hat Enterprise Linux, you must give some consideration to the number and size of the partitions to be used by your new operating system. However, there is no one right answer to this question. It depends on your needs and requirements. Keeping this in mind, Red Hat recommends that, unless you have a reason for doing otherwise, you should at least create the following partitions: swap , /boot/ , and / (root). 
For more information, see Section 8.14.4.4, "Recommended Partitioning Scheme" for AMD64 and Intel 64 systems, Section 13.15.4.4, "Recommended Partitioning Scheme" for IBM Power Systems servers, and Section 18.15.3.4, "Recommended Partitioning Scheme" for IBM Z. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-partitioning-naming-schemes-and-mount-points |
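A short illustration of the naming scheme and mount points described above; the device names and mount points are examples only and will differ on your system:

lsblk -o NAME,TYPE,MOUNTPOINT /dev/sda   # lists sda1, sda2, ... and where each partition is mounted
mount /dev/sda5 /usr                     # afterwards, files under /usr physically reside on /dev/sda5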
9.2. ia64 Architectures | 9.2. ia64 Architectures Bugzilla #453033 On some SGI Altix systems that feature the IOC4 multi-function device, you may encounter problems when using attached IDE devices (such as CD-ROM drives). This is caused by a bug in the sgiioc4 IDE driver, which prevents some devices from being detected properly on system boot. You can work around this bug by manually loading the driver, which in turn allows attached IDE devices to be detected properly. To do so, run the following command as root: /sbin/modprobe sgiioc4 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/4.8_release_notes/ar01s09s02 |
Chapter 2. OpenShift Data Foundation deployed using local storage devices | Chapter 2. OpenShift Data Foundation deployed using local storage devices 2.1. Replacing storage nodes on bare metal infrastructure To replace an operational node, see Section 2.1.1, "Replacing an operational node on bare metal user-provisioned infrastructure" . To replace a failed node, see Section 2.1.2, "Replacing a failed node on bare metal user-provisioned infrastructure" . 2.1.1. Replacing an operational node on bare metal user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node, and get the labels on the node that you need to replace: <node_name> Specify the name of node that you need to replace. Identify the monitor pod (if any), and OSDs that are running in the node that you need to replace: Scale down the deployments of the pods identified in the step: For example: Mark the node as unschedulable: Drain the node: Delete the node: Get a new bare-metal machine with the required infrastructure. See Installing on bare metal . Important For information about how to replace a master node when you have installed OpenShift Data Foundation on a three-node OpenShift compact bare-metal cluster, see the Backup and Restore guide in the OpenShift Container Platform documentation. Create a new OpenShift Container Platform node using the new bare-metal machine. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed, and newnode.example.com is the new node. Determine the localVolumeSet to edit: Example output: Update the localVolumeSet definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In the this example, server3.example.com is removed and newnode.example.com is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . 
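A representative way to launch that removal is to instantiate the ocs-osd-removal template with the parameters just described; treat this as a sketch and confirm the template and parameter names against your installed OpenShift Data Foundation version:

oc process -n openshift-storage ocs-osd-removal \
  -p FAILED_OSD_IDS=<failed_osd_id> -p FORCE_OSD_REMOVAL=false | oc create -f -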
The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails, and the pod is not in the expected Completed state, check the pod logs for further debugging: For example: Identify the Persistent Volume (PV) associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created, and is in the Running state: Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.1.2. Replacing a failed node on bare metal user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node, and get the labels on the node that you need to replace: <node_name> Specify the name of node that you need to replace. Identify the monitor pod (if any), and OSDs that are running in the node that you need to replace: Scale down the deployments of the pods identified in the step: For example: Mark the node as unschedulable: Remove the pods which are in Terminating state: Drain the node: Delete the node: Get a new bare-metal machine with the required infrastructure. See Installing on bare metal . Important For information about how to replace a master node when you have installed OpenShift Data Foundation on a three-node OpenShift compact bare-metal cluster, see the Backup and Restore guide in the OpenShift Container Platform documentation. Create a new OpenShift Container Platform node using the new bare-metal machine. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . 
Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed, and newnode.example.com is the new node. Determine the localVolumeSet to edit: Example output: Update the localVolumeSet definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In the this example, server3.example.com is removed and newnode.example.com is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails, and the pod is not in the expected Completed state, check the pod logs for further debugging: For example: Identify the Persistent Volume (PV) associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created, and is in the Running state: Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.2. Replacing storage nodes on IBM Z or IBM(R) LinuxONE infrastructure You can choose one of the following procedures to replace storage nodes: Section 2.2.1, "Replacing operational nodes on IBM Z or IBM(R) LinuxONE infrastructure" . 
Section 2.2.2, "Replacing failed nodes on IBM Z or IBM(R) LinuxONE infrastructure" . 2.2.1. Replacing operational nodes on IBM Z or IBM(R) LinuxONE infrastructure Use this procedure to replace an operational node on IBM Z or IBM(R) LinuxONE infrastructure. Procedure Identify the node and get labels on the node to be replaced. Make a note of the rack label. Identify the mon (if any) and object storage device (OSD) pods that are running in the node to be replaced. Scale down the deployments of the pods identified in the step. For example: Mark the nodes as unschedulable. Remove the pods which are in the Terminating state. Drain the node. Delete the node. Get a new IBM Z storage node as a replacement. Check for certificate signing requests (CSRs) related to OpenShift Data Foundation that are in Pending state: Approve all required OpenShift Data Foundation CSRs for the new node: Click Compute Nodes in OpenShift Web Console, confirm if the new node is in Ready state. Apply the openshift-storage label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels Add cluster.ocs.openshift.io/openshift-storage and click Save . From Command line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: Add a new worker node to localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor. In the above example, server3.example.com was removed and newnode.example.com is the new node. Determine which localVolumeSet to edit. Replace local-storage-project in the following commands with the name of your local storage project. The default project name is openshift-local-storage in OpenShift Data Foundation 4.6 and later. versions use local-storage by default. Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor. In the above example, server3.example.com was removed and newnode.example.com is the new node. Verify that the new localblock PV is available. Change to the openshift-storage project. Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required. Identify the PVC as afterwards we need to delete PV associated with that specific PVC. where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix . In this example, the deployment name is rook-ceph-osd-1 . Example output: In this example, the PVC name is ocs-deviceset-localblock-0-data-0-g2mmc . Remove the failed OSD from the cluster. You can remove more than one OSD by adding comma separated OSD IDs in the command. (For example: FAILED_OSD_IDS=0,1,2) Warning This step results in OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal pod. A status of Completed confirms that the OSD removal job succeeded. Note If ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: It may be necessary to manually cleanup the removed OSD as follows: Delete the PV associated with the failed node. Identify the PV associated with the PVC. The PVC name must be identical to the name that is obtained while removing the failed OSD from the cluster. If there is a PV in Released state, delete it. 
For example: Identify the crashcollector pod deployment. If there is an existing crashcollector pod deployment, delete it. Delete the ocs-osd-removal job. Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that new Object Storage Device (OSD) pods are running on the replacement node: Optional: If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.2.2. Replacing failed nodes on IBM Z or IBM(R) LinuxONE infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that new Object Storage Device (OSD) pods are running on the replacement node: Optional: If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.3. Replacing storage nodes on IBM Power infrastructure For OpenShift Data Foundation, you can perform node replacement proactively for an operational node, and reactively for a failed node, for the deployments related to IBM Power. 2.3.1. Replacing an operational or failed storage node on IBM Power Prerequisites Ensure that the replacement nodes are configured with the similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node, and get the labels on the node that you need to replace: <node_name> Specify the name of node that you need to replace. 
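A typical way to carry out that identification step is to list the node together with its labels, for example (the placeholder matches the one used in the text):

oc get nodes --show-labels | grep <node_name>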
Identify the mon (if any), and Object Storage Device (OSD) pods that are running in the node that you need to replace: Scale down the deployments of the pods identified in the step: For example: Mark the node as unschedulable: Remove the pods which are in Terminating state: Drain the node: Delete the node: Get a new IBM Power machine with the required infrastructure. See Installing a cluster on IBM Power . Create a new OpenShift Container Platform node using the new IBM Power machine. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a newly added worker node to the localVolume . Determine the localVolume you need to edit: Example output: Update the localVolume definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In the this example, worker-0 is removed and worker-3 is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required. Identify the Persistent Volume Claim (PVC): where, <osd_id_to_remove> is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-1 . Example output: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in the OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job has succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails, and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: Delete the PV associated with the failed node. Identify the PV associated with the PVC: Example output: The PVC name must be identical to the name that is obtained while removing the failed OSD from the cluster. 
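A quick cross-check from the CLI is sketched below (ocs-deviceset-localblock-0-data-0-g2mmc is only the example PVC name used above; substitute your own):
# list localblock PVs in Released state and note the PVC each one was bound to
oc get pv -L kubernetes.io/hostname | grep localblock | grep Released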
If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created and is in the Running state: Example output: The OSD and monitor pod might take several minutes to get to the Running state. Verify that the new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.4. Replacing storage nodes on VMware infrastructure To replace an operational node, see: Section 2.4.1, "Replacing an operational node on VMware user-provisioned infrastructure" . Section 2.4.2, "Replacing an operational node on VMware installer-provisioned infrastructure" . To replace a failed node, see: Section 2.4.3, "Replacing a failed node on VMware user-provisioned infrastructure" . Section 2.4.4, "Replacing a failed node on VMware installer-provisioned infrastructure" . 2.4.1. Replacing an operational node on VMware user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node, and get the labels on the node that you need to replace: <node_name> Specify the name of the node that you need to replace. Identify the monitor pod (if any), and OSDs that are running in the node that you need to replace: Scale down the deployments of the pods identified in the previous step: For example: Mark the node as unschedulable: Drain the node: Delete the node: Log in to VMware vSphere and terminate the Virtual Machine (VM) that you have identified. Create a new VM on VMware vSphere with the required infrastructure. See Infrastructure requirements . Create a new OpenShift Container Platform worker node using the new VM. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node:
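For example, a minimal sketch (assuming <new_node_name> is the node you just added):
# label the node so that OpenShift Data Foundation schedules storage components on it
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""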
Identify the namespace where OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed, and newnode.example.com is the new node. Determine the localVolumeSet to edit: Example output: Update the localVolumeSet definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In the this example, server3.example.com is removed and newnode.example.com is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails, and the pod is not in the expected Completed state, check the pod logs for further debugging: For example: Identify the Persistent Volume (PV) associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created, and is in the Running state: Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.4.2. Replacing an operational node on VMware installer-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with the similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . 
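If you prefer the CLI, one way to see the machine-to-node mapping is sketched below (openshift-machine-api is the usual namespace for Machine objects; adjust it if your cluster differs):
# list machines together with the nodes they back
oc get machines -n openshift-machine-api -o wide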
Get labels on the node: <node_name> Specify the name of node that you need to replace. Identify the mon (if any), and Object Storage Devices (OSDs) that are running in the node: Scale down the deployments of the pods that you identified in the step: For example: Mark the node as unschedulable: Drain the node: Click Compute Machines . Search for the required machine. Besides the required machine, click Action menu (...) Delete Machine . Click Delete to confirm the machine deletion. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Physically add a new device to the node. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where the OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node and remove the failed node. Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed, and newnode.example.com is the new node. Determine the localVolumeSet you need to edit: Example output: Update the localVolumeSet definition to include the new node and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed, and newnode.example.com is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: Identify the PV associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . 
Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created and is in the Running state. Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.4.3. Replacing a failed node on VMware user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node, and get the labels on the node that you need to replace: <node_name> Specify the name of node that you need to replace. Identify the monitor pod (if any), and OSDs that are running in the node that you need to replace: Scale down the deployments of the pods identified in the step: For example: Mark the node as unschedulable: Remove the pods which are in Terminating state: Drain the node: Delete the node: Log in to VMware vSphere and terminate the Virtual Machine (VM) that you have identified. Create a new VM on VMware vSphere with the required infrastructure. See Infrastructure requirements . Create a new OpenShift Container Platform worker node using the new VM. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed, and newnode.example.com is the new node. Determine the localVolumeSet to edit: Example output: Update the localVolumeSet definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In the this example, server3.example.com is removed and newnode.example.com is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. 
You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails, and the pod is not in the expected Completed state, check the pod logs for further debugging: For example: Identify the Persistent Volume (PV) associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created, and is in the Running state: Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.4.4. Replacing a failed node on VMware installer-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with the similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Get the labels on the node: <node_name> Specify the name of node that you need to replace. Identify the mon (if any) and Object Storage Devices (OSDs) that are running in the node: Scale down the deployments of the pods identified in the step: For example: Mark the node as unschedulable: Remove the pods which are in Terminating state: Drain the node: Click Compute Machines . Search for the required machine. Besides the required machine, click Action menu (...) Delete Machine . Click Delete to confirm the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Physically add a new device to the node. 
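Before labeling the node, you can optionally confirm from the CLI that it is Ready and that the new device is visible (a sketch; <new_node_name> is a placeholder):
# check that the node reports Ready
oc get nodes | grep <new_node_name>
# list block devices from a host chroot on that node
oc debug node/<new_node_name> -- chroot /host lsblk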
Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where the OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed and newnode.example.com is the new node. Determine the localVolumeSet you need to edit. Example output: Update the localVolumeSet definition to include the new node and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed and newnode.example.com is the new node. Verify that the new localblock PV is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging: For example: Identify the PV associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created, and is in the Running state: Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . | [
"oc get nodes --show-labels | grep <node_name>",
"oc get pods -n openshift-storage -o wide | grep -i <node_name>",
"oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage",
"oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage",
"oc scale deployment --selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc delete node <node_name>",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"echo USDlocal_storage_project",
"openshift-local-storage",
"oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]",
"oc get -n USDlocal_storage_project localvolumeset",
"NAME AGE localblock 25h",
"oc edit -n USDlocal_storage_project localvolumeset localblock",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]",
"USDoc get pv | grep localblock | grep Available",
"local-pv-551d950 512Gi RWO Delete Available localblock 26s",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> | oc create -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc get pv -L kubernetes.io/hostname | grep localblock | grep Released",
"local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1",
"oc delete pv <persistent_volume>",
"oc delete pv local-pv-d6bf175b",
"persistentvolume \"local-pv-d9c5cbd6\" deleted",
"oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage",
"oc delete deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pod -n openshift-storage | grep mon",
"rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep <node_name>",
"oc get pods -n openshift-storage -o wide | grep -i <node_name>",
"oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage",
"oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage",
"oc scale deployment --selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage",
"oc adm cordon <node_name>",
"oc get pods -A -o wide | grep -i <node_name> | awk '{if (USD4 == \"Terminating\") system (\"oc -n \" USD1 \" delete pods \" USD2 \" --grace-period=0 \" \" --force \")}'",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc delete node <node_name>",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"echo USDlocal_storage_project",
"openshift-local-storage",
"oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]",
"oc get -n USDlocal_storage_project localvolumeset",
"NAME AGE localblock 25h",
"oc edit -n USDlocal_storage_project localvolumeset localblock",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]",
"USDoc get pv | grep localblock | grep Available",
"local-pv-551d950 512Gi RWO Delete Available localblock 26s",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> | oc create -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc get pv -L kubernetes.io/hostname | grep localblock | grep Released",
"local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1",
"oc delete pv <persistent_volume>",
"oc delete pv local-pv-d6bf175b",
"persistentvolume \"local-pv-d9c5cbd6\" deleted",
"oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage",
"oc delete deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pod -n openshift-storage | grep mon",
"rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep <node_name>",
"oc get pods -n openshift-storage -o wide | grep -i <node_name>",
"oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage",
"oc adm cordon <node_name>",
"oc get pods -A -o wide | grep -i <node_name> | awk '{if (USD4 == \"Terminating\") system (\"oc -n \" USD1 \" delete pods \" USD2 \" --grace-period=0 \" \" --force \")}'",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc delete node <node_name>",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc edit -n local-storage-project localvolumediscovery auto-discover-devices [...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]",
"oc get -n local-storage-project localvolumeset NAME AGE localblock 25h",
"oc edit -n local-storage-project localvolumeset localblock [...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]",
"oc get pv | grep localblock CAPA- ACCESS RECLAIM STORAGE NAME CITY MODES POLICY STATUS CLAIM CLASS AGE local-pv- 931Gi RWO Delete Bound openshift-storage/ localblock 25h 3e8964d3 ocs-deviceset-2-0 -79j94 local-pv- 931Gi RWO Delete Bound openshift-storage/ localblock 25h 414755e0 ocs-deviceset-1-0 -959rp local-pv- 931Gi RWO Delete Available localblock 3m24s b481410 local-pv- 931Gi RWO Delete Bound openshift-storage/ localblock 25h d9c5cbd6 ocs-deviceset-0-0 -nvs68",
"oc project openshift-storage",
"osd_id_to_remove=1 oc get -n openshift-storage -o yaml deployment rook-ceph-osd-USD{osd_id_to_remove} | grep ceph.rook.io/pvc",
"ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} |oc create -f -",
"oc get pod -l job-name=ocs-osd-removal- osd_id_to_remove -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal- osd_id_to_remove -n openshift-storage --tail=-1",
"ceph osd crush remove osd.osd_id_to_remove ceph osd rm osd_id_to_remove ceph auth del osd.osd_id_to_remove ceph osd crush rm osd_id_to_remove",
"oc get pv -L kubernetes.io/hostname | grep localblock | grep Released local-pv-5c9b8982 500Gi RWO Delete Released openshift-storage/ocs-deviceset-localblock-0-data-0-g2mmc localblock 24h worker-0",
"oc delete pv <persistent-volume>",
"oc delete pv local-pv-5c9b8982 persistentvolume \"local-pv-5c9b8982\" deleted",
"oc get deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name> -n openshift-storage",
"oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name> -n openshift-storage",
"oc delete job ocs-osd-removal-USD{osd_id_to_remove}",
"job.batch \"ocs-osd-removal-0\" deleted",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep <node_name>",
"oc get pods -n openshift-storage -o wide | grep -i <node_name>",
"oc scale deployment rook-ceph-mon-a --replicas=0 -n openshift-storage",
"oc scale deployment rook-ceph-osd-1 --replicas=0 -n openshift-storage",
"oc scale deployment --selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage",
"oc adm cordon <node_name>",
"oc get pods -A -o wide | grep -i <node_name> | awk '{if (USD4 == \"Terminating\") system (\"oc -n \" USD1 \" delete pods \" USD2 \" --grace-period=0 \" \" --force \")}'",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc delete node <node_name>",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=''",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"echo USDlocal_storage_project",
"openshift-local-storage",
"oc get -n USDlocal_storage_project localvolume",
"NAME AGE localblock 25h",
"oc edit -n USDlocal_storage_project localvolume localblock",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: #- worker-0 - worker-1 - worker-2 - worker-3 [...]",
"oc get pv | grep localblock",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS AGE local-pv-3e8964d3 500Gi RWO Delete Bound ocs-deviceset-localblock-2-data-0-mdbg9 localblock 25h local-pv-414755e0 500Gi RWO Delete Bound ocs-deviceset-localblock-1-data-0-4cslf localblock 25h local-pv-b481410 500Gi RWO Delete Available localblock 3m24s local-pv-5c9b8982 500Gi RWO Delete Bound ocs-deviceset-localblock-0-data-0-g2mmc localblock 25h",
"oc project openshift-storage",
"osd_id_to_remove=1",
"oc get -n openshift-storage -o yaml deployment rook-ceph-osd-USD{ <osd_id_to_remove> } | grep ceph.rook.io/pvc",
"ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> | oc create -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc get pv -L kubernetes.io/hostname | grep localblock | grep Released",
"local-pv-5c9b8982 500Gi RWO Delete Released openshift-storage/ocs-deviceset-localblock-0-data-0-g2mmc localblock 24h worker-0",
"oc delete pv <persistent_volume>",
"oc delete pv local-pv-5c9b8982",
"persistentvolume \"local-pv-5c9b8982\" deleted",
"oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage",
"oc delete deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pod -n openshift-storage | grep mon",
"rook-ceph-mon-b-74f6dc9dd6-4llzq 1/1 Running 0 6h14m rook-ceph-mon-c-74948755c-h7wtx 1/1 Running 0 4h24m rook-ceph-mon-d-598f69869b-4bv49 1/1 Running 0 162m",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep <node_name>",
"oc get pods -n openshift-storage -o wide | grep -i <node_name>",
"oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage",
"oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage",
"oc scale deployment --selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc delete node <node_name>",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"echo USDlocal_storage_project",
"openshift-local-storage",
"oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]",
"oc get -n USDlocal_storage_project localvolumeset",
"NAME AGE localblock 25h",
"oc edit -n USDlocal_storage_project localvolumeset localblock",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]",
"USDoc get pv | grep localblock | grep Available",
"local-pv-551d950 512Gi RWO Delete Available localblock 26s",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> | oc create -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc get pv -L kubernetes.io/hostname | grep localblock | grep Released",
"local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1",
"oc delete pv <persistent_volume>",
"oc delete pv local-pv-d6bf175b",
"persistentvolume \"local-pv-d9c5cbd6\" deleted",
"oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage",
"oc delete deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pod -n openshift-storage | grep mon",
"rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep <node_name>",
"oc get pods -n openshift-storage -o wide | grep -i <node_name>",
"oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage",
"oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage",
"oc scale deployment --selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"echo USDlocal_storage_project",
"openshift-local-storage",
"oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]",
"oc get -n USDlocal_storage_project localvolumeset",
"NAME AGE localblock 25h",
"oc edit -n USDlocal_storage_project localvolumeset localblock",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]",
"oc get pv | grep localblock | grep Available",
"local-pv-551d950 512Gi RWO Delete Available localblock 26s",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> | oc create -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc get pv -L kubernetes.io/hostname | grep localblock | grep Released",
"local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1",
"oc delete pv <persistent_volume>",
"oc delete pv local-pv-d6bf175b",
"persistentvolume \"local-pv-d9c5cbd6\" deleted",
"oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage",
"oc delete deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pod -n openshift-storage | grep mon",
"rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep <node_name>",
"oc get pods -n openshift-storage -o wide | grep -i <node_name>",
"oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage",
"oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage",
"oc scale deployment --selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage",
"oc adm cordon <node_name>",
"oc get pods -A -o wide | grep -i <node_name> | awk '{if (USD4 == \"Terminating\") system (\"oc -n \" USD1 \" delete pods \" USD2 \" --grace-period=0 \" \" --force \")}'",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc delete node <node_name>",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"echo USDlocal_storage_project",
"openshift-local-storage",
"oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]",
"oc get -n USDlocal_storage_project localvolumeset",
"NAME AGE localblock 25h",
"oc edit -n USDlocal_storage_project localvolumeset localblock",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]",
"USDoc get pv | grep localblock | grep Available",
"local-pv-551d950 512Gi RWO Delete Available localblock 26s",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> | oc create -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc get pv -L kubernetes.io/hostname | grep localblock | grep Released",
"local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1",
"oc delete pv <persistent_volume>",
"oc delete pv local-pv-d6bf175b",
"persistentvolume \"local-pv-d9c5cbd6\" deleted",
"oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage",
"oc delete deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pod -n openshift-storage | grep mon",
"rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep _<node_name>_",
"oc get pods -n openshift-storage -o wide | grep -i _<node_name>_",
"oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage",
"oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage",
"oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage",
"oc adm cordon _<node_name>_",
"oc get pods -A -o wide | grep -i _<node_name>_ | awk '{if (USD4 == \"Terminating\") system (\"oc -n \" USD1 \" delete pods \" USD2 \" --grace-period=0 \" \" --force \")}'",
"oc adm drain _<node_name>_ --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node _<new_node_name>_ cluster.ocs.openshift.io/openshift-storage=\"\"",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)",
"echo USDlocal_storage_project",
"openshift-local-storage",
"oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - **newnode.example.com** [...]",
"oc get -n USDlocal_storage_project localvolumeset",
"NAME AGE localblock 25h",
"oc edit -n USDlocal_storage_project localvolumeset localblock",
"[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - **newnode.example.com** [...]",
"oc get pv | grep localblock | grep Available",
"local-pv-551d950 512Gi RWO Delete Available localblock 26s",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> | oc create -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc get pv -L kubernetes.io/hostname | grep localblock | grep Released",
"local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1",
"oc delete pv _<persistent_volume>_",
"oc delete pv local-pv-d6bf175b",
"persistentvolume \"local-pv-d9c5cbd6\" deleted",
"oc get deployment --selector=app=rook-ceph-crashcollector,node_name=_<failed_node_name>_ -n openshift-storage",
"oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=_<failed_node_name>_ -n openshift-storage",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pod -n openshift-storage | grep mon",
"rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/replacing_nodes/openshift_data_foundation_deployed_using_local_storage_devices |
F.3. Creating New Logical Volumes for an Existing Cluster | F.3. Creating New Logical Volumes for an Existing Cluster To create new volumes, either volumes need to be added to a managed volume group on the node where it is already activated by the service, or the volume_list must be temporarily bypassed or overridden to allow for creation of the volumes until they can be prepared to be configured by a cluster resource. Note New logical volumes can be added only to existing volume groups managed by a cluster lvm resource if lv_name is not specified. The lvm resource agent allows for only a single logical volume within a volume group if that resource is managing volumes individually, rather than at a volume group level. To create a new logical volume when the service containing the volume group where the new volumes will live is already active, use the following procedure. The volume group should already be tagged on the node owning that service, so simply create the volumes with a standard lvcreate command on that node. Determine the current owner of the relevant service. On the node where the service is started, create the logical volume. Add the volume into the service configuration in whatever way is necessary. To create a new volume group entirely, use the following procedure. Create the volume group on one node using that node's name as a tag, which should be included in the volume_list . Specify any desired settings for this volume group as normal and specify --addtag nodename , as in the following example: Create volumes within this volume group as normal, otherwise perform any necessary administration on the volume group. When the volume group activity is complete, deactivate the volume group and remove the tag. Add the volume into the service configuration in whatever way is necessary. | [
"clustat",
"lvcreate -l 100%FREE -n lv2 myVG",
"vgcreate myNewVG /dev/mapper/mpathb --addtag node1.example.com",
"vgchange -an myNewVg --deltag node1.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-halvm-newvols-CA |
4.43. cyrus-sasl | 4.43. cyrus-sasl 4.43.1. RHBA-2011:1687 - cyrus-sasl bug fix and enhancement update Updated cyrus-sasl packages that fix two bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The cyrus-sasl packages contain the Cyrus implementation of the Simple Authentication and Security Layer (SASL), a method for adding authentication support to connection-based protocols. Bug Fixes BZ# 720451 Prior to this update, the ntlm plug-in did not work due to a code error. This update modifies the source code so that the plug-in now works as expected. BZ# 730242 Prior to this update, creating the user ID and the group ID of the saslauth daemon caused conflicts. This update corrects this behavior and now the saslauth daemon works as expected. BZ# 730246 Prior to this update, cyrus-sasl displayed redundant warnings during the compilation. With this update, cyrus-sasl has been modified and now works as expected. Enhancement BZ# 727274 This update adds support of partial Relocation Read-Only (RELRO) for the cyrus-sasl libraries. All users of cyrus-sasl are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/cyrus-sasl |
Chapter 3. Important Changes to External Kernel Parameters | Chapter 3. Important Changes to External Kernel Parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 7.7. These changes include added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. New kernel parameters usbcore.quirks = [USB] This parameter provides a list of quirk entries to augment the built-in usb core quirk list. The entries are separated by commas. Each entry has the form VendorID:ProductID:Flags . The IDs are 4-digit hex numbers and Flags is a set of letters. Each letter will change the built-in quirk; setting it if it is clear and clearing it if it is set. The letters have the following meanings: a = USB_QUIRK_STRING_FETCH_255 (string descriptors must not be fetched using a 255-byte read); b = USB_QUIRK_RESET_RESUME (device cannot resume correctly so reset it instead); c = USB_QUIRK_NO_SET_INTF (device cannot handle Set-Interface requests); d = USB_QUIRK_CONFIG_INTF_STRINGS (device cannot handle its Configuration or Interface strings); e = USB_QUIRK_RESET (device cannot be reset (e.g morph devices), do not use reset); f = USB_QUIRK_HONOR_BNUMINTERFACES (device has more interface descriptions than the bNumInterfaces count, and cannot handle talking to these interfaces); g = USB_QUIRK_DELAY_INIT (device needs a pause during initialization, after we read the device descriptor); h = USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL (For high speed and super speed interrupt endpoints, the USB 2.0 and USB 3.0 spec require the interval in microframes (1 microframe = 125 microseconds) to be calculated as interval = 2 ^ ( bInterval -1). Devices with this quirk report their bInterval as the result of this calculation instead of the exponent variable used in the calculation); i = USB_QUIRK_DEVICE_QUALIFIER (device cannot handle device_qualifier descriptor requests); j = USB_QUIRK_IGNORE_REMOTE_WAKEUP (device generates spurious wakeup, ignore remote wakeup capability); k = USB_QUIRK_NO_LPM (device cannot handle Link Power Management); l = USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL (Device reports its bInterval as linear frames instead of the USB 2.0 calculation); m = USB_QUIRK_DISCONNECT_SUSPEND (Device needs to be disconnected before suspend to prevent spurious wakeup); n = USB_QUIRK_DELAY_CTRL_MSG (Device needs a pause after every control message); The example entry: ppc_tm = [PPC] Disables Hardware Transactional Memory. Format: {"off"} cgroup.memory = [KNL] Passes options to the cgroup memory controller. Format: <string> nokmem - This option disables kernel memory accounting. mds = [X86,INTEL] Controls mitigation for the Micro-architectural Data Sampling (MDS) vulnerability. Certain CPUs are vulnerable to an exploit against CPU internal buffers which can forward information to a disclosure gadget under certain conditions. In vulnerable processors, the speculatively forwarded data can be used in a cache side channel attack, to access data to which the attacker does not have direct access. The options are: full - Enable MDS mitigation on vulnerable CPUs. full,nosmt - Enable MDS mitigation and disable Simultaneous multithreading (SMT) on vulnerable CPUs. off - Unconditionally disable MDS mitigation. Not specifying this option is equivalent to mds=full . mitigations = [X86,PPC,S390] Controls optional mitigations for CPU vulnerabilities. 
This is a set of curated, arch-independent options, each of which is an aggregation of existing arch-specific options. The options are: off - Disable all optional CPU mitigations. This improves system performance, but it may also expose users to several CPU vulnerabilities. Equivalent to: nopti [X86,PPC] nospectre_v1 [PPC] nobp=0 [S390] nospectre_v2 [X86,PPC,S390] spec_store_bypass_disable=off [X86,PPC] l1tf=off [X86] mds=off [X86] auto (default) - Mitigate all CPU vulnerabilities, but leave Simultaneous multithreading (SMT) enabled, even if it's vulnerable. This is for users who do not want to be surprised by SMT getting disabled across kernel upgrades, or who have other ways of avoiding SMT-based attacks. Equivalent to: (default behavior) auto,nosmt - Mitigate all CPU vulnerabilities, disabling Simultaneous multithreading (SMT) if needed. This is for users who always want to be fully mitigated, even if it means losing SMT. Equivalent to: l1tf=flush,nosmt [X86] mds=full,nosmt [X86] watchdog_thresh = [KNL] Sets the hard lockup detector stall duration threshold in seconds. The soft lockup detector threshold is set to twice the value. A value of 0 disables both lockup detectors. Default is 10 seconds. novmcoredd [KNL,KDUMP] Disables device dump. The device dump allows drivers to append dump data to vmcore so you can collect driver specified debug info. Drivers can append the data without any limit and this data is stored in memory, so this may cause significant memory stress. Disabling device dump can help save memory but the driver debug data will be no longer available. This parameter is only available when CONFIG_PROC_VMCORE_DEVICE_DUMP is set. Updated kernel parameters resource_alignment Specifies alignment and device to reassign aligned memory resources. Format: [<order of align>@][<domain>:]<bus>:<slot>.<func>[; ... ] [<order of align>@]pci:<vendor>:<device>\[:<subvendor>:<subdevice>][; ... ] If <order of align> is not specified, PAGE_SIZE is used as alignment. PCI-PCI bridge can be specified, if resource windows need to be expanded. irqaffinity = [SMP] Sets the default irq affinity mask. Format: <cpu number>,... ,<cpu number> <cpu number>-<cpu number> drivers (must be a positive range in ascending order) mixture <cpu number>,... ,<cpu number>-<cpu number> Drivers will use drivers' affinity masks for default interrupt assignment instead of placing them all on CPU0. The options are: auto (default) - Mitigate all CPU vulnerabilities, but leave Simultaneous multithreading (SMT) enabled, even if it is vulnerable. This is for users who do not want to be surprised by SMT getting disabled across kernel upgrades, or who have other ways of avoiding SMT-based attacks. Equivalent to: (default behavior) auto,nosmt - Mitigate all CPU vulnerabilities, disabling Simultaneous multithreading (SMT) if needed. This is for users who always want to be fully mitigated, even if it means losing SMT. Equivalent to: l1tf=flush,nosmt [X86] mds=full,nosmt [X86] New /proc/sys/net/core parameters bpf_jit_kallsyms If Berkeley Packet Filter Just in Time compiler is enabled, the compiled images are unknown addresses to the kernel. It means they neither show up in traces nor in the /proc/kallsyms file. This enables export of these addresses, which can be used for debugging/tracing. If the bpf_jit_harden parameter is enabled, this feature is disabled. Possible values are: 0 - Disable Just in Time (JIT) kallsyms export (default value). 1 - Enable Just in Time (JIT) kallsyms export for privileged users only. 
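The boot parameters above are set on the kernel command line, and the new /proc/sys/net/core entry is set through sysctl. The following is a hedged illustration only; the parameter names come from the descriptions above, but verify them against your kernel documentation before applying them to a production system:
# Append the mitigation parameter to the boot entries of all installed kernels (grubby is the standard RHEL 7 tool for this):
grubby --update-kernel=ALL --args="mitigations=auto,nosmt"
# Expose JIT-compiled BPF image addresses to privileged users, and persist the setting across reboots:
sysctl -w net.core.bpf_jit_kallsyms=1
echo "net.core.bpf_jit_kallsyms = 1" > /etc/sysctl.d/90-bpf-jit-kallsyms.conf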
Updated /proc/sys/fs parameters dentry-state Dentries are dynamically allocated and deallocated. From linux/include/linux/dcache.h : The nr_dentry number shows the total number of dentries allocated (active + unused). The nr_unused number shows the number of dentries that are not actively used, but are saved in the least recently used (LRU) list for future reuse. The age_limit number is the age in seconds after which dcache entries can be reclaimed when memory is short, and the want_pages number is nonzero when the shrink_dcache_pages() function has been called and the dcache is not pruned yet. The nr_negative number shows the number of unused dentries that are also negative dentries, which do not map to any files. Instead, they help speed up rejection of non-existing files provided by the users.
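To inspect these counters on a running system, read the file directly; the fields appear in the same order as the dentry_stat_t structure quoted below:
cat /proc/sys/fs/dentry-state
# Output field order: nr_dentry, nr_unused, age_limit, want_pages, nr_negative, dummy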
"quirks=0781:5580:bk,0a5c:5834:gij",
"struct dentry_stat_t dentry_stat { int nr_dentry; int nr_unused; int age_limit; (age in seconds) int want_pages; (pages requested by system) int nr_negative; (# of unused negative dentries) int dummy; (Reserved for future use) };"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.7_release_notes/kernel_parameters_changes |
Chapter 15. Security | Chapter 15. Security The SELinux user space packages rebased to version 2.5 The SELinux user space packages have been upgraded to upstream version 2.5, which provides a number of enhancements, bug fixes, and performance improvements over the version. The most important new features in the SELinux userspace 2.5 include: The new SELinux module store supports priorities. The priority concept provides an ability to override a system module with a module of a higher priority. SELinux Common Intermediate Language (CIL) provides clear and simple syntax that is easy to read, parse, and to generate by high-level compilers, analysis tools, and policy generation tools. Time-consuming SELinux operations, such as policy installations or loading new policy modules, are now significantly faster. Note: The default location of the SELinux modules remains in the /etc/selinux/ directory in Red Hat Enterprise Linux 7, whereas the upstream version uses /var/lib/selinux/ . To change this location for migration, set the store-root= option in the /etc/selinux/semanage.conf file. (BZ#1297815) scap-workbench rebased to version 1.1.2 The scap-workbench package has been rebased to version 1.1.2, which provides a new SCAP Security Guide integration dialog. The dialog helps the administrator choose a product that needs to be scanned instead of choosing content files. The new version also offers a number of performance and user-experience improvements, including improved rule-searching in the tailoring window, the possibility to fetch remote resources in SCAP content using the GUI, and the dry-run feature. The dry-run feature enables to user to get oscap command-line arguments to the diagnostics window instead of running the scan. (BZ#1202854) openscap rebased to version 1.2.10 The OpenSCAP suite that enables integration of the Security Content Automation Protocol (SCAP) line of standards has been rebased to version 1.2.10, the latest upstream version. The openscap packages provide the OpenSCAP library and the oscap utility. Most notably, this update adds support for scanning containers using the atomic scan command. In addition, this update provides the following enhancements: oscap-vm , a tool for offline scanning of virtual machines oscap-chroot , a tool for offline scanning of file systems mounted at arbitrary paths Full support for Open Vulnerability and Assessment Language (OVAL) 5.11.1 Native support for remote .xml.bz2 files Grouping HTML report results according to various criteria HTML report improvements Verbose mode for debugging OVAL evaluation (BZ# 1278147 ) firewalld rebased to version 0.4.3.2 The firewalld packages have been upgraded to upstream version 0.4.3.2 which provides a number of enhancements and bug fixes over the version. Notable changes include the following: Performance improvements: firewalld starts and restarts significantly faster thanks to the new transaction model which groups together rules that are applied simultaneously. This model uses the iptables restore commands. Also, the firewall-cmd , firewall-offline-cmd , firewall-config , and firewall-applet tools have been improved with performance in mind. The improved management of connections, interfaces and sources: The user can now control zone settings for connections in NetworkManager . In addition, zone settings for interfaces are also controlled by firewalld and in the ifcfg file. Default logging option: With the new LogDenied setting, the user can easily debug and log denied packets. 
ipset support: firewalld now supports several IP sets as zone sources, within rich and direct rules. Note that in Red Hat Enterprise Linux 7.3, firewalld supports only the following ipset types: hash:net hash:ip (BZ# 1302802 ) audit rebased to version 2.6.5 The audit packages contain the user space utilities for storing and searching the audit records which have been generated by the audit subsystem in the Linux kernel. The audit packages have been upgraded to upstream version 2.6.5, which provides a number of enhancements and bug fixes over the previous version. Notable changes include the following: The audit daemon now includes a new flush technique called incremental_async , which improves its performance approximately 90 times. The audit system now has many more rules that can be composed into an audit policy. Some of these new rules include support for the Security Technical Implementation Guide (STIG), PCI Data Security Standard, and other capabilities such as auditing the occurrence of 32-bit syscalls, significant power usage, or module loading. The auditd.conf configuration file and the auditctl command now support many new options. The audit system now supports a new log format called enriched , which resolves UID, GID, syscall, architecture, and network addresses. This will aid in log analysis on a machine that differs from where the log was generated. (BZ# 1296204 ) MACsec (IEEE 802.1AE) is now supported With this update, Media Access Control Security (MACsec) encryption over Ethernet is supported. MACsec encrypts and authenticates all traffic in LANs with the GCM-AES-128 algorithm. (BZ#1104151) The rsyslog RELP module now binds to a specific rule set With this update, the rsyslog Reliable Event-Logging Protocol (RELP) module is now capable of binding to a specific rule set with each input instance. The input() instance rule set has higher priority than the module() rule set. (BZ# 1223566 ) rsyslog imfile module now supports a wildcard file name The rsyslog packages provide an enhanced, multi-threaded syslog daemon. With this update, the rsyslog imfile module supports using wildcards inside file names and adding the actual file name to the message's metadata. This is useful when rsyslog needs to read logs under a directory and does not know the names of files in advance. (BZ# 1303617 ) Syscalls in audit.log are now converted to text With this update, auditd converts system call numbers to their names prior to forwarding them to the syslog daemon through the audispd event multiplexor. (BZ#1127343) audit subsystem can now filter by process name The user can now audit by executable name (with the -F exe=<path-to-executable> option), which allows expression of many new audit rules. You can use this functionality to detect events such as the bash shell opening a network connection. (BZ#1135562) mod_security_crs rebased to version 2.2.9 The mod_security_crs package has been upgraded to upstream version 2.2.9, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: A new PHP rule (958977) to detect PHP exploits. A JS overrides file to identify successful XSS probes. New XSS detection rules. Fixed session-hijacking rules. (BZ# 1150614 ) opencryptoki rebased to version 3.5 The opencryptoki packages have been upgraded to version 3.5, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: The openCryptoki service automatically creates lock/ and log/ directories, if not present.
The PKCS#11 API supports hash-based message authentication code (HMAC) with SHA hashes in all tokens. The openCryptoki library provides dynamic tracing set by the OPENCRYPTOKI_TRACE_LEVEL environment variable. (BZ#1185421) gnutls now uses the central certificate store The gnutls packages provide the GNU Transport Layer Security (GnuTLS) library, which implements cryptographic algorithms and protocols such as SSL, TLS, and DTLS. With this update, GnuTLS uses the central certificate store of Red Hat Enterprise Linux through the p11-kit packages. Certificate Authority (CA) updates, as well as certificate black lists, are now visible to applications at runtime. (BZ# 1110750 ) The firewall-cmd command can now provide additional details With this update, firewalld shows details of a service, zone, and ICMP type. Additionally, the user can list the full path to the source XML file. The new options for firewall-cmd are: [--permanent] --info-zone=zone [--permanent] --info-service=service [--permanent] --info-icmptype=icmptype (BZ# 1147500 ) pam_faillock can now be configured with unlock_time=never The pam_faillock module now allows you to specify, using the unlock_time=never option, that the user authentication lock caused by multiple authentication failures should never expire. (BZ# 1273373 ) libica rebased to version 2.6.2 The libica packages have been updated to upstream version 2.6.2, which provides a number of bug fixes and enhancements over the previous version. Notably, this update adds support for generation of pseudorandom numbers, including enhanced support for Deterministic Random Bit Generator (DRBG), according to the updated security specification NIST SP 800-90A. (BZ#1274390) New lastlog options The lastlog utility now has the new --clear and --set options, which allow the system administrator to reset a user's lastlog entry to the never logged in value or to the current time. This means you can now re-enable user accounts previously locked due to inactivity. (BZ#1114081) libreswan rebased to version 3.15 Libreswan is an implementation of Internet Protocol Security (IPsec) and Internet Key Exchange (IKE) for Linux. The libreswan packages have been upgraded to upstream version 3.15, which provides a number of enhancements and bug fixes over the previous version. Notable changes include the following: The nonce size is increased to meet the RFC requirements when using the SHA2 algorithms. Libreswan now calls the NetworkManager helper in case of a connection error. All CRL distribution points in a certificate are now processed. Libreswan no longer tries to delete non-existing IPsec Security Associations (SAs). The pluto IKE daemon now has the CAP_DAC_READ_SEARCH capability. pluto no longer crashes when on-demand tunnels are used. pam_acct_mgmt is now properly set. A regression was fixed so that tunnels with keyingtries=0 try to establish the tunnel indefinitely. The delay before re-establishing the deleted tunnel that is configured to remain up is now less than one second. (BZ# 1389316 ) The SHA-3 implementation in nettle now conforms to FIPS 202 nettle is a cryptographic library that is designed to fit easily in almost any context. With this update, the Secure Hash Algorithm 3 (SHA-3) implementation has been updated to conform to the final Federal Information Processing Standard (FIPS) 202 draft. (BZ# 1252936 ) scap-security-guide rebased to version 0.1.30 The scap-security-guide project provides a guide for configuration of the system from the final system's security point of view.
The package has been upgraded to version 0.1.30. Notable improvements include: The NIST Committee on National Security Systems (CNSS) Instruction No. 1253 profile is now included and updated for Red Hat Enterprise Linux 7. The U.S. Government Commercial Cloud Services (C2S) profile inspired by the Center for Internet Security (CIS) benchmark is now provided. The remediation scripts are now included in benchmarks directly, and the external shell library is no longer necessary. The Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) profile for Red Hat Enterprise Linux 7 has been updated to be equal to the DISA STIG profile for Red Hat Enterprise Linux 6. The draft of the Criminal Justice Information Services (CJIS) Security Policy profile is now available for Red Hat Enterprise Linux 7. (BZ# 1390661 ) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/new_features_security |
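To make two of the capabilities described in this chapter concrete (auditing by executable name, plus the firewalld LogDenied and ipset support), the following is a hedged sketch only; the rule key, executable path, and ipset name are illustrative and are not taken from the chapter above:
# Audit rule using the new -F exe= filter: record outbound connect() calls made by bash.
auditctl -a always,exit -F arch=b64 -S connect -F exe=/usr/bin/bash -k bash-network
# firewalld: log denied packets and define an ipset of the supported hash:ip type as a zone source.
firewall-cmd --set-log-denied=all
firewall-cmd --permanent --new-ipset=blocklist --type=hash:ip
firewall-cmd --permanent --zone=drop --add-source=ipset:blocklist
firewall-cmd --reload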
Chapter 5. Installing Capsule on AWS | Chapter 5. Installing Capsule on AWS On your AWS environment, complete the following steps: Connect to the new instance. Install Capsule Server. For more information, see Installing Capsule Server . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/deploying_red_hat_satellite_on_amazon_web_services/installing_capsule_on_aws |
Chapter 5. Using the Nexus Repository Manager plugin | Chapter 5. Using the Nexus Repository Manager plugin The Nexus Repository Manager plugin displays the information about your build artifacts in your Developer Hub application. The build artifacts are available in the Nexus Repository Manager. Important The Nexus Repository Manager plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page. The Nexus Repository Manager is a front-end plugin that enables you to view the information about build artifacts. Prerequisites Your Developer Hub application is installed and running. You have installed the Nexus Repository Manager plugin. Procedure Open your Developer Hub application and select a component from the Catalog page. Go to the BUILD ARTIFACTS tab. The BUILD ARTIFACTS tab contains a list of build artifacts and related information, such as VERSION , REPOSITORY , REPOSITORY TYPE , MANIFEST , MODIFIED , and SIZE . | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/using_dynamic_plugins/using-the-nexus-repository-manager-plugin |
Chapter 3. Performing Additional Configuration on Satellite Server | Chapter 3. Performing Additional Configuration on Satellite Server 3.1. How to Configure Inter-Satellite Synchronization Red Hat Satellite uses Inter-Satellite Synchronization (ISS) to synchronize content between two Satellite Servers including those that are air-gapped. You can use ISS in cases such as: If you want to copy some but not all content from your Satellite Server to other Satellite Servers. For example, you have Content Views that your IT department consumes from Satellite Server, and you want to copy content from those Content Views to other Satellite Servers. If you want to copy all Library content from your Satellite Server to other Satellite Servers. For example, you have Products and repositories that your IT department consumes from Satellite Server in the Library, and you want to copy all Products and repositories in that organization to other Satellite Servers. Note You cannot use ISS to synchronize content from Satellite Server to Capsule Server. Capsule Server supports synchronization natively. For more information, see Capsule Server Overview in Planning for Red Hat Satellite . There are different ways of using ISS. The way you can use depends on your multi-server setup that can fall to one of the following scenarios. 3.1.1. ISS Network Sync in a Disconnected Scenario In a disconnected scenario, there is the following setup: The upstream Satellite Server is connected to the Internet. This server consumes content from the Red Hat Content Delivery Network (CDN) or custom sources. The downstream Satellite Server is completely isolated from all external networks. The downstream Satellite Server can communicate with a connected upstream Satellite Server over an internal network. Figure 3.1. The Satellite ISS Disconnected Scenario You can configure your downstream Satellite Server to synchronize content from the upstream Satellite Server over the network. See Section 3.2, "Configuring Satellite Server to Synchronize Content over a Network" . 3.1.2. ISS Export Sync in an Air-Gapped Scenario In an air-gapped scenario, there is the following setup: The upstream Satellite Server is connected to the Internet. This server consumes content from the Red Hat CDN or custom sources. The downstream Satellite Server is completely isolated from all external networks. The downstream Satellite Server does not have a network connection to a connected upstream Satellite Server. Figure 3.2. The Satellite ISS Air-Gapped Scenario The only way for an air-gapped downstream Satellite Server to receive content updates is by exporting payload from the upstream Satellite Server, bringing it physically to the downstream Satellite Server, and importing the payload. For more information, see Synchronizing Content Between Satellite Servers in the Content Management Guide . Configure your downstream Satellite Server to synchronize content using exports. See Section 3.3, "Configuring Satellite Server to Synchronize Content Using Exports" . 3.2. Configuring Satellite Server to Synchronize Content over a Network Configure a downstream Satellite Server to synchronize repositories from a connected upstream Satellite Server over HTTPS. Prerequisites A network connection exists between the upstream Satellite Server and the downstream Satellite Server. You imported the subscription manifest on both the upstream and downstream Satellite Server. On the upstream Satellite Server, you enabled the required repositories for the organization. 
The upstream user is an admin or has the following permissions: view_organizations view_products edit_organizations (to download the CA certificate) view_lifecycle_environments view_content_views On the downstream Satellite Server, you have imported the SSL certificate of the upstream Satellite Server using the contents of http:// upstream-satellite.example.com /pub/katello-server-ca.crt . For more information, see Importing SSL Certificates in the Content Management Guide . The downstream user is an admin or has the permissions to create product repositories and organizations. Procedure Navigate to Content > Subscriptions . Click the Manage Manifest button. Navigate to the CDN Configuration tab. Select the Network Sync tab. In the URL field, enter the address of the upstream Satellite Server. In the Username , enter your username for upstream login. In the Password , enter your password or personal access token for upstream login. In the Organization label field, enter the label of the upstream organization. Optional: In the Lifecycle Environment Label field, enter the label of the upstream lifecycle environment. Default is Library . Optional: In the Content view label field, enter the label of the upstream Content View. Default is Default_Organization_View . From the SSL CA Content Credential menu, select a CA certificate used by the upstream Satellite Server. Click Update . In the Satellite web UI, navigate to Content > Products . Click Sync Now to synchronize the repositories. You can also create a synchronization plan to ensure updates on a regular basis. For more information, see Creating a Synchronization Plan in the Content Management Guide . CLI Procedure Connect to your downstream Satellite Server using SSH. View information about the upstream CA certificate: Note the ID of the CA certificate for the step. Set CDN configuration to an upstream Satellite Server: The default lifecycle environment label is Library . The default Content View label is Default_Organization_View . 3.3. Configuring Satellite Server to Synchronize Content Using Exports If you deployed your downstream Satellite Server as air-gapped, configure your Satellite Server as such to avoid attempts to consume content from a network. Procedure In the Satellite web UI, navigate to Content > Subscriptions . Click the Manage Manifest button. Switch to the CDN Configuration tab. Select the Export Sync tab. Click Update . CLI Procedure Log in to your Satellite Server using SSH. Set CDN configuration to sync using exports: Additional Resources For more information about synchronizing content using exports, see How to Synchronize Content Using Export and Import in the Content Management Guide . 3.4. Importing Kickstart Repositories Kickstart repositories are not provided by the Content ISO image. To use Kickstart repositories in your disconnected Satellite, you must download a binary DVD ISO file for the version of Red Hat Enterprise Linux that you want to use and copy the Kickstart files to Satellite. To import Kickstart repositories for Red Hat Enterprise Linux 7, complete Section 3.4.1, "Importing Kickstart Repositories for Red Hat Enterprise Linux 7" . To import Kickstart repositories for Red Hat Enterprise Linux 8, complete Section 3.4.2, "Importing Kickstart Repositories for Red Hat Enterprise Linux 8" . 3.4.1. Importing Kickstart Repositories for Red Hat Enterprise Linux 7 To import Kickstart repositories for Red Hat Enterprise Linux 7, complete the following steps on Satellite. 
Procedure Navigate to the Red Hat Customer Portal at access.redhat.com and log in. In the upper left of the window, click Downloads . To the right of Red Hat Enterprise Linux 7 , click Versions 7 and below . From the Version list, select the required version of the Red Hat Enterprise Linux 7, for example 7.7. In the Download Red Hat Enterprise Linux window, locate the binary DVD version of the ISO image, for example, Red Hat Enterprise Linux 7.7 Binary DVD , and click Download Now . When the download completes, copy the ISO image to Satellite Server. On Satellite Server, create a mount point and temporarily mount the ISO image at that location: Create Kickstart directories: Copy the kickstart files from the ISO image: Add the following entries to the listing files: To the /var/www/html/pub/sat-import/content/dist/rhel/server/7/listing file, append the version number with a new line. For example, for the RHEL 7.7 ISO, append 7.7 . To the /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/listing file, append the architecture with a new line. For example, x86_64 . To the /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/listing file, append kickstart with a new line. Copy the .treeinfo files from the ISO image: If you do not plan to use the mounted binary DVD ISO image, unmount and remove the directory: In the Satellite web UI, enable the Kickstart repositories. 3.4.2. Importing Kickstart Repositories for Red Hat Enterprise Linux 8 To import Kickstart repositories for Red Hat Enterprise Linux 8, complete the following steps on Satellite. Procedure Navigate to the Red Hat Customer Portal at access.redhat.com and log in. In the upper left of the window, click Downloads . Click Red Hat Enterprise Linux 8 . In the Download Red Hat Enterprise Linux window, locate the binary DVD version of the ISO image, for example, Red Hat Enterprise Linux 8.1 Binary DVD , and click Download Now . When the download completes, copy the ISO image to Satellite Server. On Satellite Server, create a mount point and temporarily mount the ISO image at that location: Create directories for Red Hat Enterprise Linux 8 AppStream and BaseOS Kickstart repositories: Copy the kickstart files from the ISO image: Note that for BaseOS, you must also copy the contents of the /mnt/ iso /images/ directory. Add the following entries to the listing files: To the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/listing file, append kickstart with a new line. To the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/listing file, append kickstart with a new line: To the /var/www/html/pub/sat-import/content/dist/rhel8/listing file, append the version number with a new line. For example, for the RHEL 8.1 binary ISO, append 8.1 . Copy the .treeinfo files from the ISO image: Open the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart/treeinfo file for editing. In the [general] section, make the following changes: Change packagedir = AppStream/Packages to packagedir = Packages Change repository = AppStream to repository = . Change variant = AppStream to variant = BaseOS Change variants = AppStream,BaseOS to variants = BaseOS In the [tree] section, change variants = AppStream,BaseOS to variants = BaseOS . In the [variant-BaseOS] section, make the following changes: Change packages = BaseOS/Packages to packages = Packages Change repository = BaseOS to repository = . Delete the [media] and [variant-AppStream] sections. Save and close the file. 
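If you prefer to script the key substitutions above, a minimal sed sketch follows; it assumes the RHEL 8.1 paths used in this procedure and covers only the line-level changes, so the [media] and [variant-AppStream] section deletions still need to be made by hand:
TREEINFO=/var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart/treeinfo
sed -i \
    -e 's|^packagedir = AppStream/Packages|packagedir = Packages|' \
    -e 's|^repository = AppStream|repository = .|' \
    -e 's|^variant = AppStream|variant = BaseOS|' \
    -e 's|^variants = AppStream,BaseOS|variants = BaseOS|' \
    -e 's|^packages = BaseOS/Packages|packages = Packages|' \
    -e 's|^repository = BaseOS|repository = .|' \
    "$TREEINFO"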
Verify that the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart/treeinfo file has the following format: Open the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart/treeinfo file for editing. In the [general] section, make the following changes: Change packagedir = AppStream/Packages to packagedir = Packages Change repository = AppStream to repository = . Change variants = AppStream,BaseOS to variants = AppStream In the [tree] section, change variants = AppStream,BaseOS to variants = AppStream In the [variant-AppStream] section, make the following changes: Change packages = AppStream/Packages to packages = Packages Change repository = AppStream to repository = . Delete the following sections from the file: [checksums] , [images-x86_64] , [images-xen] , [media] , [stage2] , [variant-BaseOS] . Save and close the file. Verify that the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart/treeinfo file has the following format: If you do not plan to use the mounted binary DVD ISO image, unmount and remove the directory: In the Satellite web UI, enable the Kickstart repositories. 3.5. Enabling the Satellite Client 6 Repository The Satellite Client 6 repository provides the katello-agent , katello-host-tools , and puppet packages for clients registered to Satellite Server. You must enable the repository for each Red Hat Enterprise Linux version that you need to manage hosts. Continue with a procedure below according to the operating system version for which you want to enable the Satellite Client 6 repository. Red Hat Enterprise Linux 9 & Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 7 & Red Hat Enterprise Linux 6 3.5.1. Red Hat Enterprise Linux 9 & Red Hat Enterprise Linux 8 To use the CLI instead of the Satellite web UI, see the procedure relevant for your Red Hat Enterprise Linux version: CLI procedure for Red Hat Enterprise Linux 9 CLI procedure for Red Hat Enterprise Linux 8 Prerequisites Ensure that you import all content ISO images that you require into Satellite Server. Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . In the Available Repositories pane, enable the Recommended Repositories to get the list of repositories. Click Red Hat Satellite Client 6 for RHEL 9 x86_64 (RPMs) or Red Hat Satellite Client 6 for RHEL 8 x86_64 (RPMs) to expand the repository set. For the x86_64 architecture, click the + icon to enable the repository. If the Satellite Client 6 items are not visible, it may be because they are not included in the Red Hat Subscription Manifest obtained from the Customer Portal. To correct that, log in to the Customer Portal, add these repositories, download the Red Hat Subscription Manifest and import it into Satellite. For more information, see Managing Red Hat Subscriptions in Managing Content . Enable the Satellite Client 6 repository for every supported major version of Red Hat Enterprise Linux running on your hosts. After enabling a Red Hat repository, a Product for this repository is automatically created. CLI procedure for Red Hat Enterprise Linux 9 Enable the Satellite Client 6 repository using the hammer repository-set enable command: CLI procedure for Red Hat Enterprise Linux 8 Enable the Satellite Client 6 repository using the hammer repository-set enable command: 3.5.2. 
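Before comparing against the full expected output below, you can spot-check just the keys you edited; this grep is illustrative:
grep -E '^(packagedir|repository|variant|variants|packages) ' /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart/treeinfo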
Red Hat Enterprise Linux 7 & Red Hat Enterprise Linux 6 Note You require Red Hat Enterprise Linux Extended Life Cycle Support (ELS) Add-on subscription to enable the repositories of Red Hat Enterprise Linux 6. For more information, see Red Hat Enterprise Linux Extended Life Cycle Support (ELS) Add-on guide. To use the CLI instead of the Satellite web UI, see the procedure relevant for your Red Hat Enterprise Linux version: CLI procedure for Red Hat Enterprise Linux 7 CLI procedure for Red Hat Enterprise Linux 6 Prerequisites Ensure that you import all content ISO images that you require into Satellite Server. .Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . In the Available Repositories pane, enable the Recommended Repositories to get the list of repositories. In the Available Repositories pane, click on Satellite Client 6 (for RHEL 7 Server) (RPMs) or Satellite Client 6 (for RHEL 6 Server - ELS) (RPMs) to expand the repository set. If the Satellite Client 6 items are not visible, it may be because they are not included in the Red Hat Subscription Manifest obtained from the Customer Portal. To correct that, log in to the Customer Portal, add these repositories, download the Red Hat Subscription Manifest and import it into Satellite. For more information, see Managing Red Hat Subscriptions in Managing Content . For the x86_64 architecture, click the + icon to enable the repository. Enable the Satellite Client 6 repository for every supported major version of Red Hat Enterprise Linux running on your hosts. After enabling a Red Hat repository, a Product for this repository is automatically created. CLI procedure for Red Hat Enterprise Linux 7 Enable the Satellite Client 6 repository using the hammer repository-set enable command: CLI procedure for Red Hat Enterprise Linux 6 Enable the Satellite Client 6 repository using the hammer repository-set enable command: 3.6. Synchronizing the Satellite Client 6 Repository Use this section to synchronize the Satellite Client 6 repository from the Red Hat Content Delivery Network (CDN) to your Satellite. This repository provides the katello-agent , katello-host-tools , and puppet packages for clients registered to Satellite Server. Continue with a procedure below according to the operating system version for which you want to synchronize the Satellite Client 6 repository. Red Hat Enterprise Linux 9 & Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 7 & Red Hat Enterprise Linux 6 3.6.1. Red Hat Enterprise Linux 9 & Red Hat Enterprise Linux 8 To use the CLI instead of the Satellite web UI, see the procedure relevant for your Red Hat Enterprise Linux version: CLI procedure for Red Hat Enterprise Linux 9 CLI procedure for Red Hat Enterprise Linux 8 Procedure In the Satellite web UI, navigate to Content > Sync Status . Click the arrow to the Red Hat Enterprise Linux for x86_64 product to view available content. Select Red Hat Satellite Client 6 for RHEL 9 x86_64 RPMs or Red Hat Satellite Client 6 for RHEL 8 x86_64 RPMs whichever is applicable. Click Synchronize Now . CLI procedure for Red Hat Enterprise Linux 9 Synchronize your Satellite Client 6 repository using the hammer repository synchronize command: CLI procedure for Red Hat Enterprise Linux 8 Synchronize your Satellite Client 6 repository using the hammer repository synchronize command: 3.6.2. 
Red Hat Enterprise Linux 7 & Red Hat Enterprise Linux 6 Note You require Red Hat Enterprise Linux Extended Life Cycle Support (ELS) Add-on subscription to synchronize the repositories of Red Hat Enterprise Linux 6. For more information, see Red Hat Enterprise Linux Extended Life Cycle Support (ELS) Add-on guide. To use the CLI instead of the Satellite web UI, see the procedure relevant for your Red Hat Enterprise Linux version: CLI procedure for Red Hat Enterprise Linux 7 CLI procedure for Red Hat Enterprise Linux 6 Procedure In the Satellite web UI, navigate to Content > Sync Status . Click the arrow to the Red Hat Enterprise Linux Server or Red Hat Enterprise Linux Server - Extended Life Cycle Support whichever product is applicable to view available content. Select Red Hat Satellite Client 6 (for RHEL 7 Server) RPMs x86_64 or Red Hat Satellite Client 6 for RHEL 6 Server - ELS RPMs x86_64 based on your operating system version. Click Synchronize Now . CLI procedure for Red Hat Enterprise Linux 7 Synchronize your Satellite Client 6 repository using the hammer repository synchronize command: CLI procedure for Red Hat Enterprise Linux 6 Synchronize your Satellite Client 6 repository using the hammer repository synchronize command: 3.7. Enabling Power Management on Managed Hosts To perform power management tasks on managed hosts using the intelligent platform management interface (IPMI) or a similar protocol, you must enable the baseboard management controller (BMC) module on Satellite Server. Prerequisites All managed hosts must have a network interface of BMC type. Satellite Server uses this NIC to pass the appropriate credentials to the host. For more information, see Adding a Baseboard Management Controller (BMC) Interface in the Managing Hosts guide. Procedure To enable BMC, enter the following command: 3.8. Configuring DNS, DHCP, and TFTP on Satellite Server To configure the DNS, DHCP, and TFTP services on Satellite Server, use the satellite-installer command with the options appropriate for your environment. To view a complete list of configurable options, enter the satellite-installer --scenario satellite --help command. Any changes to the settings require entering the satellite-installer command again. You can enter the command multiple times and each time it updates all configuration files with the changed values. To use external DNS, DHCP, and TFTP services instead, see Chapter 4, Configuring Satellite Server with External Services . Adding Multihomed DHCP details If you want to use Multihomed DHCP, you must inform the installer. Prerequisites Ensure that the following information is available to you: DHCP IP address ranges DHCP gateway IP address DHCP nameserver IP address DNS information TFTP server name Use the FQDN instead of the IP address where possible in case of network changes. Contact your network administrator to ensure that you have the correct settings. Procedure Enter the satellite-installer command with the options appropriate for your environment. The following example shows configuring full provisioning services: You can monitor the progress of the satellite-installer command displayed in your prompt. You can view the logs in /var/log/foreman-installer/satellite.log . You can view the settings used, including the initial_admin_password parameter, in the /etc/foreman-installer/scenarios.d/satellite-answers.yaml file. For more information about configuring DHCP, DNS, and TFTP services, see Configuring Network Services in the Provisioning guide. 3.9. 
Disabling DNS, DHCP, and TFTP for Unmanaged Networks If you want to manage TFTP, DHCP, and DNS services manually, you must prevent Satellite from maintaining these services on the operating system and disable orchestration to avoid DHCP and DNS validation errors. However, Satellite does not remove the back-end services on the operating system. Procedure On Satellite Server, enter the following command: In the Satellite web UI, navigate to Infrastructure > Subnets and select a subnet. Click the Capsules tab and clear the DHCP Capsule , TFTP Capsule , and Reverse DNS Capsule fields. In the Satellite web UI, navigate to Infrastructure > Domains and select a domain. Clear the DNS Capsule field. Optional: If you use a DHCP service supplied by a third party, configure your DHCP server to pass the following options: For more information about DHCP options, see RFC 2132 . Note Satellite does not perform orchestration when a Capsule is not set for a given subnet and domain. When enabling or disabling Capsule associations, orchestration commands for existing hosts can fail if the expected records and configuration files are not present. When associating a Capsule to turn orchestration on, ensure the required DHCP and DNS records as well as the TFTP files are in place for the existing Satellite hosts in order to prevent host deletion failures in the future. 3.10. Configuring Satellite Server for Outgoing Emails To send email messages from Satellite Server, you can use either an SMTP server, or the sendmail command. Prerequisite Some SMTP servers with anti-spam protection or grey-listing features are known to cause problems. To setup outgoing email with such a service either install and configure a vanilla SMTP service on Satellite Server for relay or use the sendmail command instead. Procedure In the Satellite web UI, navigate to Administer > Settings . Click the Email tab and set the configuration options to match your preferred delivery method. The changes have an immediate effect. The following example shows the configuration options for using an SMTP server: Table 3.1. Using an SMTP server as a delivery method Name Example value Delivery method SMTP SMTP address smtp.example.com SMTP authentication login SMTP HELO/EHLO domain example.com SMTP password password SMTP port 25 SMTP username [email protected] The SMTP username and SMTP password specify the login credentials for the SMTP server. The following example uses gmail.com as an SMTP server: Table 3.2. Using gmail.com as an SMTP server Name Example value Delivery method SMTP SMTP address smtp.gmail.com SMTP authentication plain SMTP HELO/EHLO domain smtp.gmail.com SMTP enable StartTLS auto Yes SMTP password password SMTP port 587 SMTP username user @gmail.com The following example uses the sendmail command as a delivery method: Table 3.3. Using sendmail as a delivery method Name Example value Delivery method Sendmail Sendmail location /usr/sbin/sendmail Sendmail arguments -i For security reasons, both Sendmail location and Sendmail argument settings are read-only and can be only set in /etc/foreman/settings.yaml . Both settings currently cannot be set via satellite-installer . For more information see the sendmail 1 man page. If you decide to send email using an SMTP server which uses TLS authentication, also perform one of the following steps: Mark the CA certificate of the SMTP server as trusted. To do so, execute the following commands on Satellite Server: Where mailca.crt is the CA certificate of the SMTP server. 
Alternatively, in the Satellite web UI, set the SMTP enable StartTLS auto option to No . Click Test email to send a test message to the user's email address to confirm the configuration is working. If a message fails to send, the Satellite web UI displays an error. See the log at /var/log/foreman/production.log for further details. Note For information on configuring email notifications for individual users or user groups, see Configuring Email Notifications in Administering Red Hat Satellite . 3.11. Configuring Satellite Server with a Custom SSL Certificate By default, Red Hat Satellite uses a self-signed SSL certificate to enable encrypted communications between Satellite Server, external Capsule Servers, and all hosts. If you cannot use a Satellite self-signed certificate, you can configure Satellite Server to use an SSL certificate signed by an external certificate authority (CA). When you configure Red Hat Satellite with custom SSL certificates, you must fulfill the following requirements: You must use the privacy-enhanced mail (PEM) encoding for the SSL certificates. You must not use the same SSL certificate for both Satellite Server and Capsule Server. The same CA must sign certificates for Satellite Server and Capsule Server. An SSL certificate must not also be a CA certificate. An SSL certificate must include a subject alt name (SAN) entry that matches the common name (CN). An SSL certificate must be allowed for Key Encipherment using a Key Usage extension. An SSL certificate must not have a shortname as the CN. You must not set a passphrase for the private key. To configure your Satellite Server with a custom certificate, complete the following procedures: Section 3.11.1, "Creating a Custom SSL Certificate for Satellite Server" Section 3.11.2, "Deploying a Custom SSL Certificate to Satellite Server" Section 3.11.3, "Deploying a Custom SSL Certificate to Hosts" If you have external Capsule Servers registered to Satellite Server, configure them with custom SSL certificates. For more information, see Configuring Capsule Server with a Custom SSL Certificate in Installing Capsule Server . 3.11.1. Creating a Custom SSL Certificate for Satellite Server Use this procedure to create a custom SSL certificate for Satellite Server. If you already have a custom SSL certificate for Satellite Server, skip this procedure. Procedure To store all the source certificate files, create a directory that is accessible only to the root user: Create a private key with which to sign the certificate signing request (CSR). Note that the private key must be unencrypted. If you use a password-protected private key, remove the private key password. If you already have a private key for this Satellite Server, skip this step. Create the /root/satellite_cert/openssl.cnf configuration file for the CSR and include the following content: Generate CSR: 1 Path to the private key. 2 Path to the configuration file. 3 Path to the CSR to generate. Send the certificate signing request to the certificate authority (CA). The same CA must sign certificates for Satellite Server and Capsule Server. When you submit the request, specify the lifespan of the certificate. The method for sending the certificate request varies, so consult the CA for the preferred method. In response to the request, you can expect to receive a CA bundle and a signed certificate, in separate files. 3.11.2. 
Deploying a Custom SSL Certificate to Satellite Server Use this procedure to configure your Satellite Server to use a custom SSL certificate signed by a Certificate Authority. The katello-certs-check command validates the input certificate files and returns the commands necessary to deploy a custom SSL certificate to Satellite Server. Important Do not store the SSL certificates or .tar bundles in /tmp or /var/tmp directory. The operating system removes files from these directories periodically. As a result, satellite-installer fails to execute while enabling features or upgrading Satellite Server. Procedure Validate the custom SSL certificate input files. Note that for the katello-certs-check command to work correctly, Common Name (CN) in the certificate must match the FQDN of Satellite Server. 1 Path to Satellite Server certificate file that is signed by a Certificate Authority. 2 Path to the private key that was used to sign Satellite Server certificate. 3 Path to the Certificate Authority bundle. If the command is successful, it returns two satellite-installer commands, one of which you must use to deploy a certificate to Satellite Server. Example output of katello-certs-check Note that you must not access or modify /root/ssl-build . From the output of the katello-certs-check command, depending on your requirements, enter the satellite-installer command that installs a new Satellite with custom SSL certificates or updates certificates on a currently running Satellite. If you are unsure which command to run, you can verify that Satellite is installed by checking if the file /etc/foreman-installer/scenarios.d/.installed exists. If the file exists, run the second satellite-installer command that updates certificates. Important satellite-installer needs the certificate archive file after you deploy the certificate. Do not modify or delete it. It is required, for example, when upgrading Satellite Server. On a computer with network access to Satellite Server, navigate to the following URL: https://satellite.example.com . In your browser, view the certificate details to verify the deployed certificate. 3.11.3. Deploying a Custom SSL Certificate to Hosts After you configure Satellite Server to use a custom SSL certificate, you must also install the katello-ca-consumer package on every host that is registered to this Satellite Server. Procedure On each host, install the katello-ca-consumer package: 3.12. Using External Databases with Satellite As part of the installation process for Red Hat Satellite, the satellite-installer command installs PostgreSQL databases on the same server as Satellite. In certain Satellite deployments, using external databases instead of the default local databases can help with the server load. Red Hat does not provide support or tools for external database maintenance. This includes backups, upgrades, and database tuning. You must have your own database administrator to support and maintain external databases. To create and use external databases for Satellite, you must complete the following procedures: Section 3.12.2, "Preparing a Host for External Databases" . Prepare a Red Hat Enterprise Linux 8 or Red Hat Enterprise Linux 7 server to host the external databases. Section 3.12.3, "Installing PostgreSQL" . Prepare PostgreSQL with databases for Satellite, Candlepin and Pulp with dedicated users owning them. Section 3.12.4, "Configuring Satellite Server to use External Databases" . 
Edit the parameters of satellite-installer to point to the new databases, and run satellite-installer . 3.12.1. PostgreSQL as an External Database Considerations Foreman, Katello, and Candlepin use the PostgreSQL database. If you want to use PostgreSQL as an external database, the following information can help you decide if this option is right for your Satellite configuration. Satellite supports PostgreSQL version 12. Advantages of External PostgreSQL: Increase in free memory and free CPU on Satellite Flexibility to set shared_buffers on the PostgreSQL database to a high number without the risk of interfering with other services on Satellite Flexibility to tune the PostgreSQL server's system without adversely affecting Satellite operations Disadvantages of External PostgreSQL Increase in deployment complexity that can make troubleshooting more difficult The external PostgreSQL server is an additional system to patch and maintain If either Satellite or the PostgreSQL database server suffers a hardware or storage failure, Satellite is not operational If there is latency between the Satellite server and database server, performance can suffer If you suspect that the PostgreSQL database on your Satellite is causing performance problems, use the information in Satellite 6: How to enable postgres query logging to detect slow running queries to determine if you have slow queries. Queries that take longer than one second are typically caused by performance issues with large installations, and moving to an external database might not help. If you have slow queries, contact Red Hat Support. 3.12.2. Preparing a Host for External Databases Install a freshly provisioned system with the latest Red Hat Enterprise Linux 8 or Red Hat Enterprise Linux 7 server to host the external databases. Subscriptions for Red Hat Software Collections and Red Hat Enterprise Linux do not provide the correct service level agreement for using Satellite with external databases. You must also attach a Satellite subscription to the base operating system that you want to use for the external databases. Prerequisites The prepared host must meet Satellite's Storage Requirements . Procedure Use the instructions in Attaching the Satellite Infrastructure Subscription to attach a Satellite subscription to your server. Disable all repositories and enable only the following repositories: For Red Hat Enterprise Linux 7: For Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 8, enable the following modules: Note Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause installation process failure, hence can be ignored safely. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Life Cycle . 3.12.3. Installing PostgreSQL You can install only the same version of PostgreSQL that is installed with the satellite-installer tool during an internal database installation. You can install PostgreSQL using Red Hat Enterprise Linux 8 or Red Hat Enterprise Linux Server 7 repositories. Satellite supports PostgreSQL version 12. Installing PostgreSQL on Red Hat Enterprise Linux 8 Installing PostgreSQL on Red Hat Enterprise Linux 7 3.12.3.1. 
Installing PostgreSQL on Red Hat Enterprise Linux 8 Procedure To install PostgreSQL, enter the following command: To initialize PostgreSQL, enter the following command: Edit the /var/lib/pgsql/data/postgresql.conf file: Remove the # and edit to listen to inbound connections: Edit the /var/lib/pgsql/data/pg_hba.conf file: Add the following line to the file: To start, and enable PostgreSQL service, enter the following commands: Open the postgresql port on the external PostgreSQL server: Switch to the postgres user and start the PostgreSQL client: Create three databases and dedicated roles: one for Satellite, one for Candlepin, and one for Pulp: Exit the postgres user: From Satellite Server, test that you can access the database. If the connection succeeds, the commands return 1 . 3.12.3.2. Installing PostgreSQL on Red Hat Enterprise Linux 7 Procedure To install PostgreSQL, enter the following command: To initialize PostgreSQL, enter the following command: Edit the /var/opt/rh/rh-postgresql12/lib/pgsql/data/postgresql.conf file: Remove the # and edit to listen to inbound connections: Edit the /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf file: Add the following line to the file: To start, and enable PostgreSQL service, enter the following commands: Open the postgresql port on the external PostgreSQL server: Switch to the postgres user and start the PostgreSQL client: Create three databases and dedicated roles: one for Satellite, one for Candlepin, and one for Pulp: Exit the postgres user: From Satellite Server, test that you can access the database. If the connection succeeds, the commands return 1 . 3.12.4. Configuring Satellite Server to use External Databases Use the satellite-installer command to configure Satellite to connect to an external PostgreSQL database. Prerequisite You have installed and configured a PostgreSQL database on a Red Hat Enterprise Linux server. Procedure To configure the external databases for Satellite, enter the following command: To enable the Secure Sockets Layer (SSL) protocol for these external databases, add the following options: | [
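The exact installer options referenced above vary between Satellite versions. The following is a hedged sketch only: the option names are based on commonly used Satellite 6 installer flags, postgres.example.com and all passwords are placeholders, and every option should be verified against satellite-installer --scenario satellite --help before use:
# Point Satellite at the external Foreman, Candlepin, and Pulp databases (option names are assumptions; confirm with --help).
satellite-installer --scenario satellite \
    --foreman-db-host postgres.example.com \
    --foreman-db-database foreman \
    --foreman-db-username foreman \
    --foreman-db-password Foreman_Password \
    --katello-candlepin-db-host postgres.example.com \
    --katello-candlepin-db-name candlepin \
    --katello-candlepin-db-user candlepin \
    --katello-candlepin-db-password Candlepin_Password \
    --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com \
    --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore \
    --foreman-proxy-content-pulpcore-postgresql-user pulp \
    --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password
# SSL-related options, also assumptions to confirm with --help, for example:
#   --foreman-db-sslmode verify-full
#   --foreman-db-root-cert /etc/pki/tls/certs/postgres-ca.pem
#   --katello-candlepin-db-ssl true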
"hammer content-credential show --name=\" My_Upstream_CA_Cert \" --organization=\" My_Downstream_Organization \"",
"hammer organization configure-cdn --name=\" My_Downstream_Organization \" --type=network_sync --url https:// upstream-satellite.example.com --username upstream_username --password upstream_password --ssl-ca-credential-id \" My_Upstream_CA_Cert_ID\" \\ --upstream-organization-label=\"_My_Upstream_Organization \" [--upstream-lifecycle-environment-label=\" My_Lifecycle_Environment \"] [--upstream-content-view-label=\" My_Content_View \"]",
"hammer organization configure-cdn --name=\" My_Organization \" --type=export_sync",
"mkdir /mnt/ iso mount -o loop rhel-binary-dvd.iso /mnt/ iso",
"mkdir --parents /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/kickstart/",
"cp -a /mnt/ iso /* /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/kickstart/",
"cp /mnt/ iso /.treeinfo /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/kickstart/treeinfo",
"umount /mnt/ iso rmdir /mnt/ iso",
"mkdir /mnt/ iso mount -o loop rhel-binary-dvd.iso /mnt/ iso",
"mkdir --parents /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart mkdir --parents /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart",
"cp -a /mnt/ iso /AppStream/* /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart cp -a /mnt/ iso /BaseOS/* /mnt/ iso /images/ /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart",
"cp /mnt/ iso /.treeinfo /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart/treeinfo cp /mnt/ iso /.treeinfo /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart/treeinfo",
"[checksums] images/efiboot.img = sha256:9ad9beee4c906cd05d227a1be7a499c8d2f20b3891c79831325844c845262bb6 images/install.img = sha256:e246bf4aedfff3bb54ae9012f959597cdab7387aadb3a504f841bdc2c35fe75e images/pxeboot/initrd.img = sha256:a66e3c158f02840b19c372136a522177a2ab4bd91cb7269fb5bfdaaf7452efef images/pxeboot/vmlinuz = sha256:789028335b64ddad343f61f2abfdc9819ed8e9dfad4df43a2694c0a0ba780d16 [general] ; WARNING.0 = This section provides compatibility with pre-productmd treeinfos. ; WARNING.1 = Read productmd documentation for details about new format. arch = x86_64 family = Red Hat Enterprise Linux name = Red Hat Enterprise Linux 8.1.0 packagedir = Packages platforms = x86_64,xen repository = . timestamp = 1571146127 variant = BaseOS variants = BaseOS version = 8.1.0 [header] type = productmd.treeinfo version = 1.2 [images-x86_64] efiboot.img = images/efiboot.img initrd = images/pxeboot/initrd.img kernel = images/pxeboot/vmlinuz [images-xen] initrd = images/pxeboot/initrd.img kernel = images/pxeboot/vmlinuz [release] name = Red Hat Enterprise Linux short = RHEL version = 8.1.0 [stage2] mainimage = images/install.img [tree] arch = x86_64 build_timestamp = 1571146127 platforms = x86_64,xen variants = BaseOS [variant-BaseOS] id = BaseOS name = BaseOS packages = Packages repository = . type = variant uid = BaseOS",
"[general] ; WARNING.0 = This section provides compatibility with pre-productmd treeinfos. ; WARNING.1 = Read productmd documentation for details about new format. arch = x86_64 family = Red Hat Enterprise Linux name = Red Hat Enterprise Linux 8.1.0 packagedir = Packages platforms = x86_64,xen repository = . timestamp = 1571146127 variant = AppStream variants = AppStream version = 8.1.0 [header] type = productmd.treeinfo version = 1.2 [release] name = Red Hat Enterprise Linux short = RHEL version = 8.1.0 [tree] arch = x86_64 build_timestamp = 1571146127 platforms = x86_64,xen variants = AppStream [variant-AppStream] id = AppStream name = AppStream packages = Packages repository = . type = variant uid = AppStream",
"umount /mnt/ iso rmdir /mnt/ iso",
"hammer repository-set enable --basearch=\"x86_64\" --name \"Red Hat Satellite Client 6 for RHEL 9 x86_64 (RPMs)\" --organization \"My_Organization\" --product \"Red Hat Enterprise Linux for x86_64\"",
"hammer repository-set enable --basearch=\"x86_64\" --name \"Red Hat Satellite Client 6 for RHEL 8 x86_64 (RPMs)\" --organization \"My_Organization\" --product \"Red Hat Enterprise Linux for x86_64\"",
"hammer repository-set enable --basearch=\"x86_64\" --name \"Red Hat Satellite Client 6 (for RHEL 7 Server) (RPMs)\" --organization \"My_Organization\" --product \"Red Hat Enterprise Linux Server\"",
"hammer repository-set enable --basearch=\"x86_64\" --name \"Red Hat Satellite Client 6 (for RHEL 6 Server - ELS) (RPMs)\" --organization \"My_Organization\" --product \"Red Hat Enterprise Linux Server - Extended Life Cycle Support\"",
"hammer repository synchronize --name \"Red Hat Satellite Client 6 for RHEL 9 x86_64 RPMs\" --organization \"My_Organization\" --product \"Red Hat Enterprise Linux for x86_64\"",
"hammer repository synchronize --name \"Red Hat Satellite Client 6 for RHEL 8 x86_64 RPMs\" --organization \"My_Organization\" --product \"Red Hat Enterprise Linux for x86_64\"",
"hammer repository synchronize --async --name \"Red Hat Satellite Client 6 for RHEL 7 Server RPMs x86_64\" --organization \"My_Organization\" --product \"Red Hat Enterprise Linux Server\"",
"hammer repository synchronize --async --name \"Red Hat Satellite Client 6 for RHEL 6 Server - ELS RPMs x86_64\" --organization \"My_Organization\" --product \"Red Hat Enterprise Linux Server - Extended Life Cycle Support\"",
"satellite-installer --foreman-proxy-bmc \"true\" --foreman-proxy-bmc-default-provider \"freeipmi\"",
"satellite-installer --scenario satellite --foreman-proxy-dns true --foreman-proxy-dns-managed true --foreman-proxy-dns-interface eth0 --foreman-proxy-dns-zone example.com --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa --foreman-proxy-dhcp true --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-interface eth0 --foreman-proxy-dhcp-additional-interfaces eth1 --foreman-proxy-dhcp-additional-interfaces eth2 --foreman-proxy-dhcp-range \" 192.0.2.100 192.0.2.150 \" --foreman-proxy-dhcp-gateway 192.0.2.1 --foreman-proxy-dhcp-nameservers 192.0.2.2 --foreman-proxy-tftp true --foreman-proxy-tftp-managed true --foreman-proxy-tftp-servername 192.0.2.3",
"satellite-installer --foreman-proxy-dhcp false --foreman-proxy-dns false --foreman-proxy-tftp false",
"Option 66: IP address of Satellite or Capsule Option 67: /pxelinux.0",
"cp mailca.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust enable update-ca-trust",
"mkdir /root/satellite_cert",
"openssl genrsa -out /root/satellite_cert/satellite_cert_key.pem 4096",
"[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name x509_extensions = usr_cert prompt = no [ req_distinguished_name ] CN = satellite.example.com [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [ usr_cert ] basicConstraints=CA:FALSE nsCertType = client, server, email keyUsage = nonRepudiation, digitalSignature, keyEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection nsComment = \"OpenSSL Generated Certificate\" subjectKeyIdentifier=hash authorityKeyIdentifier=keyid,issuer [ alt_names ] DNS.1 = satellite.example.com",
"openssl req -new -key /root/satellite_cert/satellite_cert_key.pem \\ 1 -config /root/satellite_cert/openssl.cnf \\ 2 -out /root/satellite_cert/satellite_cert_csr.pem 3",
"katello-certs-check -c /root/satellite_cert/satellite_cert.pem \\ 1 -k /root/satellite_cert/satellite_cert_key.pem \\ 2 -b /root/satellite_cert/ca_cert_bundle.pem 3",
"Validation succeeded. To install the Red Hat Satellite Server with the custom certificates, run: satellite-installer --scenario satellite --certs-server-cert \" /root/satellite_cert/satellite_cert.pem \" --certs-server-key \" /root/satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \" /root/satellite_cert/ca_cert_bundle.pem \" To update the certificates on a currently running Red Hat Satellite installation, run: satellite-installer --scenario satellite --certs-server-cert \" /root/satellite_cert/satellite_cert.pem \" --certs-server-key \" /root/satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \" /root/satellite_cert/ca_cert_bundle.pem \" --certs-update-server --certs-update-server-ca",
"yum localinstall http:// satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm",
"subscription-manager repos --disable '*' subscription-manager repos --enable=rhel-server-rhscl-7-rpms --enable=rhel-7-server-rpms --enable=rhel-7-server-satellite-6.11-rpms",
"subscription-manager repos --disable '*' subscription-manager repos --enable=satellite-6.11-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"dnf module enable satellite:el8",
"dnf install postgresql-server postgresql-evr",
"postgresql-setup initdb",
"vi /var/lib/pgsql/data/postgresql.conf",
"listen_addresses = '*'",
"vi /var/lib/pgsql/data/pg_hba.conf",
"host all all Satellite_ip /24 md5",
"systemctl start postgresql systemctl enable postgresql",
"firewall-cmd --add-service=postgresql firewall-cmd --runtime-to-permanent",
"su - postgres -c psql",
"CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;",
"\\q",
"PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"",
"yum install rh-postgresql12-postgresql-server rh-postgresql12-syspaths rh-postgresql12-postgresql-evr",
"postgresql-setup initdb",
"vi /var/opt/rh/rh-postgresql12/lib/pgsql/data/postgresql.conf",
"listen_addresses = '*'",
"vi /var/opt/rh/rh-postgresql12/lib/pgsql/data/pg_hba.conf",
"host all all Satellite_ip /24 md5",
"systemctl start postgresql systemctl enable postgresql",
"firewall-cmd --add-service=postgresql firewall-cmd --runtime-to-permanent",
"su - postgres -c psql",
"CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;",
"\\q",
"PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"",
"satellite-installer --scenario satellite --foreman-db-host postgres.example.com --foreman-db-password Foreman_Password --foreman-db-database foreman --foreman-db-manage false --katello-candlepin-db-host postgres.example.com --katello-candlepin-db-name candlepin --katello-candlepin-db-password Candlepin_Password --katello-candlepin-manage-db false --foreman-proxy-content-pulpcore-manage-postgresql false --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password --foreman-proxy-content-pulpcore-postgresql-user pulp",
"--foreman-db-sslmode verify-full --foreman-db-root-cert <path_to_CA> --katello-candlepin-db-ssl true --katello-candlepin-db-ssl-verify true --katello-candlepin-db-ssl-ca <path_to_CA> --foreman-proxy-content-pulpcore-postgresql-ssl true --foreman-proxy-content-pulpcore-postgresql-ssl-root-ca <path_to_CA>"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_disconnected_network_environment/performing-additional-configuration |
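The SSL options in the last command of the listing above assume that the external PostgreSQL server itself accepts TLS connections. The following is a minimal sketch of the server-side configuration on Red Hat Enterprise Linux 8, not part of the official procedure: the certificate and key paths are assumptions chosen for illustration, and the host names reuse the postgres.example.com and Satellite_ip placeholders from the examples above.

# /var/lib/pgsql/data/postgresql.conf -- enable TLS (certificate paths below are illustrative assumptions)
ssl = on
ssl_cert_file = '/etc/pki/tls/certs/postgres.example.com.crt'
ssl_key_file = '/etc/pki/tls/private/postgres.example.com.key'   # must be owned by postgres and readable only by it (mode 0600)
ssl_ca_file = '/etc/pki/tls/certs/ca_cert_bundle.pem'

# /var/lib/pgsql/data/pg_hba.conf -- require TLS from the Satellite subnet instead of the plain "host" rule shown earlier
hostssl all all Satellite_ip/24 md5

# Restart PostgreSQL, then from Satellite Server confirm that an encrypted connection verifies the server certificate
systemctl restart postgresql
PGPASSWORD='Foreman_Password' psql "host=postgres.example.com dbname=foreman user=foreman sslmode=verify-full sslrootcert=<path_to_CA>" -c "SELECT 1 as ping"

If the verification query returns 1 with sslmode=verify-full, the --foreman-db-root-cert and related installer options can point at the same CA file.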
4. Authentication | 4. Authentication You must authenticate each request for the Customer Portal. To authenticate a request, generate an authentication token based on your Customer Portal username and password, then declare that token in the Authorization header of each subsequent request. For more information about requesting an authentication token, see Getting started with Red Hat APIs; a hedged example of the header usage follows below. | null | https://docs.redhat.com/en/documentation/red_hat_customer_portal/1/html/customer_portal_integration_guide/authentication
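A hedged sketch of the two steps described above follows: obtaining a token and presenting it in the Authorization header. The document describes generating the token from your Customer Portal username and password; the sketch instead shows the offline-token exchange variant, which is one common pattern for Red Hat APIs. Treat the SSO endpoint, client_id, and API URL as assumptions and confirm the exact flow in Getting started with Red Hat APIs.

# Exchange an offline token for a short-lived access token (endpoint and client_id are assumptions; verify in the linked guide)
ACCESS_TOKEN=$(curl -s -X POST "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
  -d grant_type=refresh_token \
  -d client_id=rhsm-api \
  -d refresh_token="<your_offline_token>" | jq -r .access_token)

# Declare the token in the Authorization header of each subsequent request (the URL is a placeholder, not a documented endpoint)
curl -H "Authorization: Bearer ${ACCESS_TOKEN}" "https://api.access.redhat.com/<api_endpoint>"

The access token is time-limited, so regenerate it when requests start returning 401 responses.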